
Commit a0b9760

spellcheck
1 parent ea1cbe6 commit a0b9760

2 files changed: +5, -3 lines changed

en-wordlist.txt

Lines changed: 3 additions & 1 deletion
@@ -82,6 +82,7 @@ FX's
 FairSeq
 Fastpath
 FakeTensor
+FakeTensors
 FFN
 FloydHub
 FloydHub's
@@ -240,6 +241,7 @@ Sohn
 Spacy
 SwiGLU
 SymInt
+SymInts
 TCP
 THP
 TIAToolbox
@@ -371,6 +373,7 @@ downsamples
 dropdown
 dtensor
 dtype
+dtypes
 duration
 elementwise
 embeddings
@@ -655,7 +658,6 @@ RecSys
 TorchRec
 sharding
 TBE
-dtype
 EBC
 sharder
 hyperoptimized

intermediate_source/torch_export_tutorial.py

Lines changed: 2 additions & 2 deletions
@@ -639,7 +639,7 @@ def forward(self, x, y):
 # with parts of user code where compilation relies on data values. In short, if the compiler requires a concrete, data-dependent value in order to proceed, it will error out, complaining that
 # FakeTensor tracing isn't providing the information required.
 #
-# Data-depdenent values appear in many places, and common sources are calls like ``item()``, ``tolist()``, or ``torch.unbind()`` that extract scalar values from tensors.
+# Data-dependent values appear in many places, and common sources are calls like ``item()``, ``tolist()``, or ``torch.unbind()`` that extract scalar values from tensors.
 # How are these values represented in the exported program? In the `Constraints/Dynamic Shapes <https://pytorch.org/tutorials/intermediate/torch_export_tutorial.html#constraints-dynamic-shapes>`_
 # section, we talked about allocating symbols to represent dynamic input dimensions.
 # The same happens here: we allocate symbols for every data-dependent value that appears in the program. The important distinction is that these are "unbacked" symbols or "unbacked SymInts",
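(For context on the tutorial text in this hunk: a minimal sketch, not part of this commit, of how a data-dependent value becomes an "unbacked SymInt" during export. The MakeZeros module name is made up, and torch._check_is_size is assumed to be available as in current torch.)

import torch
from torch.export import export

class MakeZeros(torch.nn.Module):
    def forward(self, x):
        n = x.max().item()       # data-dependent scalar: traced as an unbacked SymInt (e.g. u0)
        torch._check_is_size(n)  # assert n is a valid size so export can guard on it
        return torch.zeros(n)    # output shape depends on the runtime value of n

ep = export(MakeZeros(), (torch.tensor([3, 5, 2]),))
print(ep)  # the printed graph carries a u0-style symbol for the extracted value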
@@ -687,7 +687,7 @@ def forward(self, x, y):
 
 ######################################################################
 # Here we actually need the "hint", or the concrete value of ``a`` for the compiler to decide whether to trace ``return y + 2`` or ``return y * 5`` as the output.
-# Because we trace with FakeTensors, we don't know what ``a // 2 >= 5`` actually evaluates to, and export errors out with "Could not guard on data-dependent expression ``u0 // 2 >= 5`` (unhinted)".
+# Because we trace with FakeTensors, we don't know what ``a // 2 >= 5`` actually evaluates to, and export errors out with "Could not guard on data-dependent expression ``u0 // 2 >= 5 (unhinted)``".
 #
 # So how do we actually export this? Unlike ``torch.compile()``, export requires full graph compilation, and we can't just graph break on this. Here's some basic options:
 #
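(Likewise for the reworded error message: a rough sketch, not part of this commit, of the failing pattern and one possible rewrite with torch.cond, which traces both branches instead of requiring a concrete hint. Assumes torch.cond's (pred, true_fn, false_fn, operands) signature.)

import torch
from torch.export import export

class Branch(torch.nn.Module):
    def forward(self, x, y):
        a = x.item()  # unbacked SymInt u0: FakeTensor tracing has no concrete value
        # A plain `if a // 2 >= 5:` here fails with the quoted
        # "Could not guard on data-dependent expression u0 // 2 >= 5 (unhinted)" error;
        # torch.cond defers the decision to runtime by tracing both branches:
        return torch.cond(a // 2 >= 5, lambda y: y + 2, lambda y: y * 5, (y,))

ep = export(Branch(), (torch.tensor(10), torch.ones(3)))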
