
Commit 49eedcf

williamwen42 authored and Svetlana Karslioglu committed
remove speedup numbers
1 parent 5fcf772 commit 49eedcf

File tree

1 file changed (+2, -4)

intermediate_source/torch_compile_tutorial.py

Lines changed: 2 additions & 4 deletions

@@ -176,8 +176,7 @@ def evaluate(mod, inp):
 
 ######################################################################
 # And indeed, we can see that running our model with ``torch.compile``
-# results in a significant speedup. On an NVIDIA A100 GPU, we observe a
-# ~1.5x speedup. Speedup mainly comes from reducing Python overhead and
+# results in a significant speedup. Speedup mainly comes from reducing Python overhead and
 # GPU read/writes, and so the observed speedup may vary on factors such as model
 # architecture and batch size. For example, if a model's architecture is simple
 # and the amount of data is large, then the bottleneck would be
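
For context, the inference speedup this hunk describes can be checked with a small timing sketch in the spirit of the tutorial's ``evaluate(mod, inp)`` helper. The model, input shape, warmup count, and iteration count below are illustrative assumptions, not part of the commit, and the sketch requires PyTorch 2.x with a CUDA GPU.

    # Rough timing sketch (assumed setup, not from the commit): compare eager
    # vs. torch.compile inference latency for a small vision model.
    import torch
    import torchvision.models as models

    def evaluate(mod, inp):
        with torch.no_grad():
            return mod(inp)

    mod = models.resnet18().cuda()
    mod.eval()
    opt_mod = torch.compile(mod)
    inp = torch.randn(16, 3, 224, 224).cuda()

    def timed(fn, iters=10):
        # Warm up first so one-time compilation cost is not counted in the
        # steady-state measurement; CUDA events give device-side timings in ms.
        for _ in range(3):
            fn()
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        for _ in range(iters):
            fn()
        end.record()
        torch.cuda.synchronize()
        return start.elapsed_time(end) / iters

    print("eager:   ", timed(lambda: evaluate(mod, inp)), "ms/iter")
    print("compiled:", timed(lambda: evaluate(opt_mod, inp)), "ms/iter")

Because the gains come mainly from reduced Python overhead and GPU read/writes, the measured ratio will shift with model architecture and batch size, which is exactly why the commit drops the fixed numbers.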
@@ -234,8 +233,7 @@ def train(mod, data):
 ######################################################################
 # Again, we can see that ``torch.compile`` takes longer in the first
 # iteration, as it must compile the model, but in subsequent iterations, we see
-# significant speedups compared to eager. On an NVIDIA A100 GPU, we
-# observe a ~1.8x speedup.
+# significant speedups compared to eager.
 
 ######################################################################
 # Comparison to TorchScript and FX Tracing
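
Similarly, the training hunk's "first iteration takes longer, then speeds up" behavior can be observed with a minimal per-step timing sketch matching the ``train(mod, data)`` signature in the hunk header. The toy model, optimizer, and synthetic data below are assumptions for illustration only.

    # Assumed training-loop sketch: time each step of a forward + backward +
    # optimizer update on a compiled model; step 0 includes compile time.
    import time
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(),
                          nn.Linear(1024, 10)).cuda()
    opt_model = torch.compile(model)  # shares parameters with `model`
    optimizer = torch.optim.Adam(model.parameters())
    loss_fn = nn.CrossEntropyLoss()

    def train(mod, data):
        x, y = data
        optimizer.zero_grad()
        loss = loss_fn(mod(x), y)
        loss.backward()
        optimizer.step()

    data = (torch.randn(256, 1024).cuda(),
            torch.randint(0, 10, (256,)).cuda())

    for step in range(5):
        torch.cuda.synchronize()
        t0 = time.time()
        train(opt_model, data)
        torch.cuda.synchronize()
        print(f"step {step}: {time.time() - t0:.4f}s")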
