Commit e5bf61a

add comment on using torch.utils.benchmark
1 parent 026a88e

File tree

2 files changed (+6, -0 lines)

intermediate_source/torch_compile_tutorial.rst

Lines changed: 3 additions & 0 deletions
@@ -184,6 +184,9 @@ much as possible, and so we chose ``"reduce-overhead"``. For your own models,
 you may need to experiment with different modes to maximize speedup. You can
 read more about modes `here <https://pytorch.org/get-started/pytorch-2.0/#user-experience>`__.
 
+We also note that for general PyTorch benchmarking, we recommend using ``torch.utils.benchmark``.
+We wrote our own timing functions in this tutorial to show ``torch.compile``'s compilation latency.
+
 Now, let's consider comparing training.
 
 .. code-block:: python

intermediate_source/torch_compile_tutorial_.py

Lines changed: 3 additions & 0 deletions
@@ -187,6 +187,9 @@ def evaluate(mod, inp):
 # you may need to experiment with different modes to maximize speedup. You can
 # read more about modes `here <https://pytorch.org/get-started/pytorch-2.0/#user-experience>`__.
 #
+# We also note that for general PyTorch benchmarking, we recommend using ``torch.utils.benchmark``.
+# We wrote our own timing functions in this tutorial to show ``torch.compile``'s compilation latency.
+#
 # Now, let's consider comparing training.
 
 model = init_model()

0 commit comments
