@@ -1,22 +1,18 @@
 """
 PyTorch Profiler With TensorBoard
 ====================================
-This recipe demonstrates how to use PyTorch Profiler
+This tutorial demonstrates how to use the TensorBoard plugin with PyTorch Profiler
 to detect performance bottlenecks of the model.
 
-.. note::
-    PyTorch 1.8 introduces the new API that will replace the older profiler API
-    in the future releases. Check the new API at `this page <https://pytorch.org/docs/master/profiler.html>`__.
-
 Introduction
 ------------
 PyTorch 1.8 includes an updated profiler API capable of
 recording the CPU-side operations as well as the CUDA kernel launches on the GPU side.
 The profiler can visualize this information
 in the TensorBoard plugin and provide analysis of the performance bottlenecks.
 
-In this recipe, we will use a simple ResNet model to demonstrate how to
-use profiler to analyze model performance.
+In this tutorial, we will use a simple ResNet model to demonstrate how to
+use the TensorBoard plugin to analyze model performance.
 
 Setup
 -----
@@ -98,7 +94,7 @@ def train(data):
 #
 # - ``schedule`` - callable that takes step (int) as a single parameter
 #   and returns the profiler action to perform at each step;
-#   In this example with wait=1, warmup=1, active=5,
+#   In this example with ``wait=1, warmup=1, active=5``,
 #   profiler will skip the first step/iteration,
 #   start warming up on the second,
 #   record the following five iterations,
@@ -108,7 +104,7 @@ def train(data):
 # During ``warmup`` steps, the profiler starts profiling but does not record any events.
 # This reduces the profiling overhead:
 # the overhead at the beginning of profiling is high and can easily skew the profiling result.
-# During ``active`` steps, the profiler works and record events.
+# During ``active`` steps, the profiler works and records events.
 # - ``on_trace_ready`` - callable that is called at the end of each cycle;
 #   In this example we use ``torch.profiler.tensorboard_trace_handler`` to generate result files for TensorBoard.
 #   After profiling, result files will be saved into the ``./log/resnet18`` directory.
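The wait/warmup/active cycle described in this hunk can be sketched as a small, plain-Python state machine. This is a hypothetical re-implementation for illustration only (the real logic lives in `torch.profiler.schedule`; the function name `profiler_action` and the string labels are made up here):

```python
# Hypothetical sketch of the step-to-action mapping described above.
# NOT the torch.profiler implementation, just an illustration of the cycle.
def profiler_action(step, wait=1, warmup=1, active=5):
    """Map a 0-based step index to the action taken within each repeating cycle."""
    cycle = wait + warmup + active   # 7 steps per cycle in this example
    pos = step % cycle
    if pos < wait:
        return "skip"                # profiler idle, step not profiled
    if pos < wait + warmup:
        return "warmup"              # profiler running, events discarded
    return "record"                  # events recorded into the trace

print([profiler_action(s) for s in range(7)])
# ['skip', 'warmup', 'record', 'record', 'record', 'record', 'record']
```

With these parameters one full cycle is 7 steps, which is why the training loop below stops at `step >= 7`: the first step is skipped, the second warms up, and the next five are recorded.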
@@ -124,7 +120,7 @@ def train(data):
         if step >= 7:
             break
         train(batch_data)
-        prof.step()
+        prof.step()  # Need to call this at the end of each step to notify the profiler of the step boundary.
 
 
 ######################################################################
@@ -228,15 +224,14 @@ def train(data):
 # .. image:: ../../_static/img/profiler_trace_view2.png
 #    :scale: 25 %
 #
-# From the above view, we can find the event of ``enumerate(DataLoader)`` is shortened,
+# From the above view, we can see that the runtime of ``enumerate(DataLoader)`` is reduced,
 # and the GPU utilization is increased.
 
 ######################################################################
 # Learn More
 # ----------
 #
-# Take a look at the following recipes/tutorials to continue your learning:
+# Take a look at the following documents to continue your learning:
 #
 # - `Pytorch TensorBoard Profiler github <https://github.com/pytorch/kineto/tree/master/tb_plugin>`_
-# - `Pytorch Profiler <https://pytorch.org/tutorials/recipes/recipes/profiler_recipe.html>`_
-# - `Profiling Your Pytorch Module <https://pytorch.org/tutorials/beginner/profiler.html>`_ tutorial
+# - `torch.profiler API <https://pytorch.org/docs/master/profiler.html>`_