**Author**: `Thiago Crepaldi <https://github.com/thiagocrepaldi>`_
- .. Note ::
+ .. note::

   As of PyTorch 2.1, there are two versions of ONNX Exporter.
* ``torch.onnx.dynamo_export`` is the newest (still in beta) exporter, based on the TorchDynamo technology released with PyTorch 2.0
# 4. Visualize the ONNX model graph using `Netron <https://github.com/lutzroeder/netron>`_.
# 5. Execute the ONNX model with `ONNX Runtime`.
#
- # Note that because the ONNX exporter uses ``onnx`` and ``onnxscript`` to translate PyTorch operators into ONNX operators,
+ # Because the ONNX exporter uses ``onnx`` and ``onnxscript`` to translate PyTorch operators into ONNX operators,
# we will need to install them.

# %%
# .. code-block:: bash
@@ -85,7 +85,6 @@ def forward(self, x):
# As we can see, we didn't need any code change on our model.
# The resulting ONNX model is saved within ``torch.onnx.ExportOutput`` as a binary protobuf file.
# The exporter uses static shapes by default, so the resulting model has static dimensions.
- # In a future tutorial we are going to explore how to leverage dynamic shapes and other advanced features.
#
# We can save it to disk with the following code:
@@ -98,7 +97,7 @@ def forward(self, x):
# .. image:: ../_static/img/onnx/netron_web_ui.png
#
# Once Netron is open, we can drag and drop our ``my_image_classifier.onnx`` file into the browser or select it after
- # clicking on ` Open model` button.
+ # clicking the **Open model** button.
#
# .. image:: ../_static/img/onnx/image_clossifier_onnx_modelon_netron_web_ui.png
#
@@ -117,7 +116,7 @@ def forward(self, x):
print(f"Input length: {len(onnx_input)}")
print(f"Sample input: {onnx_input}")

- # in our example, the input is the same, but we can have more inputs
+ # In our example, the input is the same, but we can have more inputs
# than the original PyTorch model in more complex cases.
# Now we can execute the ONNX model with ONNX Runtime.
122
@@ -134,7 +133,7 @@ def to_numpy(tensor):
    return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()

# ONNX Runtime also requires the input to be a dictionary with
- # the keys being the input name and the value the Numpy tensor
+ # the keys being the input name and the value the Numpy tensor.

onnxruntime_input = {k.name: to_numpy(v) for k, v in zip(ort_session.get_inputs(), onnx_input)}
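The zip/dict-comprehension pattern above can be illustrated without ONNX Runtime installed. ``FakeNodeArg`` below is a hypothetical mock: real ``ort_session.get_inputs()`` entries expose a ``.name`` attribute in the same way, and the input names here are made up for illustration.

```python
# FakeNodeArg mimics the .name attribute of ONNX Runtime's input metadata objects.
class FakeNodeArg:
    def __init__(self, name):
        self.name = name

# Stand-ins for ort_session.get_inputs() and the exported model inputs.
session_inputs = [FakeNodeArg("input_0"), FakeNodeArg("input_1")]
onnx_input = [[1.0, 2.0], [3.0, 4.0]]

# Same pattern as the tutorial: pair each ONNX input name with its tensor.
onnxruntime_input = {k.name: v for k, v in zip(session_inputs, onnx_input)}
print(onnxruntime_input)  # {'input_0': [1.0, 2.0], 'input_1': [3.0, 4.0]}
```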
@@ -143,7 +142,7 @@ def to_numpy(tensor):
onnxruntime_outputs = ort_session.run(None, onnxruntime_input)

# The output can be a single tensor or a list of tensors, depending on the model.
- # Let's execute the PyTorch model and use it as benchmark next
+ # Let's execute the PyTorch model and use it as a benchmark next.

torch_outputs = torch_model(torch_input)

# We need to adapt the PyTorch output format to match ONNX's
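One common way to line the two formats up: ONNX Runtime always returns a list of outputs, while a PyTorch module may return a bare tensor. A minimal sketch with plain floats standing in for tensors (names are illustrative, not the tutorial's variables):

```python
# Stand-in for a PyTorch model that returned a single bare output.
torch_outputs = 0.5

# Wrap a bare output in a list so it matches ONNX Runtime's
# list-of-outputs convention and can be compared element-wise.
if not isinstance(torch_outputs, (list, tuple)):
    torch_outputs = [torch_outputs]

onnxruntime_outputs = [0.5]  # stand-in for ort_session.run(...) results
assert len(torch_outputs) == len(onnxruntime_outputs)
```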
@@ -158,5 +157,9 @@ def to_numpy(tensor):
print(f"Output length: {len(onnxruntime_outputs)}")
print(f"Sample output: {onnxruntime_outputs}")

+ # Conclusion
+ # ----------
+
# That is about it! We have successfully exported our PyTorch model to ONNX format,
- # saved it to disk, executed it with ONNX Runtime and compared its result with PyTorch's.
+ # saved the model to disk, viewed it using Netron, executed it with ONNX Runtime
+ # and finally compared its numerical results with PyTorch's.
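The final numerical comparison can be sketched without PyTorch installed: with plain floats standing in for tensors, a tolerance check plays the role that an element-wise tensor comparison (e.g. ``torch.testing.assert_close``) plays in the tutorial. All values and names here are illustrative.

```python
import math

# Stand-ins for the PyTorch and ONNX Runtime results; small numerical
# differences between the two backends are expected, exact equality is not.
torch_outputs = [0.4999999]
onnxruntime_outputs = [0.5]

# Compare each pair of outputs within a tolerance.
for torch_out, ort_out in zip(torch_outputs, onnxruntime_outputs):
    assert math.isclose(torch_out, ort_out, rel_tol=1e-4, abs_tol=1e-5)

print("PyTorch and ONNX Runtime outputs matched!")
```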