_posts/2022-3-10-running-pytorch-models-on-jetson-nano.md
18 additions & 15 deletions
@@ -1,7 +1,7 @@
---
layout: blog_detail
title: 'Running PyTorch Models on Jetson Nano'
-author: Team PyTorch
+author: Jeff Tang, Hamid Shojanazeri, Geeta Chauhan
featured-img: ''
---
@@ -129,7 +129,10 @@ Although Jetson Inference includes models already converted to the TensorRT engi
### Using TensorRT

[TensorRT](https://docs.nvidia.com/deeplearning/tensorrt/) is a high-performance inference framework from Nvidia. Jetson Nano supports TensorRT via the JetPack SDK, included in the SD card image used to set up Jetson Nano. To confirm that TensorRT is already installed on the Nano, run `dpkg -l | grep -i tensorrt`:

Theoretically, TensorRT can be used to “take a trained PyTorch model and optimize it to run more efficiently during inference on an NVIDIA GPU.” Follow the instructions and code in the [notebook](https://github.com/NVIDIA/TensorRT/blob/master/quickstart/IntroNotebooks/4.%20Using%20PyTorch%20through%20ONNX.ipynb) to see how to use PyTorch with TensorRT through ONNX on a torchvision ResNet50 model:
@@ -179,15 +182,15 @@ You can also use the docker image described in the section *Using Jetson Inferen
The official [YOLOv5](https://github.com/ultralytics/yolov5) repo is used to run the PyTorch YOLOv5 model on Jetson Nano. After logging in to Jetson Nano, follow the steps below:

1. Get the repo and install what’s required:

   ```
   git clone https://github.com/ultralytics/yolov5
   cd yolov5
   pip install -r requirements.txt
   ```

2. Run `python3 detect.py`, which by default uses the PyTorch yolov5s.pt model. You should see something like:
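As an alternative to `detect.py`, the same model can be driven from Python via `torch.hub` (network access is assumed, and the image URL below is only an example):

```python
import torch

# Load YOLOv5s through torch.hub; this pulls in the ultralytics/yolov5
# repo under the hood, so the same requirements must be installed.
model = torch.hub.load("ultralytics/yolov5", "yolov5s")

# Run inference on an example image and print a detection summary.
results = model("https://ultralytics.com/images/zidane.jpg")
results.print()
```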
Using the same test files used in the PyTorch iOS YOLOv5 demo app or Android YOLOv5 demo app, you can compare the results of running the YOLOv5 PyTorch model on mobile devices with those on Jetson Nano:
<div style="display: flex">
-  <img src="/assets/images/sota/blog-2022-3-10-using-pytorch-1.png" alt="PyTorch YOLOv5 on Jetson Nano, example with a dog" width="50%">
-  <img src="/assets/images/sota/blog-2022-3-10-using-pytorch-2.png" alt="PyTorch YOLOv5 on Jetson Nano, example with a horse and a rider" width="50%">
+  <img src="{{ site.baseurl }}/assets/images/sota/blog-2022-3-10-using-pytorch-1.png" alt="PyTorch YOLOv5 on Jetson Nano, example with a dog" width="50%">
+  <img src="{{ site.baseurl }}/assets/images/sota/blog-2022-3-10-using-pytorch-2.png" alt="PyTorch YOLOv5 on Jetson Nano, example with a horse and a rider" width="50%">
</div>

**Figure 1**. *PyTorch YOLOv5 on Jetson Nano*.
<div style="display: flex">
-  <img src="/assets/images/sota/blog-2022-3-10-using-pytorch-3.png" alt="PyTorch YOLOv5 on iOS, example with a dog" width="50%">
-  <img src="/assets/images/sota/blog-2022-3-10-using-pytorch-4.png" alt="PyTorch YOLOv5 on iOS, example with a horse and a rider" width="50%">
+  <img src="{{ site.baseurl }}/assets/images/sota/blog-2022-3-10-using-pytorch-3.png" alt="PyTorch YOLOv5 on iOS, example with a dog" width="50%">
+  <img src="{{ site.baseurl }}/assets/images/sota/blog-2022-3-10-using-pytorch-4.png" alt="PyTorch YOLOv5 on iOS, example with a horse and a rider" width="50%">
</div>

**Figure 2**. *PyTorch YOLOv5 on iOS*.
<div style="display: flex">
-  <img src="/assets/images/sota/blog-2022-3-10-using-pytorch-5.png" alt="PyTorch YOLOv5 on Android, example with a dog" width="50%">
-  <img src="/assets/images/sota/blog-2022-3-10-using-pytorch-6.png" alt="PyTorch YOLOv5 on Android, example with a horse and a rider" width="50%">
+  <img src="{{ site.baseurl }}/assets/images/sota/blog-2022-3-10-using-pytorch-5.png" alt="PyTorch YOLOv5 on Android, example with a dog" width="50%">
+  <img src="{{ site.baseurl }}/assets/images/sota/blog-2022-3-10-using-pytorch-6.png" alt="PyTorch YOLOv5 on Android, example with a horse and a rider" width="50%">
</div>

**Figure 3**. *PyTorch YOLOv5 on Android*.
@@ -251,16 +254,16 @@ But if you just need to run some common computer vision models on Jetson Nano us