Commit eebc4d3

Author: Anna Spiridonenkova
Commit message: updating src sources, link formatting
1 parent 6e7f59e commit eebc4d3

File tree

1 file changed: +18 -15 lines changed

_posts/2022-3-10-running-pytorch-models-on-jetson-nano.md

Lines changed: 18 additions & 15 deletions
@@ -1,7 +1,7 @@
---
layout: blog_detail
title: 'Running PyTorch Models on Jetson Nano'
-author: Team PyTorch
+author: Jeff Tang, Hamid Shojanazeri, Geeta Chauhan
featured-img: ''
---

@@ -129,7 +129,10 @@ Although Jetson Inference includes models already converted to the TensorRT engi
### Using TensorRT
[TensorRT](https://docs.nvidia.com/deeplearning/tensorrt/) is a high-performance inference framework from Nvidia. Jetson Nano supports TensorRT via the Jetpack SDK, included in the SD Card image used to set up Jetson Nano. To confirm that TensorRT is already installed on the Nano, run `dpkg -l|grep -i tensorrt`:

-<img src="/assets/images/blog-2022-3-10-using-tensorrt.png" width="60%">
+
+<div class="text-center">
+<img src="{{ site.baseurl }}/assets/images/blog-2022-3-10-using-tensorrt.png" width="80%">
+</div>

Theoretically, TensorRT can be used to “take a trained PyTorch model and optimize it to run more efficiently during inference on an NVIDIA GPU.” Follow the instructions and code in the [notebook](https://github.com/NVIDIA/TensorRT/blob/master/quickstart/IntroNotebooks/4.%20Using%20PyTorch%20through%20ONNX.ipynb) to see how to use PyTorch with TensorRT through ONNX on a torchvision Resnet50 model:

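For orientation, the first step that notebook covers (exporting the torchvision ResNet50 to ONNX so TensorRT can consume it) can be sketched roughly as follows; the output file name, input shape, and opset version below are illustrative assumptions, not values taken from the notebook:

```
# Rough sketch: export a torchvision ResNet50 to ONNX for use with TensorRT.
# The file name, input shape, and opset version are illustrative assumptions.
import torch
import torchvision

model = torchvision.models.resnet50(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # one 224x224 RGB image

torch.onnx.export(
    model,
    dummy_input,
    "resnet50.onnx",           # output path (illustrative)
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)
# The resulting ONNX file can then be parsed by TensorRT (for example with
# trtexec or the TensorRT Python API) to build an optimized engine.
```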
@@ -179,15 +182,15 @@ You can also use the docker image described in the section *Using Jetson Inferen

The official [YOLOv5](https://github.com/ultralytics/yolov5) repo is used to run the PyTorch YOLOv5 model on Jetson Nano. After logging in to Jetson Nano, follow the steps below:

-1. Get the repo and install what’s required:
+1. Get the repo and install what’s required:

```
git clone https://github.com/ultralytics/yolov5
cd yolov5
pip install -r requirements.txt
```

-2. Run `python3 detect.py`, which by default uses the PyTorch yolov5s.pt model. You should see something like:
+2. Run `python3 detect.py`, which by default uses the PyTorch yolov5s.pt model. You should see something like:

```
detect: weights=yolov5s.pt, source=data/images, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False
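As a side note, the same yolov5s model that detect.py uses can also be loaded programmatically through torch.hub; a minimal sketch, assuming one of the sample images bundled in the repo's data/images folder:

```
# Rough sketch: load YOLOv5s via torch.hub instead of running detect.py.
# The image path assumes one of the sample images shipped with the yolov5 repo.
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # downloads yolov5s.pt on first use

results = model('data/images/zidane.jpg')  # run inference on a sample image
results.print()  # print detected classes and confidence scores
results.save()   # save the annotated image to disk
```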
@@ -224,20 +227,20 @@ total 1456
Using the same test files used in the PyTorch iOS YOLOv5 demo app or Android YOLOv5 demo app, you can compare the results generated with running the YOLOv5 PyTorch model on mobile devices and Jetson Nano:

<div style="display: flex">
-<img src="/assets/images/sota/blog-2022-3-10-using-pytorch-1.png" alt="PyTorch YOLOv5 on Jetson Nano, example with a dog" width="50%">
-<img src="/assets/images/sota/blog-2022-3-10-using-pytorch-2.png" alt="PyTorch YOLOv5 on Jetson Nano, example with a horse and a rider" width="50%">
+<img src="{{ site.baseurl }}/assets/images/sota/blog-2022-3-10-using-pytorch-1.png" alt="PyTorch YOLOv5 on Jetson Nano, example with a dog" width="50%">
+<img src="{{ site.baseurl }}/assets/images/sota/blog-2022-3-10-using-pytorch-2.png" alt="PyTorch YOLOv5 on Jetson Nano, example with a horse and a rider" width="50%">
</div>
**Figure 1**. *PyTorch YOLOv5 on Jetson Nano*.

<div style="display: flex">
-<img src="/assets/images/sota/blog-2022-3-10-using-pytorch-3.png" alt="PyTorch YOLOv5 on iOS, example with a dog" width="50%">
-<img src="/assets/images/sota/blog-2022-3-10-using-pytorch-4.png" alt="PyTorch YOLOv5 on iOS, example with a horse and a rider" width="50%">
+<img src="{{ site.baseurl }}/assets/images/sota/blog-2022-3-10-using-pytorch-3.png" alt="PyTorch YOLOv5 on iOS, example with a dog" width="50%">
+<img src="{{ site.baseurl }}/assets/images/sota/blog-2022-3-10-using-pytorch-4.png" alt="PyTorch YOLOv5 on iOS, example with a horse and a rider" width="50%">
</div>
**Figure 2**. *PyTorch YOLOv5 on iOS*.

<div style="display: flex">
-<img src="/assets/images/sota/blog-2022-3-10-using-pytorch-5.png" alt="PyTorch YOLOv5 on Android, example with a dog" width="50%">
-<img src="/assets/images/sota/blog-2022-3-10-using-pytorch-6.png" alt="PyTorch YOLOv5 on Android, example with a horse and a rider" width="50%">
+<img src="{{ site.baseurl }}/assets/images/sota/blog-2022-3-10-using-pytorch-5.png" alt="PyTorch YOLOv5 on Android, example with a dog" width="50%">
+<img src="{{ site.baseurl }}/assets/images/sota/blog-2022-3-10-using-pytorch-6.png" alt="PyTorch YOLOv5 on Android, example with a horse and a rider" width="50%">
</div>
**Figure 3**. *PyTorch YOLOv5 on Android*.

@@ -251,16 +254,16 @@ But if you just need to run some common computer vision models on Jetson Nano us

### References
Jetson Inference docker image details
-https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-docker.md
+[https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-docker.md](https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-docker.md)

A guide to using TensorRT on the Nvidia Jetson Nano
-https://docs.donkeycar.com/guide/robot_sbc/tensorrt_jetson_nano/
+[https://docs.donkeycar.com/guide/robot_sbc/tensorrt_jetson_nano/](https://docs.donkeycar.com/guide/robot_sbc/tensorrt_jetson_nano/)
including:

1. Use Jetson as a portable GPU device to run a NN chess engine model
-https://medium.com/@ezchess/jetson-lc0-running-leela-chess-zero-on-nvidia-jetson-a-portable-gpu-device-a213afc9c018
+[https://medium.com/@ezchess/jetson-lc0-running-leela-chess-zero-on-nvidia-jetson-a-portable-gpu-device-a213afc9c018](https://medium.com/@ezchess/jetson-lc0-running-leela-chess-zero-on-nvidia-jetson-a-portable-gpu-device-a213afc9c018)

2. A MaskEraser app using PyTorch and Torchvision, installed directly with pip
-https://github.com/INTEC-ATI/MaskEraser#install-pytorch
+[https://github.com/INTEC-ATI/MaskEraser#install-pytorch](https://github.com/INTEC-ATI/MaskEraser#install-pytorch)

-A PyTorch to TensorRT converter https://github.com/NVIDIA-AI-IOT/torch2trt
+A PyTorch to TensorRT converter [https://github.com/NVIDIA-AI-IOT/torch2trt](https://github.com/NVIDIA-AI-IOT/torch2trt)
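For context, the torch2trt converter referenced above is typically driven along these lines (a sketch following the project's README pattern; the choice of ResNet50 and the input shape are illustrative assumptions):

```
# Rough sketch of the torch2trt workflow: convert a PyTorch model into a
# TensorRT-backed module by tracing it with example input on the GPU.
# The ResNet50 model and 224x224 input shape are illustrative assumptions.
import torch
from torch2trt import torch2trt
from torchvision.models import resnet50

model = resnet50(pretrained=True).eval().cuda()
x = torch.ones((1, 3, 224, 224)).cuda()   # example input used for conversion

model_trt = torch2trt(model, [x])         # build the TensorRT engine

y = model(x)
y_trt = model_trt(x)                      # outputs should closely match y
```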
