Commit a299e79

[tutorial][vulkan] Vulkan user workflow tutorial (#1187)
Co-authored-by: Brian Johnson <brianjo@fb.com>
1 parent 33d9583 commit a299e79

2 files changed: +247 -0 lines changed

prototype_source/README.txt

Lines changed: 4 additions & 0 deletions

@@ -19,3 +19,7 @@ Prototype Tutorials
 5. torchscript_freezing.py
    Model Freezing in TorchScript
    https://github.com/pytorch/tutorials/blob/master/prototype_source/torchscript_freezing.py
+
+6. vulkan_workflow.rst
+   Vulkan Backend User Workflow
+   https://pytorch.org/tutorials/intermediate/vulkan_workflow.html

prototype_source/vulkan_workflow.rst

Lines changed: 243 additions & 0 deletions
@@ -0,0 +1,243 @@
PyTorch Vulkan Backend User Workflow
====================================

**Author**: `Ivan Kobzarev <https://github.com/IvanKobzarev>`_

Introduction
------------
PyTorch 1.7 supports the ability to run model inference on GPUs that support the Vulkan graphics and compute API. The primary target devices are mobile GPUs on Android devices. The Vulkan backend can also be used on Linux, Mac, and Windows desktop builds to use Vulkan devices like Intel integrated GPUs. This feature is in the prototype stage and is subject to change.

Building PyTorch with Vulkan backend
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Vulkan backend is not included by default. The main switch to include the Vulkan backend is the CMake option ``USE_VULKAN``, which can be set by the environment variable ``USE_VULKAN``.

To use PyTorch with the Vulkan backend, we need to build it from source with additional settings. Check out the PyTorch source code from the GitHub master branch.

Optional usage of vulkan wrapper
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

By default, the Vulkan library will be loaded at runtime using the vulkan_wrapper library. If you specify the environment variable ``USE_VULKAN_WRAPPER=0``, libvulkan will be linked directly.

Desktop build
^^^^^^^^^^^^^

Vulkan SDK
^^^^^^^^^^
Download the VulkanSDK from https://vulkan.lunarg.com/sdk/home and set the environment variable ``VULKAN_SDK``.

Unpack the VulkanSDK to the ``VULKAN_SDK_ROOT`` folder and install the VulkanSDK following the VulkanSDK instructions for your system.

For Mac:

::

  cd $VULKAN_SDK_ROOT
  source setup-env.sh
  sudo python install_vulkan.py


Building PyTorch:

For Linux:

::

  cd $PYTORCH_ROOT
  USE_VULKAN=1 USE_VULKAN_SHADERC_RUNTIME=1 USE_VULKAN_WRAPPER=0 python setup.py install

For Mac:

::

  cd $PYTORCH_ROOT
  USE_VULKAN=1 USE_VULKAN_SHADERC_RUNTIME=1 USE_VULKAN_WRAPPER=0 MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py install

After a successful build, open another terminal and verify the version of the installed PyTorch:

::

  import torch
  print(torch.__version__)

At the time of writing this recipe, the version is 1.8.0a0+41237a4. You might see a different number depending on when you check out the code from master, but it should be greater than 1.7.0.

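In the same Python session you can also check that the Vulkan backend was actually compiled in, using ``torch.is_vulkan_available()`` from the Python API described later in this tutorial (a minimal sketch, assuming the build above succeeded):

::

  import torch
  # True only if PyTorch was built with USE_VULKAN=1 and a Vulkan device/context
  # can be initialized on this machine.
  print(torch.is_vulkan_available())
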

Android build
^^^^^^^^^^^^^

To build LibTorch for Android with the Vulkan backend for the specified ``ANDROID_ABI``:

::

  cd $PYTORCH_ROOT
  ANDROID_ABI=arm64-v8a USE_VULKAN=1 sh ./scripts/build_android.sh

To prepare the pytorch_android AARs that you can use directly in your app:

::

  cd $PYTORCH_ROOT
  USE_VULKAN=1 sh ./scripts/build_pytorch_android.sh


Model preparation
-----------------

Install torchvision to get the default pretrained float model:

::

  pip install torchvision

Python script to save the pretrained mobilenet_v2 model to a file:

::

  import torch
  import torchvision

  model = torchvision.models.mobilenet_v2(pretrained=True)
  model.eval()
  script_model = torch.jit.script(model)
  torch.jit.save(script_model, "mobilenet2.pt")

The PyTorch 1.7 Vulkan backend supports only 32-bit float operators. The default model needs an additional step that fuses and optimizes operators for the Vulkan backend:

::

  from torch.utils.mobile_optimizer import optimize_for_mobile
  script_model_vulkan = optimize_for_mobile(script_model, backend='Vulkan')
  torch.jit.save(script_model_vulkan, "mobilenet2-vulkan.pt")

The resulting model can be used only on the Vulkan backend, as it contains operators specific to the Vulkan backend.

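As an optional sanity check (a minimal sketch, not part of the original recipe), you can reload the saved file and print its graph to inspect the operators that ``optimize_for_mobile`` produced:

::

  import torch

  # Reload the module that was just saved and inspect the graph of its forward method.
  reloaded = torch.jit.load("mobilenet2-vulkan.pt")
  print(reloaded.graph)
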

Using Vulkan backend in code
----------------------------

C++ API
-------

::

  at::is_vulkan_available();
  auto tensor = at::rand({1, 2, 2, 3}, at::device(at::kCPU).dtype(at::kFloat));
  auto tensor_vulkan = tensor.vulkan();
  auto module = torch::jit::load("$PATH");
  // "inputs" is a std::vector<torch::jit::IValue> holding the Vulkan input tensor(s).
  auto tensor_output_vulkan = module.forward(inputs).toTensor();
  auto tensor_output = tensor_output_vulkan.cpu();

The ``at::is_vulkan_available()`` function tries to initialize the Vulkan backend; if a Vulkan device is successfully found and a context is created, it returns true, and false otherwise.

The ``.vulkan()`` function called on a Tensor copies the tensor to the Vulkan device; for operators called with this tensor as input, the operator will run on the Vulkan device and its output will be on the Vulkan device as well.

The ``.cpu()`` function called on a Vulkan tensor copies its data to a (default) CPU tensor.

Operators called with a tensor on a Vulkan device as an input will be executed on the Vulkan device. If an operator is not supported for the Vulkan backend, an exception will be thrown.

List of supported operators:

::

  _adaptive_avg_pool2d
  _cat
  add.Scalar
  add.Tensor
  add_.Tensor
  addmm
  avg_pool2d
  clamp
  convolution
  empty.memory_format
  empty_strided
  hardtanh_
  max_pool2d
  mean.dim
  mm
  mul.Scalar
  relu_
  reshape
  select.int
  slice.Tensor
  transpose.int
  transpose_
  unsqueeze
  upsample_nearest2d
  view

These operators allow using torchvision models for image classification on the Vulkan backend.

Python API
----------

``torch.is_vulkan_available()`` is exposed to the Python API.

``tensor.to(device='vulkan')`` works like ``.vulkan()``, moving the tensor to the Vulkan device.

``.vulkan()`` is not yet exposed to the Python API at the moment of writing this tutorial, but it is planned to be.
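
A minimal sketch of these Python calls, assuming a PyTorch build with ``USE_VULKAN=1`` (``torch.is_vulkan_available()`` returns False otherwise):

::

  import torch

  if torch.is_vulkan_available():
      cpu_tensor = torch.rand(1, 2, 2, 3)
      # Same effect as the C++ .vulkan() call: copy the tensor to the Vulkan device.
      vulkan_tensor = cpu_tensor.to(device="vulkan")
      # Operators called with vulkan_tensor as input run on the Vulkan device
      # (add.Tensor is in the supported operator list above).
      result = vulkan_tensor + vulkan_tensor
      # Copy the result back to a (default) CPU tensor for inspection.
      print(result.cpu())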

Android Java API
----------------

For the Android Java API, to run the model on the Vulkan backend we have to specify this during model loading:

::

  import java.nio.FloatBuffer;

  import org.pytorch.Device;
  import org.pytorch.IValue;
  import org.pytorch.Module;
  import org.pytorch.Tensor;

  Module module = Module.load("$PATH", Device.VULKAN);
  FloatBuffer buffer = Tensor.allocateFloatBuffer(1 * 3 * 224 * 224);
  Tensor inputTensor = Tensor.fromBlob(buffer, new long[]{1, 3, 224, 224});
  Tensor outputTensor = module.forward(IValue.from(inputTensor)).toTensor();

In this case, all inputs will be transparently copied from the CPU to the Vulkan device, the model will be run on the Vulkan device, and the output will be copied transparently back to the CPU.

An example of using the Vulkan backend can be found in the test application within the PyTorch repository:
https://github.com/pytorch/pytorch/blob/master/android/test_app/app/src/main/java/org/pytorch/testapp/MainActivity.java#L133

Building android test app with Vulkan
-------------------------------------

1. Build pytorch android with Vulkan backend for all android ABIs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

::

  cd $PYTORCH_ROOT
  USE_VULKAN=1 sh ./scripts/build_pytorch_android.sh

Or, if you need only a specific ABI, you can pass it as an argument:

::

  cd $PYTORCH_ROOT
  USE_VULKAN=1 sh ./scripts/build_pytorch_android.sh $ANDROID_ABI

2. Add vulkan model to test application assets
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Add the prepared model ``mobilenet2-vulkan.pt`` to the test application assets:

::

  cp mobilenet2-vulkan.pt $PYTORCH_ROOT/android/test_app/app/src/main/assets/


3. Build and install test application to connected android device
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

::

  cd $PYTORCH_ROOT
  gradle -p android test_app:installMbvulkanLocalBaseDebug

After successful installation, the application with the name 'MBQ' can be launched on the device.

Testing models without uploading to android device
--------------------------------------------------

Software implementations of Vulkan (e.g. https://swiftshader.googlesource.com/SwiftShader ) can be used to test if a model can be run using the PyTorch Vulkan backend (e.g. check if all model operators are supported).
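
For example, a minimal sketch of pointing the Vulkan loader at a software driver from Python; the ICD manifest path here is hypothetical and depends on where SwiftShader was built, while ``VK_ICD_FILENAMES`` is the standard Vulkan loader variable:

::

  import os
  # Hypothetical path to SwiftShader's Vulkan ICD manifest; adjust to your build.
  os.environ["VK_ICD_FILENAMES"] = "/path/to/swiftshader/build/vk_swiftshader_icd.json"

  import torch
  # With the software implementation picked up, this should return True even without a GPU.
  print(torch.is_vulkan_available())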
