`_mobile/ios.md` (8 additions & 3 deletions)
@@ -16,6 +16,11 @@ To get started with PyTorch on iOS, we recommend exploring the following [HelloW
HelloWorld is a simple image classification application that demonstrates how to use PyTorch C++ libraries on iOS. The code is written in Swift and uses Objective-C as a bridge.
+
+### Requirements
+
+- Xcode 11.0 or above
+- iOS 12.0 or above
### Model Preparation
Let's start with model preparation. If you are familiar with PyTorch, you probably already know how to train and save your model. If not, we are going to use a pre-trained image classification model, [MobileNet v2](https://pytorch.org/hub/pytorch_vision_mobilenet_v2/), which is already packaged in [TorchVision](https://pytorch.org/docs/stable/torchvision/index.html). To install it, run the command below.
@@ -32,7 +37,7 @@ Once we have TorchVision installed successfully, let's navigate to the HelloWorl
python trace_model.py
```
-If everything works well, we should have our model - `model.pt`generated in the `HelloWorld` folder. Now copy the model file to our application folder `HelloWorld/model`.
+If everything works well, `model.pt` should be generated and saved in the `HelloWorld/HelloWorld/model` folder.
> To find out more details about TorchScript, please visit [tutorials on pytorch.org](https://pytorch.org/tutorials/advanced/cpp_export.html)
-We first load the image from our bundle and resize it to 224x224. Then we call this `normalized()` category method to normalized the pixel buffer. Let's take a closer look at the code below.
+We first load the image from our bundle and resize it to 224x224. Then we call this `normalized()` category method to normalize the pixel buffer. Let's take a closer look at the code below.
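For reference, the arithmetic this normalization step performs can be sketched in Python. The ImageNet mean/std constants below are the standard TorchVision values and are an assumption about what `normalized()` uses:

```python
import numpy as np

# Hypothetical Python counterpart of the Swift normalized() method:
# scale an HWC uint8 RGB image to [0, 1], apply per-channel ImageNet
# mean/std (assumed values), and flatten to the planar CHW float
# buffer of length w * h * 3 that the traced model expects.
def normalize(pixels: np.ndarray) -> np.ndarray:
    mean = np.array([0.485, 0.456, 0.406])  # assumed ImageNet statistics
    std = np.array([0.229, 0.224, 0.225])
    scaled = pixels.astype(np.float64) / 255.0    # [0, 255] -> [0, 1]
    normalized = (scaled - mean) / std            # broadcast over channels
    return normalized.transpose(2, 0, 1).ravel()  # HWC -> CHW, flatten

buffer = normalize(np.zeros((224, 224, 3), dtype=np.uint8))
```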
```swift
var normalizedBuffer: [Float32] = [Float32](repeating: 0, count: w * h * 3)
@@ -82,7 +87,7 @@ The code might look weird at first glance, but it’ll make sense once we unders
#### TorchScript Module
-Now that we have preprocessed our input data and we have a pre-trained TorchScript model, the next step is to use them to run predication. To do that, we'll first load our model into the application.
+Now that we have preprocessed our input data and we have a pre-trained TorchScript model, the next step is to use them to run prediction. To do that, we'll first load our model into the application.