pt mobile script and optimize recipe #1193


Merged
merged 5 commits into pytorch:1.7-release on Oct 25, 2020

Conversation

jeffxtang
Contributor

Script and Optimize for Mobile recipe for PyTorch Mobile
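
For orientation, the core flow the recipe describes, scripting a model and applying mobile optimizations before saving it for the mobile runtime, looks roughly like the sketch below; the model choice and file name are illustrative, not the recipe's exact code:

import torch
import torchvision
from torch.utils.mobile_optimizer import optimize_for_mobile

# A quantized MobileNetV2 is a typical starting point; any model works.
model = torchvision.models.quantization.mobilenet_v2(pretrained=True, quantize=True).eval()

# Convert to TorchScript (tracing here; scripting is discussed further below).
torchscript_model = torch.jit.trace(model, torch.rand(1, 3, 224, 224))

# Apply mobile-specific graph optimizations and save for the mobile runtime.
optimized_model = optimize_for_mobile(torchscript_model)
torch.jit.save(optimized_model, "mobilenet_v2_mobile.pt")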

prepare_save(model_fused, True)


The outputs of the original model and its fused version will be:


I had to read this line twice - when I see "outputs of the model", I think of predictions, but this is referring to the outputs of the print() statements above. For clarity, I'd move the two print statements to the end of the code section above, and change "outputs" to "structures" or something similar.
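
To make the comparison concrete, here is a minimal sketch of fusing a model and printing its structure; the toy Sequential model is illustrative and not the recipe's code, which works on a real model and uses a `prepare_save` helper:

import torch
import torch.nn as nn

# A toy model standing in for the recipe's model.
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU()).eval()

# Fuse Conv2d + BatchNorm2d + ReLU into a single module for faster inference.
model_fused = torch.quantization.fuse_modules(model, [["0", "1", "2"]])

# Printing both side by side shows the structural difference the recipe refers to.
print(model)
print(model_fused)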


import torchvision

model_quantized = torchvision.models.quantization.mobilenet_v2(pretrained=True, quantize=True)


Does it make sense to demonstrate post-training quantization as a separate step? This would be valuable in the case that the user's existing model is not already quantized.

jeffxtang (Contributor, Author):


Yes, it does, but the Quantization recipe covers that information in detail.
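
For readers starting from a float model, the separate post-training static quantization step covered in the Quantization recipe looks roughly like this sketch; the backend choice and calibration input are placeholders, not the recipe's exact code:

import torch
import torchvision

# Start from a float model instead of the pre-quantized torchvision variant.
model = torchvision.models.mobilenet_v2(pretrained=True).eval()

# 'qnnpack' targets ARM/mobile; 'fbgemm' would be used on x86 servers.
torch.backends.quantized.engine = "qnnpack"
model.qconfig = torch.quantization.get_default_qconfig("qnnpack")

# Insert observers, calibrate with representative data, then convert.
model_prepared = torch.quantization.prepare(model)
model_prepared(torch.rand(1, 3, 224, 224))  # stand-in for real calibration data
model_quantized = torch.quantization.convert(model_prepared)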

import torch

dummy_input = torch.rand(1, 3, 224, 224)
torchscript_model = torch.jit.trace(model_quantized, dummy_input)


It would be good to warn the user that torch.jit.trace() only captures the code path executed during the trace, so it will not work properly for models that include decision branches.
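
A small sketch of the issue, using a made-up module with a data-dependent branch: torch.jit.script preserves both branches, while torch.jit.trace bakes in only the branch taken for the example input (and typically emits a TracerWarning):

import torch
import torch.nn as nn

class Branchy(nn.Module):
    # A toy module whose output depends on a runtime condition.
    def forward(self, x):
        if x.sum() > 0:
            return x * 2
        return x - 1

model = Branchy().eval()

# trace() records only the path taken for this positive-sum input,
# so the other branch is missing from the traced graph.
traced_model = torch.jit.trace(model, torch.ones(2, 2))

# script() compiles the Python control flow itself, keeping both branches.
scripted_model = torch.jit.script(model)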


import torchvision

model_quantized = torchvision.models.quantization.mobilenet_v2(pretrained=True, quantize=True)


(I mentioned this separately in the Android recipe) Does it make sense to demonstrate post-training quantization as a separate step? This would be valuable in the case that the user's existing model is not already quantized.

import torch

dummy_input = torch.rand(1, 3, 224, 224)
torchscript_model = torch.jit.trace(model_quantized, dummy_input)


(I mentioned this separately in the Android recipe) It would be good to warn the user that torch.jit.trace() only captures the code path executed during the trace, so it will not work properly for models that include decision branches.


* Run `pod install` from the Terminal and then open your project's xcworkspace file;

* Drag and drop the two files `TorchModule.h` and `TorchModule.mm` into your project. If your project is Swift-based, a message box titled "Would you like to configure an Objective-C bridging header?" will appear; click the "Create Bridging Header" button to create a Swift-to-Objective-C bridging header file, and add `#import "TorchModule.h"` to the header file `<your_project_name>-Bridging-Header.h`;


Does this flow change significantly for the case where the user is working with an existing project that already contains a bridging header?

jeffxtang (Contributor, Author):


No, except that they won't see the prompt (since a bridging header file already exists). I plan to cover the existing-project case in a future demo app.

brianjo merged commit ae06375 into pytorch:1.7-release on Oct 25, 2020
brianjo added a commit that referenced this pull request Oct 27, 2020
* [iOS][GPU] Add iOS GPU workflow (#1200)

* pt mobile script and optimize recipe (#1193)

* pt mobile script and optimize recipe

* 1 pt mobile new recipes summary and 5 recipes

* updated recipes_index.rst

* thumbnail png fix for ios recipe in recipes_index.rst

* edits based on feedback

* Updating 1.7 branch (#1205)

* Update event tracking (#1188)

* Update beginner_source/audio_preprocessing_tutorial.py (#1199)

* Typo in beginner_source/audio_preprocessing_tutorial.py

Typo in beginner_source/audio_preprocessing_tutorial.py

fron > from

* update title.

* fix file access.

Co-authored-by: JuHyuk Park <creduo@gmail.com>

* Update audio_preprocessing_tutorial.py (#1202)

Adds a comment for running this tutorial in Google Colab.

Co-authored-by: Pat Mellon <16585245+patmellon@users.noreply.github.com>
Co-authored-by: Vincent QB <vincentqb@users.noreply.github.com>
Co-authored-by: JuHyuk Park <creduo@gmail.com>

Co-authored-by: Tao Xu <taox@fb.com>
Co-authored-by: Jeff Tang <jeffxtang@fb.com>
Co-authored-by: Pat Mellon <16585245+patmellon@users.noreply.github.com>
Co-authored-by: Vincent QB <vincentqb@users.noreply.github.com>
Co-authored-by: JuHyuk Park <creduo@gmail.com>
rodrigo-techera pushed a commit to Experience-Monks/tutorials that referenced this pull request Nov 29, 2021