
introduction of intel extension for pytorch #1702



Merged

Conversation

jingxu10 (Contributor) commented Oct 5, 2021

PR to add an introduction to Intel Extension for PyTorch to the PyTorch tutorials.

netlify bot commented Oct 5, 2021

✔️ Deploy Preview for pytorch-tutorials-preview ready!

🔨 Explore the source changes: 7b0dc38

🔍 Inspect the deploy log: https://app.netlify.com/sites/pytorch-tutorials-preview/deploys/615e9072f7792800074087b3

😎 Browse the preview: https://deploy-preview-1702--pytorch-tutorials-preview.netlify.app/recipes/recipes/intel_extension_for_pytorch

malfet requested a review from chauhang October 5, 2021 16:54
@Jianhui-Li

@vitaly-fedyunin @gottbrath This is the IPEX tutorial.

malfet (Contributor) left a comment


Two minor typos

yqhu commented Oct 6, 2021

I tested the inference examples (Imperative and Script modes; Float32 and BFloat16) with IPEX built from master, and model = ipex.optimize(model, ...) triggered the assertion at https://github.com/intel/intel-extension-for-pytorch/blob/master/torch_ipex_py/utils.py#L141

Traceback (most recent call last):
  File "inf_fp16.py", line 17, in 
    model = ipex.optimize(model, dtype=torch.bfloat16)
  File "/home/ubuntu/anaconda3/envs/ipex/lib/python3.8/site-packages/intel_extension_for_pytorch/utils.py", line 141, in optimize
    assert optimizer is not None, "The optimizer should be given for training mode"
AssertionError: The optimizer should be given for training mode

I was curious why model.training would be true for inference. Or did I build from the wrong branch of IPEX?

The build process generated libintel_pex.so, so I had to change libintel-ext-pt-cpu.so to libintel_pex.so in CMakeLists.txt for the C++ example to compile. Again, could it be that I'm using the wrong branch?

jingxu10 (Contributor, Author) commented Oct 6, 2021

Hi @yqhu, model.eval() is required if it is an inference workload. Since we only want to show the code changes relative to the original PyTorch code, I didn't include fully functioning code snippets. Please let me know if you would prefer to have the full code in the tutorial.
We have been discussing the C++ dynamic library name recently. The library name in the current master branch will be changed to libintel-ext-pt-cpu.so later.
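For readers following along, a minimal sketch of the inference usage described above (calling model.eval() before ipex.optimize) might look like the snippet below. The toy model, input shape, and the torch.cpu.amp.autocast context are illustrative assumptions, not code taken from this PR or the tutorial:

import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex

# Toy model used purely for illustration; any CPU nn.Module would do.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()  # put the model in inference mode so ipex.optimize does not expect an optimizer

# Same call as in the traceback above, now applied to a model in eval mode.
model = ipex.optimize(model, dtype=torch.bfloat16)

# Run inference under CPU autocast so the float32 input is handled as BFloat16.
with torch.no_grad(), torch.cpu.amp.autocast():
    data = torch.rand(1, 128)
    output = model(data)
    print(output.shape)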

yqhu commented Oct 7, 2021

model.eval() won't fix it, though I understand it's still under development. Thanks.

jingxu10 (Contributor, Author) commented Oct 7, 2021

Hi @yqhu, I fixed the AssertionError: The optimizer should be given for training mode error in the latest commit. Please give it a try.

yqhu left a comment


Thanks @jingxu10.

@gottbrath (Contributor)

Just wanted to chime in here and say that I think there is value in having this tutorial be part of the PyTorch tutorial set.

malfet merged commit f6d75f8 into pytorch:master Oct 8, 2021
jingxu10 deleted the jingxu10/intel_extension_for_pytorch branch October 8, 2021 00:48
rodrigo-techera pushed a commit to Experience-Monks/tutorials that referenced this pull request Nov 29, 2021
* init for recipe for intel extension for pytorch

* update intel_extension_for_pytorch.py

* update intel_extension_for_pytorch.py for c++ part

* update c++ so file name

* Fix typos

* fixed issue for inference sample codes

Co-authored-by: michaelhsu <michaelhsu170@gmail.com>
Co-authored-by: Nikita Shulga <nikita.shulga@gmail.com>

8 participants