4. Build and install PyTorch (Refer to [PyTorch guide](https://github.com/pytorch/pytorch#install-pytorch) for more details)

```bash
cd ${pytorch_directory}
python setup.py install
```

### Install Intel Extension for PyTorch from Source

Install dependencies

```bash
pip install lark-parser hypothesis
```

Install the extension

```bash
cd ${intel_extension_for_pytorch_directory}
python setup.py install
```

## Getting Started
If you want to explore Intel Extension for PyTorch, you just need to convert the model and input tensors to the extension device; the extension is then enabled automatically. For example, the following code shows a model without the extension.

```python
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear = nn.Linear(4, 5)

    def forward(self, input):
        return self.linear(input)

input = torch.randn(2, 4)
model = Model()
res = model(input)
```

You just need to transform the above Python script as follows, and the extension will then be enabled and accelerate the computation automatically.

```python
import torch
import torch.nn as nn

# Import Extension
import intel_pytorch_extension as ipex

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear = nn.Linear(4, 5)

    def forward(self, input):
        return self.linear(input)

# Convert the input tensor to the Extension device
input = torch.randn(2, 4).to(ipex.DEVICE)
# Convert the model to the Extension device
model = Model().to(ipex.DEVICE)

res = model(input)
```
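
The result tensor now lives on the extension device. As a usage sketch (assuming the extension's tensors follow standard PyTorch device semantics, which is an assumption rather than documented behavior here), you can copy it back to the host for further processing:

```python
# Hypothetical follow-up: inspect the device and copy the result back to the CPU.
# Assumes standard PyTorch tensor semantics apply to the extension device.
print(res.device)
res_cpu = res.to('cpu')
print(res_cpu)
```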

### Automatic Mixed Precision

In addition, Intel Extension for PyTorch supports mixed precision, which means that some operators of a model may run with Float32 while other operators run with BFloat16 or INT8.

Traditionally, if you want to run a model with a low-precision type, you have to convert the parameters and the input tensors to that type manually, and if the model contains operators that do not support the low-precision type, you have to convert those back to Float32, round after round, until the model runs correctly.

The extension simplifies this: you just need to enable auto mixed precision as follows, and you can then benefit from low precision. Currently, the extension only supports BFloat16.
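
The snippet below is a minimal sketch of what that enablement could look like. The `enable_auto_mix_precision` call and its `mixed_dtype` parameter are assumptions based on the extension's naming conventions, not confirmed API; check the extension's documentation for the exact function.

```python
import torch
import torch.nn as nn
import intel_pytorch_extension as ipex

# Hypothetical call: enable automatic mixed precision with BFloat16.
# The function name and mixed_dtype parameter are assumptions; consult the
# extension's documentation for the exact API.
ipex.enable_auto_mix_precision(mixed_dtype=torch.bfloat16)

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear = nn.Linear(4, 5)

    def forward(self, input):
        return self.linear(input)

# The model and input are converted to the extension device as before;
# supported operators then run in BFloat16 automatically.
input = torch.randn(2, 4).to(ipex.DEVICE)
model = Model().to(ipex.DEVICE)
res = model(input)
```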