
Commit 509a378

Update release note and known issues (#3060)

* Add release note and known issues
* update README

Authored by tye1 and jingxu10
Co-authored-by: Jing Xu <jing.xu@intel.com>
Parent: 369774f

File tree: 4 files changed, 63 additions and 47 deletions

README.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -31,10 +31,10 @@ Compilation instruction of the latest CPU code base `master` branch can be found
 You can install Intel® Extension for PyTorch\* for GPU via command below.

 ```bash
-python -m pip install torch==1.13.0a0+git6c9b55e intel_extension_for_pytorch==1.13.120+xpu -f https://developer.intel.com/ipex-whl-stable-xpu
+python -m pip install torch==2.0.1a0 torchvision==0.15.2a0 intel_extension_for_pytorch==2.0.110+xpu -f https://developer.intel.com/ipex-whl-stable-xpu
 ```

-**Note:** The patched PyTorch 1.13.1 is required to work with Intel® Extension for PyTorch\* on Intel® graphics card for now.
+**Note:** The patched PyTorch 2.0.1 is required to work with Intel® Extension for PyTorch\* on Intel® graphics card for now.

 More installation methods can be found at [GPU Installation Guide](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/installation.html).
````

docs/tutorials/installations/linux.rst

Lines changed: 0 additions & 14 deletions

```diff
@@ -300,17 +300,3 @@ You can get full usage help message by running the run file alone, as the follow
 .. code:: shell

    bash <libintel-ext-pt-name>.run
-
-Solutions to potential issues on WSL2
--------------------------------------
-
-.. list-table::
-   :widths: auto
-   :header-rows: 1
-
-   * - Issue
-     - Explanation
-   * - Building from source for Intel® Arc™ A-Series GPUs failed on WSL2 without any error thrown
-     - Your system probably does not have enough RAM, so Linux kernel's Out-of-memory killer got invoked. You can verify it by running `dmesg` on bash (WSL2 terminal). If the OOM killer had indeed killed the build process, then you can try increasing the swap-size of WSL2, and/or decreasing the number of parallel build jobs with the environment variable `MAX_JOBS` (by default, it's equal to the number of logical CPU cores. So, setting `MAX_JOBS` to 1 is a very conservative approach, which would slow things down a lot).
-   * - On WSL2, some workloads terminate with an error `CL_DEVICE_NOT_FOUND` after some time
-     - This is due to the `TDR feature <https://learn.microsoft.com/en-us/windows-hardware/drivers/display/tdr-registry-keys#tdrdelay>`_ in Windows. You can try increasing TDRDelay in your Windows Registry to a large value, such as 20 (it is 2 seconds, by default), and reboot.
```

docs/tutorials/performance_tuning/known_issues.md

Lines changed: 36 additions & 30 deletions

```diff
@@ -14,10 +14,8 @@ Known Issues
   Intel® Optimization for Horovod\* needs to use utilities provided by Intel® Extension for PyTorch\*. An improper import order will cause Intel® Extension for PyTorch\* to be unloaded before Intel® Optimization for Horovod\* at the end of the execution and trigger this error. The recommended usage is to `import intel_extension_for_pytorch` before `import horovod.torch as hvd`.

 - RuntimeError: Number of dpcpp devices should be greater than zero!
-
-  - Scenario 1: Running some AI models (e.g. 3D-Unet inference) on Ubuntu 22.04 may trigger this runtime error, as oneAPI Base Toolkit 2023.1 fails to return an available GPU device on Ubuntu 22.04 in such a scenario. The workaround solution is to update the model script to make sure `import torch` and `import intel_extension_for_pytorch` happen before importing other libraries.
-
-  - Scenario 2: If you use Intel® Extension for PyTorch\* in a conda environment, this error might occur. Conda also ships with a libstdc++.so dynamic library file. It may conflict with the one shipped in the OS. Exporting the libstdc++.so file path in the OS to an environment variable `LD_PRELOAD` could work around this issue.
+
+  If you use Intel® Extension for PyTorch\* in a conda environment, you might encounter this error. Conda also ships with a libstdc++.so dynamic library file that may conflict with the one shipped in the OS. Exporting the libstdc++.so file path in the OS to an environment variable `LD_PRELOAD` could work around this issue.
```
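The `LD_PRELOAD` workaround for the conda libstdc++ conflict described above can be sketched as follows. The search paths (`/usr/lib`, `/usr/lib64`, `/lib`) are assumptions; adjust them to wherever your distribution installs `libstdc++.so.6`:

```shell
# Sketch: preload the OS copy of libstdc++ so it takes precedence over the
# copy bundled inside the active conda environment. The search paths below
# are assumptions; adjust them for your distribution.
SYSTEM_LIBSTDCXX="$(find /usr/lib /usr/lib64 /lib -name 'libstdc++.so.6' 2>/dev/null | head -n 1)"
if [ -n "${SYSTEM_LIBSTDCXX}" ]; then
    export LD_PRELOAD="${SYSTEM_LIBSTDCXX}${LD_PRELOAD:+:${LD_PRELOAD}}"
    echo "Preloading ${SYSTEM_LIBSTDCXX}"
else
    echo "No system libstdc++.so.6 found under the assumed paths"
fi
```

Run this in the shell session (or activation script) from which you launch Python, since `LD_PRELOAD` must be set before the process starts.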
```diff
 - symbol undefined caused by `_GLIBCXX_USE_CXX11_ABI`

@@ -27,6 +25,22 @@ Known Issues
   DPC++ does not support `_GLIBCXX_USE_CXX11_ABI=0`; Intel® Extension for PyTorch\* is always compiled with `_GLIBCXX_USE_CXX11_ABI=1`. This symbol undefined issue appears when PyTorch\* is compiled with `_GLIBCXX_USE_CXX11_ABI=0`. Pass `export GLIBCXX_USE_CXX11_ABI=1` and compile PyTorch\* with a compiler which supports `_GLIBCXX_USE_CXX11_ABI=1`. We recommend using the prebuilt wheels from the [download server](https://developer.intel.com/ipex-whl-stable-xpu) to avoid this issue.
```
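A quick way to check which ABI the installed PyTorch\* wheel was built with is `torch.compiled_with_cxx11_abi()`. This sketch assumes a `python3` (or `python`) interpreter on `PATH` and degrades gracefully when PyTorch is not importable:

```shell
# Diagnostic sketch: report whether the installed PyTorch was built with
# _GLIBCXX_USE_CXX11_ABI=1 (prints True/False), or note that it is missing.
PY="$(command -v python3 || command -v python)"
ABI="$("${PY}" -c 'import torch; print(torch.compiled_with_cxx11_abi())' 2>/dev/null \
    || echo 'unknown (torch not importable)')"
echo "CXX11 ABI: ${ABI}"
```

If this prints `False`, the symbol undefined error above is expected; rebuild PyTorch\* with the CXX11 ABI or switch to the prebuilt wheels.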
```diff
+- Bad termination after AI model execution finishes when using Intel MPI
+
+  This is a random issue that occurs when the AI model (e.g. RN50 training) execution finishes in an Intel MPI environment. It is not user-friendly, as the model execution ends ungracefully. The workaround solution is to add `dist.destroy_process_group()` during the cleanup stage in the model script, as described in [Getting Started with Distributed Data Parallel](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html).
+
+- `-997 runtime error` when running some AI models on Intel® Arc™ A-Series GPUs
+
+  Some of the `-997 runtime error` cases are actually out-of-memory errors. As Intel® Arc™ A-Series GPUs have less device memory than Intel® Data Center GPU Flex Series 170 and Intel® Data Center GPU Max Series, running some AI models on them may trigger out-of-memory errors, which are then most likely reported as a `-997 runtime error`. This is expected. Memory usage optimization is a work in progress to allow Intel® Arc™ A-Series GPUs to support more AI models.
+
+- Building from source for Intel® Arc™ A-Series GPUs fails on WSL2 without any error thrown
+
+  Your system probably does not have enough RAM, so the Linux kernel's Out-of-memory killer was invoked. You can verify this by running `dmesg` on bash (WSL2 terminal). If the OOM killer had indeed killed the build process, you can try increasing the swap size of WSL2, and/or decreasing the number of parallel build jobs with the environment variable `MAX_JOBS` (by default it equals the number of logical CPU cores, so setting `MAX_JOBS` to 1 is a very conservative approach that would slow things down a lot).
+
+- Some workloads terminate with an error `CL_DEVICE_NOT_FOUND` after some time on WSL2
+
+  This issue is due to the [TDR feature](https://learn.microsoft.com/en-us/windows-hardware/drivers/display/tdr-registry-keys#tdrdelay) in Windows. You can try increasing TDRDelay in your Windows Registry to a large value, such as 20 (it is 2 seconds by default), and reboot.
```
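The `MAX_JOBS` mitigation for the WSL2 build failure can be sketched as below. Halving the core count is an arbitrary assumption; the note above names `MAX_JOBS=1` as the most conservative setting:

```shell
# Sketch: cap parallel compile jobs before building from source, trading
# build speed for a smaller peak-RAM footprint (helps avoid the OOM killer).
CORES="$(nproc 2>/dev/null || echo 1)"
MAX_JOBS=$((CORES / 2))
[ "${MAX_JOBS}" -ge 1 ] || MAX_JOBS=1
export MAX_JOBS
echo "Building with MAX_JOBS=${MAX_JOBS}"
```

Export `MAX_JOBS` in the same shell that runs the source build so the build system picks it up.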
```diff
 ### Dependency Libraries

 - Can't find oneMKL library when building Intel® Extension for PyTorch\* without oneMKL

@@ -69,40 +83,32 @@ Known Issues
   If you continue seeing similar issues for other shared object files, add the corresponding files under `${MKL_DPCPP_ROOT}/lib/intel64/` by `LD_PRELOAD`. Note that the suffix of the libraries may change (e.g. from .1 to .2) if more than one oneMKL library is installed on the system.
```
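The preloading step above could be scripted as follows. Treating every `libmkl_*.so*` under `${MKL_DPCPP_ROOT}/lib/intel64/` as a preload candidate is an assumption for illustration; in practice only the libraries named in the actual error messages need to be listed:

```shell
# Sketch: preload oneMKL shared objects so renamed suffixes (.1 vs .2) are
# still resolved. Assumes MKL_DPCPP_ROOT was set by oneAPI's setvars.sh.
if [ -d "${MKL_DPCPP_ROOT}/lib/intel64" ]; then
    for lib in "${MKL_DPCPP_ROOT}"/lib/intel64/libmkl_*.so*; do
        [ -e "${lib}" ] && export LD_PRELOAD="${lib}${LD_PRELOAD:+:${LD_PRELOAD}}"
    done
    STATUS="preloaded oneMKL libraries from ${MKL_DPCPP_ROOT}/lib/intel64"
else
    STATUS="skipped: ${MKL_DPCPP_ROOT:-<unset>}/lib/intel64 not found (run oneAPI setvars.sh first)"
fi
echo "${STATUS}"
```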
````diff
-- OpenMP library could not be found
-
-  Building Intel® Extension for PyTorch\* on SLES15 SP3 using default GCC 7.5 and on CentOS 8 using default GCC 8.5 may trigger this build error.
-
-  ```bash
-  CMake Error at third_party/ideep/mkl-dnn/third_party/oneDNN/cmake/OpenMP.cmake:118 (message):
-    OpenMP library could not be found. Proceeding might lead to highly
-    sub-optimal performance.
-  Call Stack (most recent call first):
-    third_party/ideep/mkl-dnn/third_party/oneDNN/CMakeLists.txt:117 (include)
-  ```
-
-  The root cause is that GCC 7.5 and 8.5 do not support the `-Wno-error=redundant-move` option. Uplifting to GCC version >= 9 can solve this issue.
-
-### UnitTest
+### Unit Test

 - Unit test failures on Intel® Data Center GPU Flex Series 170

-  The following unit tests fail on Intel® Data Center GPU Flex Series 170.
-  - test_linalg.py::TestTorchMethod::test_tensorinv_empty
-  - test_distributions.py::TestDistributions::test_dirichlet_mean_var
-  - test_adaptive_avg_pool2d.py::TestNNMethod::test_adaptive_avg_pool2d
+  The following unit tests fail on Intel® Data Center GPU Flex Series 170, but the same test cases pass on Intel® Data Center GPU Max Series. The root cause of the failures is under investigation.
   - test_multilabel_margin_loss.py::TestNNMethod::test_multiabel_margin_loss
-
-  The same test cases pass on Intel® Data Center GPU Max Series. The root cause of the failures is under investigation.
+  - test_weight_norm.py::TestNNMethod::test_weight_norm_differnt_type

 - Unit test failures on Intel® Data Center GPU Max Series

-  The following unit tests randomly fail on Intel® Data Center GPU Flex Max Series.
+  The following unit tests randomly fail on Intel® Data Center GPU Max Series if running with other test cases together using `pytest -v`. These cases pass if run individually in the same environment. The root cause of the failures is under investigation.
   - test_nn.py::TestNNDeviceTypeXPU::test_activations_bfloat16_xpu
-  - test_lstm.py::TestNNMethod::test_lstm_rnnt_onednn
   - test_eigh.py::TestTorchMethod::test_linalg_eigh
-
-  The test cases rarely fail if running with other test cases together using `pytest -v`. These cases pass if run individually on the same environment. The root cause of the failures is under investigation.
+  - test_baddbmm.py::TestTorchMethod::test_baddbmm_scale
+
+  The following unit tests fail on Intel® Data Center GPU Max Series. The root cause of the failures is under investigation with oneDNN, as the operators under test use oneDNN primitives.
+  - test_lstm.py::TestNNMethod::test_lstm_rnnt_onednn
+  - test_conv_transposed.py::TestTorchMethod::test_deconv3d_bias
+
+- Unit test failures on CPU (ICX, CPX, SPR)
+
+  The following unit test fails on CPU when using the latest transformers version (4.31.0). The workaround is to use the older version via `pip install transformers==4.30.0` instead.
+
+  - test_tpp_ops.py::TPPOPsTester::test_tpp_bert_embeddings

 ## Known Issues Specific to CPU
````
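The transformers pin mentioned in the CPU unit test item above can be checked before running the suite. This sketch (assuming a `python3`/`python` interpreter on `PATH`) only reports the installed version and suggests the pin; it does not change the environment:

```shell
# Sketch: detect the transformers release (4.31.0) known to break
# test_tpp_ops.py::TPPOPsTester::test_tpp_bert_embeddings on CPU.
PY="$(command -v python3 || command -v python)"
TFV="$("${PY}" -c 'import transformers; print(transformers.__version__)' 2>/dev/null \
    || echo 'not installed')"
echo "transformers: ${TFV}"
if [ "${TFV}" = "4.31.0" ]; then
    echo "Known issue: downgrade with 'pip install transformers==4.30.0'"
fi
```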

docs/tutorials/releases.md

Lines changed: 25 additions & 1 deletion

```diff
@@ -1,6 +1,31 @@
 Releases
 =============

+## 2.0.110+xpu
+
+Intel® Extension for PyTorch\* v2.0.110+xpu is the new Intel® Extension for PyTorch\* release that supports both CPU platforms and GPU platforms (Intel® Data Center GPU Flex Series and Intel® Data Center GPU Max Series) based on PyTorch\* 2.0.1. It extends PyTorch\* 2.0.1 with up-to-date features and optimizations on `xpu` for an extra performance boost on Intel hardware. Optimizations take advantage of AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs as well as Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, through the PyTorch\* `xpu` device, Intel® Extension for PyTorch\* provides easy GPU acceleration for Intel discrete GPUs with PyTorch\*.
+
+### Highlights
+
+This release introduces specific XPU solution optimizations on Intel discrete GPUs, which include Intel® Data Center GPU Flex Series and Intel® Data Center GPU Max Series. Optimized operators and kernels are implemented and registered through the PyTorch\* dispatching mechanism for the `xpu` device. These operators and kernels are accelerated by the corresponding native vectorization and matrix calculation features of Intel GPU hardware. In graph mode, additional operator fusions are supported to reduce operator/kernel invocation overheads and thus increase performance.
+
+This release provides the following features:
+- oneDNN 3.3 API integration and adoption
+- Libtorch support
+- ARC support on Windows, WSL2 and Ubuntu (Experimental)
+- OOB models improvement
+  - More fusion patterns enabled for optimizing OOB models
+- CPU support is merged in this release:
+  - CPU features and optimizations are equivalent to what has been released in the Intel® Extension for PyTorch\* v2.0.100+cpu release that was made publicly available in May 2023. Customers who would like to evaluate workloads on both GPU and CPU can use this package. For customers focusing on CPU only, we still recommend the Intel® Extension for PyTorch\* v2.0.100+cpu release for a smaller footprint, fewer dependencies and broader OS support.
+
+This release adds the following fusion patterns in PyTorch\* JIT mode for Intel GPU:
+- `add` + `softmax`
+- `add` + `view` + `softmax`
+
+### Known Issues
+
+Please refer to the [Known Issues webpage](./performance_tuning/known_issues.md).
+
 ## 1.13.120+xpu

 Intel® Extension for PyTorch\* v1.13.120+xpu is the updated Intel® Extension for PyTorch\* release that supports both CPU platforms and GPU platforms (Intel® Data Center GPU Flex Series and Intel® Data Center GPU Max Series) based on PyTorch\* 1.13.1. It extends PyTorch\* 1.13.1 with up-to-date features and optimizations on `xpu` for an extra performance boost on Intel hardware. Optimizations take advantage of AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs as well as Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, through the PyTorch\* `xpu` device, Intel® Extension for PyTorch\* provides easy GPU acceleration for Intel discrete GPUs with PyTorch\*.

@@ -29,7 +54,6 @@ This release adds the following fusion patterns in PyTorch\* JIT mode for Intel
 Please refer to [Known Issues webpage](./performance_tuning/known_issues.md).

-
 ## 1.13.10+xpu

 Intel® Extension for PyTorch\* v1.13.10+xpu is the first Intel® Extension for PyTorch\* release that supports both CPU platforms and GPU platforms (Intel® Data Center GPU Flex Series and Intel® Data Center GPU Max Series) based on PyTorch\* 1.13. It extends PyTorch\* 1.13 with up-to-date features and optimizations on `xpu` for an extra performance boost on Intel hardware. Optimizations take advantage of AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs as well as Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, through the PyTorch\* `xpu` device, Intel® Extension for PyTorch\* provides easy GPU acceleration for Intel discrete GPUs with PyTorch\*.
```

0 commit comments
