
Commit 369774f

jingxu10 and tye1 authored
update docs for 2.0.110+xpu release (#2967)
* change installation guide to rst
* update compile_bundle.sh and ipex version number gen
* change installation guide to index
* doc: review edits to examples documentation (#3016). Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
* Update examples.md typo (#3017)
* Migrate cheat sheet from IDZ to github (#3024): migrate cheat sheet; Update index.rst
* add footer for cache and privacy policy
* update cheat sheet
* Add more supported optimizers
* add scripts for access metrics collection
* DDP doc refinement
* add installation guide files back
* Update known_issues.md
* Update getting_started.md
* Emphasize IPEX import order
* Correct Conda command

Co-authored-by: Ye Ting <ting.ye@intel.com>
1 parent b102fc2 commit 369774f

43 files changed (+1040, -352 lines)

docs/_templates/footer.html

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
+{% extends '!footer.html' %} {% block extrafooter %} {{ super() }}
+<p></p><div><a href='https://www.intel.com/content/www/us/en/privacy/intel-cookie-notice.html' data-cookie-notice='true'>Cookies</a> <a href='https://www.intel.com/content/www/us/en/privacy/intel-privacy-notice.html'>| Privacy</a></div>
+{% endblock %}

docs/_templates/layout.html

Lines changed: 16 additions & 0 deletions
@@ -0,0 +1,16 @@
+{%- extends "!layout.html" %}
+{% block scripts %}
+<script type="text/javascript">
+// Configure TMS settings
+window.wapProfile = 'profile-microsite'; // This is mapped by WAP authorize value
+window.wapLocalCode = 'us-en'; // Dynamically set per localized site, see mapping table for values
+window.wapSection = 'intel-extension-for-pytorch'; // WAP team will give you a unique section for your site
+window.wapEnv = 'prod'; // environment to be used in Adobe Tags.
+// Load TMS
+(() => {
+let url = 'https://www.intel.com/content/dam/www/global/wap/main/wap-microsite.js';
+let po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = url;
+let s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s);
+}) ();
+</script>
+{% endblock %}

docs/index.rst

Lines changed: 1 addition & 0 deletions
@@ -31,6 +31,7 @@ Intel® Extension for PyTorch* has been released as an open–source project at
   :maxdepth: 1

   tutorials/getting_started
+   tutorials/cheat_sheet
   tutorials/features
   tutorials/releases
   tutorials/installation

docs/tutorials/blogs_publications.md

Lines changed: 2 additions & 0 deletions
@@ -1,6 +1,8 @@
Blogs & Publications
====================

+* [Accelerate Llama 2 with Intel AI Hardware and Software Optimizations, Jul 2023](https://www.intel.com/content/www/us/en/developer/articles/news/llama2.html)
+* [Accelerate PyTorch\* Training and Inference Performance using Intel® AMX, Jul 2023](https://www.intel.com/content/www/us/en/developer/articles/technical/accelerate-pytorch-training-inference-on-amx.html)
* [Intel® Deep Learning Boost (Intel® DL Boost) - Improve Inference Performance of Hugging Face BERT Base Model in Google Cloud Platform (GCP) Technology Guide, Apr 2023](https://networkbuilders.intel.com/solutionslibrary/intel-deep-learning-boost-intel-dl-boost-improve-inference-performance-of-hugging-face-bert-base-model-in-google-cloud-platform-gcp-technology-guide)
* [Get Started with Intel® Extension for PyTorch\* on GPU | Intel Software, Mar 2023](https://www.youtube.com/watch?v=Id-rE2Q7xZ0&t=1s)
* [Accelerate PyTorch\* INT8 Inference with New “X86” Quantization Backend on X86 CPUs, Mar 2023](https://www.intel.com/content/www/us/en/developer/articles/technical/accelerate-pytorch-int8-inf-with-new-x86-backend.html)

docs/tutorials/cheat_sheet.md

Lines changed: 22 additions & 0 deletions
@@ -0,0 +1,22 @@
+Cheat Sheet
+===========
+
+Get started with Intel® Extension for PyTorch\* using the following commands:
+
+|Description | Command |
+| -------- | ------- |
+| Basic CPU Installation | `python -m pip install intel_extension_for_pytorch` |
+| Basic GPU Installation | `pip install torch==<version> -f https://developer.intel.com/ipex-whl-stable-xpu`<br>`pip install intel_extension_for_pytorch==<version> -f https://developer.intel.com/ipex-whl-stable-xpu`|
+| Import Intel® Extension for PyTorch\* | `import intel_extension_for_pytorch as ipex`|
+| Capture a Verbose Log (Command Prompt) | `export ONEDNN_VERBOSE=1` |
+| Optimization During Training | `model = ...`<br>`optimizer = ...`<br>`model.train()`<br>`model, optimizer = ipex.optimize(model, optimizer=optimizer)`|
+| Optimization During Inference | `model = ...`<br>`model.eval()`<br>`model = ipex.optimize(model)` |
+| Optimization Using the Low-Precision Data Type bfloat16 <br>During Training (Default FP32) | `model = ...`<br>`optimizer = ...`<br>`model.train()`<br/><br/>`model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)`<br/><br/>`with torch.no_grad():`<br>`    with torch.cpu.amp.autocast():`<br>`        model(data)` |
+| Optimization Using the Low-Precision Data Type bfloat16 <br>During Inference (Default FP32) | `model = ...`<br>`model.eval()`<br/><br/>`model = ipex.optimize(model, dtype=torch.bfloat16)`<br/><br/>`with torch.cpu.amp.autocast():`<br>`    model(data)` |
+| [Experimental] Fast BERT Optimization | `from transformers import BertModel`<br>`model = BertModel.from_pretrained("bert-base-uncased")`<br>`model.eval()`<br/><br/>`model = ipex.fast_bert(model, dtype=torch.bfloat16)`|
+| Run CPU Launch Script (Command Prompt): <br>Automate Configuration Settings for Performance | `ipexrun [knobs] <your_pytorch_script> [args]`|
+| [Experimental] Run HyperTune to perform hyperparameter/execution configuration search | `python -m intel_extension_for_pytorch.cpu.hypertune --conf-file <your_conf_file> <your_python_script> [args]`|
+| [Experimental] Enable Graph capture | `model = …`<br>`model.eval()`<br>`model = ipex.optimize(model, graph_mode=True)`|
+| Post-Training INT8 Quantization (Static) | `model = …`<br>`model.eval()`<br>`data = …`<br/><br/>`qconfig = ipex.quantization.default_static_qconfig`<br/><br/>`prepared_model = ipex.quantization.prepare(model, qconfig, example_inputs=data, inplace=False)`<br/><br/>`for d in calibration_data_loader():`<br>`    prepared_model(d)`<br/><br/>`converted_model = ipex.quantization.convert(prepared_model)`|
+| Post-Training INT8 Quantization (Dynamic) | `model = …`<br>`model.eval()`<br>`data = …`<br/><br/>`qconfig = ipex.quantization.default_dynamic_qconfig`<br/><br/>`prepared_model = ipex.quantization.prepare(model, qconfig, example_inputs=data)`<br/><br/>`converted_model = ipex.quantization.convert(prepared_model)` |
+| [Experimental] Post-Training INT8 Quantization (Tuning Recipe) | `model = …`<br>`model.eval()`<br>`data = …`<br/><br/>`qconfig = ipex.quantization.default_static_qconfig`<br/><br/>`prepared_model = ipex.quantization.prepare(model, qconfig, example_inputs=data, inplace=False)`<br/><br/>`tuned_model = ipex.quantization.autotune(prepared_model, calibration_data_loader, eval_function, sampling_sizes=[100],`<br>`    accuracy_criterion={'relative': .01}, tuning_time=0)`<br/><br/>`convert_model = ipex.quantization.convert(tuned_model)`|
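
For orientation, the rows above compose into a single script as follows. This is a minimal illustrative sketch (not part of the cheat sheet file itself), assuming a torchvision ResNet-50 model and random input data for CPU BFloat16 inference:

```python
import torch
import torchvision.models as models
# Import Intel Extension for PyTorch* after importing torch
import intel_extension_for_pytorch as ipex

# Placeholder model and data; any eager-mode PyTorch model works here
model = models.resnet50(weights=None)
model.eval()
data = torch.rand(1, 3, 224, 224)

# Apply the extension's optimizations with the bfloat16 data type
model = ipex.optimize(model, dtype=torch.bfloat16)

# Run inference under Auto Mixed Precision on CPU
with torch.no_grad():
    with torch.cpu.amp.autocast():
        output = model(data)
```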

docs/tutorials/examples.md

Lines changed: 146 additions & 46 deletions
@@ -1,18 +1,26 @@
Examples
========

-**Note:** For examples on CPU, please check [here](../../../cpu/latest/tutorials/examples.html).
+These examples will help you get started using Intel® Extension for PyTorch\*
+with Intel GPUs.

-## Training
+**Note:** For examples on Intel CPUs, check these [CPU examples](../../../cpu/latest/tutorials/examples.html).

-### Single-instance Training
+**Note:** You need to install torchvision and transformers to run these examples.

-#### Code Changes Highlight
+## Python

-There are only a few lines of code change required to use Intel® Extension for PyTorch\* on training, as shown:
-1. `ipex.optimize` function applies optimizations against the model object, as well as an optimizer object.
-2. Use Auto Mixed Precision (AMP) with BFloat16 data type.
-3. Convert input tensors, loss criterion and model to XPU.
+### Training
+
+#### Single-Instance Training
+
+##### Code Changes Highlight
+
+You'll only need to change a few lines of code to use Intel® Extension for PyTorch\* for training, as shown below:
+
+1. Use the `ipex.optimize` function, which applies optimizations against the model object, as well as an optimizer object.
+2. Use Auto Mixed Precision (AMP) with BFloat16 data type.
+3. Convert input tensors, loss criterion and model to XPU.

The complete examples for Float32 and BFloat16 training on single-instance are illustrated in the sections.
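
To make the three changes above concrete, here is a minimal, illustrative BFloat16 training sketch for an XPU device. It is not part of this commit; `Net()` and `train_loader` are hypothetical placeholders for your own model and data loader:

```python
import torch
import intel_extension_for_pytorch as ipex

model = Net()  # hypothetical placeholder model
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
model.train()

# 3. Move the model and loss criterion to the XPU device
model = model.to("xpu")
criterion = criterion.to("xpu")

# 1. Apply ipex.optimize against both the model and the optimizer objects
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

for data, target in train_loader:  # hypothetical placeholder data loader
    # 3. Move input tensors to the XPU device as well
    data = data.to("xpu")
    target = target.to("xpu")
    optimizer.zero_grad()
    # 2. Run the forward pass under Auto Mixed Precision with BFloat16
    with torch.xpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
        output = model(data)
        loss = criterion(output, target)
    loss.backward()
    optimizer.step()
```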

@@ -39,131 +47,223 @@ with torch.xpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
...
```

-#### Complete - Float32 Example
+##### Complete - Float32 Example

[//]: # (marker_train_single_fp32_complete)
[//]: # (marker_train_single_fp32_complete)

-#### Complete - BFloat16 Example
+##### Complete - BFloat16 Example

[//]: # (marker_train_single_bf16_complete)
[//]: # (marker_train_single_bf16_complete)

-## Inference
+### Inference

-The `optimize` function of Intel® Extension for PyTorch\* applies optimizations to the model, bringing additional performance boosts. For both computer vision workloads and NLP workloads, we recommend applying the `optimize` function against the model object.
+Get additional performance boosts for your computer vision and NLP workloads by
+applying the Intel® Extension for PyTorch\* `optimize` function against your
+model object.

-### Float32
+#### Float32

-#### Imperative Mode
+##### Imperative Mode

-##### Resnet50
+###### Resnet50

[//]: # (marker_inf_rn50_imp_fp32)
[//]: # (marker_inf_rn50_imp_fp32)

-##### BERT
+###### BERT

[//]: # (marker_inf_bert_imp_fp32)
[//]: # (marker_inf_bert_imp_fp32)

-#### TorchScript Mode
+##### TorchScript Mode

-We recommend you take advantage of Intel® Extension for PyTorch\* with [TorchScript](https://pytorch.org/docs/stable/jit.html) for further optimizations.
+We recommend using Intel® Extension for PyTorch\* with [TorchScript](https://pytorch.org/docs/stable/jit.html) for further optimizations.
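
As an illustrative aside (not part of the commit; the repository's actual examples are pulled in by the markers below), the TorchScript flow referred to above typically looks like the following minimal sketch, assuming a torchvision ResNet-50 and random input data:

```python
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex

model = models.resnet50(weights=None)
model.eval()
model = model.to("xpu")
data = torch.rand(1, 3, 224, 224).to("xpu")

model = ipex.optimize(model, dtype=torch.float32)

with torch.no_grad():
    # Trace and freeze the model so TorchScript graph optimizations can apply
    model = torch.jit.trace(model, data)
    model = torch.jit.freeze(model)
    output = model(data)
```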

-##### Resnet50
+###### Resnet50

[//]: # (marker_inf_rn50_ts_fp32)
[//]: # (marker_inf_rn50_ts_fp32)

-##### BERT
+###### BERT

[//]: # (marker_inf_bert_ts_fp32)
[//]: # (marker_inf_bert_ts_fp32)

-### BFloat16
+#### BFloat16

-Similar to running with Float32, the `optimize` function also works for BFloat16 data type. The only difference is setting `dtype` parameter to `torch.bfloat16`.
+The `optimize` function works for both the Float32 and BFloat16 data types. For BFloat16, set the `dtype` parameter to `torch.bfloat16`.
We recommend using Auto Mixed Precision (AMP) with BFloat16 data type.
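
For orientation (illustrative only, not part of the commit), a minimal BFloat16 imperative-mode inference sketch on an XPU device, assuming a torchvision ResNet-50 and random input data:

```python
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex

model = models.resnet50(weights=None)
model.eval()
model = model.to("xpu")
data = torch.rand(1, 3, 224, 224).to("xpu")

# dtype=torch.bfloat16 switches the optimization from the default FP32 to BFloat16
model = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad():
    # Run the forward pass under Auto Mixed Precision on the XPU device
    with torch.xpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
        output = model(data)
```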

-#### Imperative Mode
+##### Imperative Mode

-##### Resnet50
+###### Resnet50

[//]: # (marker_inf_rn50_imp_bf16)
[//]: # (marker_inf_rn50_imp_bf16)

-##### BERT
+###### BERT

[//]: # (marker_inf_bert_imp_bf16)
[//]: # (marker_inf_bert_imp_bf16)

-#### TorchScript Mode
+##### TorchScript Mode

-We recommend you take advantage of Intel® Extension for PyTorch\* with [TorchScript](https://pytorch.org/docs/stable/jit.html) for further optimizations.
+We recommend using Intel® Extension for PyTorch\* with [TorchScript](https://pytorch.org/docs/stable/jit.html) for further optimizations.

-##### Resnet50
+###### Resnet50

[//]: # (marker_inf_rn50_ts_bf16)
[//]: # (marker_inf_rn50_ts_bf16)

-##### BERT
+###### BERT

[//]: # (marker_inf_bert_ts_bf16)
[//]: # (marker_inf_bert_ts_bf16)

-### Float16
+#### Float16

-Similar to running with Float32, the `optimize` function also works for Float16 data type. The only difference is setting `dtype` parameter to `torch.float16`.
+The `optimize` function works for both the Float32 and Float16 data types. For Float16, set the `dtype` parameter to `torch.float16`.
We recommend using Auto Mixed Precision (AMP) with Float16 data type.

-#### Imperative Mode
+##### Imperative Mode

-##### Resnet50
+###### Resnet50

[//]: # (marker_inf_rn50_imp_fp16)
[//]: # (marker_inf_rn50_imp_fp16)

-##### BERT
+###### BERT

[//]: # (marker_inf_bert_imp_fp16)
[//]: # (marker_inf_bert_imp_fp16)

-#### TorchScript Mode
+##### TorchScript Mode

-We recommend you take advantage of Intel® Extension for PyTorch\* with [TorchScript](https://pytorch.org/docs/stable/jit.html) for further optimizations.
+We recommend using Intel® Extension for PyTorch\* with [TorchScript](https://pytorch.org/docs/stable/jit.html) for further optimizations.

-##### Resnet50
+###### Resnet50

[//]: # (marker_inf_rn50_ts_fp16)
[//]: # (marker_inf_rn50_ts_fp16)

-##### BERT
+###### BERT

[//]: # (marker_inf_bert_ts_fp16)
[//]: # (marker_inf_bert_ts_fp16)

-### INT8
+#### INT8

-We recommend to use TorchScript for INT8 model due to it has wider support for models. Moreover, TorchScript mode would auto enable our optimizations. For TorchScript INT8 model, inserting observer and model quantization is achieved through `prepare_jit` and `convert_jit` separately. Calibration process is required for collecting statistics from real data. After conversion, optimizations like operator fusion would be auto enabled.
+We recommend using TorchScript for INT8 models because it has wider model support. TorchScript mode also auto-enables our optimizations. For a TorchScript INT8 model, observer insertion and model quantization are performed through `prepare_jit` and `convert_jit`, respectively. A calibration process is required to collect statistics from real data. After conversion, optimizations such as operator fusion are auto-enabled.

[//]: # (marker_int8_static)
[//]: # (marker_int8_static)

-### torch.xpu.optimize
+#### torch.xpu.optimize

-`torch.xpu.optimize` is an alternative of `ipex.optimize` in Intel® Extension for PyTorch\*, to provide identical usage for XPU device only. The motivation of adding this alias is to unify the coding style in user scripts base on torch.xpu modular. Refer to below example for usage.
-
-#### ResNet50 FP32 imperative inference
+The `torch.xpu.optimize` function is an alternative to `ipex.optimize` in Intel® Extension for PyTorch\*, and provides identical usage for XPU devices only. The motivation for adding this alias is to unify the coding style in user scripts based on the `torch.xpu` module. Refer to the example below for usage.

[//]: # (marker_inf_rn50_imp_fp32_alt)
[//]: # (marker_inf_rn50_imp_fp32_alt)
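
For reference (illustrative only; the repository's actual ResNet50 FP32 imperative example is pulled in by the markers above), a minimal sketch of the aliased call, assuming a torchvision ResNet-50 and random input data:

```python
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex  # importing ipex also registers the torch.xpu extensions

model = models.resnet50(weights=None)
model.eval()
model = model.to("xpu")
data = torch.rand(1, 3, 224, 224).to("xpu")

# torch.xpu.optimize is an XPU-only alias of ipex.optimize
model = torch.xpu.optimize(model, dtype=torch.float32)

with torch.no_grad():
    output = model(data)
```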

## C++

-Intel® Extension for PyTorch\* provides its C++ dynamic library to allow users to implement custom DPC++ kernels to run on the XPU device. Refer to the [DPC++ extension](./features/DPC++_Extension.md) for the details.
+To work with libtorch, the PyTorch C++ library, Intel® Extension for PyTorch\* provides its own C++ dynamic library. The C++ library only handles inference workloads, such as service deployment. For regular development, use the Python interface. Unlike using libtorch, no specific code changes are required. Compilation follows the recommended methodology with CMake. Detailed instructions can be found in the [PyTorch tutorial](https://pytorch.org/tutorials/advanced/cpp_export.html#depending-on-libtorch-and-building-the-application).
+
+During compilation, Intel optimizations will be activated automatically once the C++ dynamic library of Intel® Extension for PyTorch\* is linked.
+
+The example code below works for all data types.
+
+### Basic Usage
+
+**example-app.cpp**
+
+[//]: # (marker_cppsdk_sample_app)
+[//]: # (marker_cppsdk_sample_app)
+
+**CMakeLists.txt**
+
+[//]: # (marker_cppsdk_cmake_app)
+[//]: # (marker_cppsdk_cmake_app)
+
+**Command for compilation**
+
+```bash
+$ cd examples/gpu/inference/cpp/example-app
+$ mkdir build
+$ cd build
+$ CC=icx CXX=icpx cmake -DCMAKE_PREFIX_PATH=<LIBPYTORCH_PATH> ..
+$ make
+```
+
+If *Found IPEX* is shown among the dynamic library paths, the extension was linked into the binary. You can verify this with the Linux *ldd* command.
+
+```bash
+$ CC=icx CXX=icpx cmake -DCMAKE_PREFIX_PATH=/workspace/libtorch ..
+-- The C compiler identification is IntelLLVM 2023.2.0
+-- The CXX compiler identification is IntelLLVM 2023.2.0
+-- Detecting C compiler ABI info
+-- Detecting C compiler ABI info - done
+-- Check for working C compiler: /workspace/intel/oneapi/compiler/2023.2.0/linux/bin/icx - skipped
+-- Detecting C compile features
+-- Detecting C compile features - done
+-- Detecting CXX compiler ABI info
+-- Detecting CXX compiler ABI info - done
+-- Check for working CXX compiler: /workspace/intel/oneapi/compiler/2023.2.0/linux/bin/icpx - skipped
+-- Detecting CXX compile features
+-- Detecting CXX compile features - done
+-- Looking for pthread.h
+-- Looking for pthread.h - found
+-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
+-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
+-- Found Threads: TRUE
+-- Found Torch: /workspace/libtorch/lib/libtorch.so
+-- Found IPEX: /workspace/libtorch/lib/libintel-ext-pt-cpu.so;/workspace/libtorch/lib/libintel-ext-pt-gpu.so
+-- Configuring done
+-- Generating done
+-- Build files have been written to: examples/gpu/inference/cpp/example-app/build
+
+$ ldd example-app
+...
+libtorch.so => /workspace/libtorch/lib/libtorch.so (0x00007fd5bb927000)
+libc10.so => /workspace/libtorch/lib/libc10.so (0x00007fd5bb895000)
+libtorch_cpu.so => /workspace/libtorch/lib/libtorch_cpu.so (0x00007fd5a44d8000)
+libintel-ext-pt-cpu.so => /workspace/libtorch/lib/libintel-ext-pt-cpu.so (0x00007fd5a1a1b000)
+libintel-ext-pt-gpu.so => /workspace/libtorch/lib/libintel-ext-pt-gpu.so (0x00007fd5862b0000)
+...
+libmkl_intel_lp64.so.2 => /workspace/intel/oneapi/mkl/2023.2.0/lib/intel64/libmkl_intel_lp64.so.2 (0x00007fd584ab0000)
+libmkl_core.so.2 => /workspace/intel/oneapi/mkl/2023.2.0/lib/intel64/libmkl_core.so.2 (0x00007fd5806cc000)
+libmkl_gnu_thread.so.2 => /workspace/intel/oneapi/mkl/2023.2.0/lib/intel64/libmkl_gnu_thread.so.2 (0x00007fd57eb1d000)
+libmkl_sycl.so.3 => /workspace/intel/oneapi/mkl/2023.2.0/lib/intel64/libmkl_sycl.so.3 (0x00007fd55512c000)
+libOpenCL.so.1 => /workspace/intel/oneapi/compiler/2023.2.0/linux/lib/libOpenCL.so.1 (0x00007fd55511d000)
+libsvml.so => /workspace/intel/oneapi/compiler/2023.2.0/linux/compiler/lib/intel64_lin/libsvml.so (0x00007fd553b11000)
+libirng.so => /workspace/intel/oneapi/compiler/2023.2.0/linux/compiler/lib/intel64_lin/libirng.so (0x00007fd553600000)
+libimf.so => /workspace/intel/oneapi/compiler/2023.2.0/linux/compiler/lib/intel64_lin/libimf.so (0x00007fd55321b000)
+libintlc.so.5 => /workspace/intel/oneapi/compiler/2023.2.0/linux/compiler/lib/intel64_lin/libintlc.so.5 (0x00007fd553a9c000)
+libsycl.so.6 => /workspace/intel/oneapi/compiler/2023.2.0/linux/lib/libsycl.so.6 (0x00007fd552f36000)
+...
+```
+
+### Use SYCL code
+
+Using SYCL code in a C++ application is also possible. The example below shows how to invoke SYCL code. You need to explicitly pass `-fsycl` into `CMAKE_CXX_FLAGS`.
+
+**example-usm.cpp**
+
+[//]: # (marker_cppsdk_sample_usm)
+[//]: # (marker_cppsdk_sample_usm)
+
+**CMakeLists.txt**
+
+[//]: # (marker_cppsdk_cmake_usm)
+[//]: # (marker_cppsdk_cmake_usm)
+
+### Customize DPC++ kernels
+
+Intel® Extension for PyTorch\* provides its C++ dynamic library to allow users to implement custom DPC++ kernels to run on the XPU device. Refer to the [DPC++ extension](./features/DPC++_Extension.md) for details.

## Model Zoo

-Use cases that had already been optimized by Intel engineers are available at [Model Zoo for Intel® Architecture](https://github.com/IntelAI/models/tree/v2.11.0). A bunch of PyTorch use cases for benchmarking are also available on the [GitHub page](https://github.com/IntelAI/models/tree/v2.11.0#use-cases). Models verified on Intel dGPUs are marked in `Model Documentation` Column. You can get performance benefits out-of-box by simply running scipts in the Model Zoo.
+Use cases that have already been optimized by Intel engineers are available at [Model Zoo for Intel® Architecture](https://github.com/IntelAI/models/tree/v2.12.0). A number of PyTorch use cases for benchmarking are also available on the [GitHub page](https://github.com/IntelAI/models/tree/v2.12.0#use-cases). Models verified on Intel GPUs are marked in the `Model Documentation` column. You can get performance benefits out-of-box by simply running scripts in the Model Zoo.
