Commit a608d00

committed: add conclusion
1 parent 47ae25d commit a608d00

1 file changed: +15 -1 lines changed

recipes_source/amx.rst

Lines changed: 15 additions & 1 deletion
@@ -113,7 +113,21 @@ For example, get oneDNN verbose:
 onednn_verbose,exec,cpu,matmul,brg:avx512_core_amx_int8,undef,src_s8::blocked:ab:f0 wei_s8:p:blocked:BA16a64b4a:f0 dst_s8::blocked:ab:f0,attr-scratchpad:user ,,1x30522:30522x768:1x768,7.66382
 ...
 
-If we get the verbose of ``avx512_core_amx_bf16`` for BFloat16 or ``avx512_core_amx_int8`` for quantization with INT8, it indicates that AMX is activated.
+If you get the verbose of ``avx512_core_amx_bf16`` for BFloat16 or ``avx512_core_amx_int8`` for quantization with INT8, it indicates that AMX is activated.
+
+
+Conclusion
+----------
+
+
+In this tutorial, we briefly introduced AMX, how to utilize AMX in PyTorch to accelerate workloads, and how to confirm that AMX is being utilized.
+
+As PyTorch and oneDNN continue to improve, the way AMX is utilized may change accordingly.
+
+As always, if you run into any problems or have any questions, you can use the
+`forum <https://discuss.pytorch.org/>`_ or `GitHub issues
+<https://github.com/pytorch/pytorch/issues>`_ to get in touch.
+
 
 .. _Accelerate AI Workloads with Intel® AMX: https://www.intel.com/content/www/us/en/products/docs/accelerator-engines/advanced-matrix-extensions/ai-solution-brief.html
 
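The sample verbose output above is a comma-separated oneDNN record, with the implementation name (e.g. ``brg:avx512_core_amx_int8``) in one of the fields. As a rough illustration of the check the tutorial describes, a small helper — hypothetical, not part of PyTorch or oneDNN — can scan a captured verbose line for an AMX implementation tag:

```python
def uses_amx(verbose_line: str) -> bool:
    """Return True if a oneDNN verbose record reports an AMX implementation.

    oneDNN verbose records are comma-separated; the implementation name
    (e.g. "brg:avx512_core_amx_int8") appears in one of the fields.
    This helper is an illustrative sketch, not a PyTorch or oneDNN API.
    """
    return any("amx" in field for field in verbose_line.split(","))


# Example record taken from the tutorial's sample verbose output:
line = ("onednn_verbose,exec,cpu,matmul,brg:avx512_core_amx_int8,undef,"
        "src_s8::blocked:ab:f0 wei_s8:p:blocked:BA16a64b4a:f0 "
        "dst_s8::blocked:ab:f0,attr-scratchpad:user ,,"
        "1x30522:30522x768:1x768,7.66382")
print(uses_amx(line))  # → True
```

A record whose implementation field names only a plain AVX-512 kernel (no ``amx`` substring) would return ``False``, matching the tutorial's criterion that AMX is active only when an ``avx512_core_amx_*`` implementation appears.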