Commit 15520f2

Update prototype_source/pt2e_quant_qat_x86_inductor.rst
Co-authored-by: Svetlana Karslioglu <svekars@meta.com>
1 parent c77e969


prototype_source/pt2e_quant_qat_x86_inductor.rst

Lines changed: 4 additions & 5 deletions
@@ -11,11 +11,10 @@ Prerequisites
 - `TorchInductor and torch.compile concepts in PyTorch <https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html>`_
 
 
-This tutorial demonstrates the process of performing PT2 export quantization-aware training (QAT) on X86 CPU
-with X86InductorQuantizer, and subsequently lowering the quantized model into Inductor.
-For more comprehensive details about PyTorch 2 Export Quantization-Aware Training in general, please refer to the
-dedicated tutorial on `PyTorch 2 Export Quantization-Aware Training <https://pytorch.org/tutorials/prototype/pt2e_quant_qat.html>`_.
-For a deeper understanding of X86InductorQuantizer, please consult the tutorial of
+This tutorial demonstrates the process of performing PT2 export Quantization-Aware Training (QAT) on X86 CPU
+using X86InductorQuantizer and subsequently lowering the quantized model into Inductor.
+For a more in-depth understanding of PT2 Export Quantization-Aware Training, we recommend referring to the dedicated `PyTorch 2 Export Quantization-Aware Training <https://pytorch.org/tutorials/prototype/pt2e_quant_qat.html>`_.
+To gain a deeper insight into X86InductorQuantizer, please see the tutorial of
 `PyTorch 2 Export Post Training Quantization with X86 Backend through Inductor <https://pytorch.org/tutorials/prototype/pt2e_quant_ptq_x86_inductor.html>`_.
 
 The PyTorch 2 Export QAT flow looks like the following—it is similar
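For context on the flow the revised paragraph describes, here is a minimal sketch of PT2 export QAT with X86InductorQuantizer followed by lowering into Inductor: capture the model, configure the quantizer for QAT, train, convert, and compile. The toy model, example inputs, and training loop are placeholders, and the graph-capture call (capture_pre_autograd_graph) is the entry point used by PT2E tutorials of this era; other releases may expose a different export API.

import torch
import torch.ao.quantization.quantizer.x86_inductor_quantizer as xiq
from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_qat_pt2e
from torch.ao.quantization.quantizer.x86_inductor_quantizer import X86InductorQuantizer

# Placeholder model and inputs for illustration only.
model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU())
example_inputs = (torch.randn(2, 16),)

# 1. Capture the model into a graph (capture API name varies by PyTorch release).
exported_model = capture_pre_autograd_graph(model, example_inputs)

# 2. Configure X86InductorQuantizer for QAT and insert fake-quantize modules.
quantizer = X86InductorQuantizer()
quantizer.set_global(xiq.get_default_x86_inductor_quantization_config(is_qat=True))
prepared_model = prepare_qat_pt2e(exported_model, quantizer)

# 3. Train the prepared model (stand-in loop with a dummy loss).
optimizer = torch.optim.SGD(prepared_model.parameters(), lr=0.01)
for _ in range(3):
    optimizer.zero_grad()
    loss = prepared_model(*example_inputs).sum()
    loss.backward()
    optimizer.step()

# 4. Convert to a quantized model, switch to eval, and lower into Inductor.
converted_model = convert_pt2e(prepared_model)
torch.ao.quantization.move_exported_model_to_eval(converted_model)
with torch.no_grad():
    optimized_model = torch.compile(converted_model)
    optimized_model(*example_inputs)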
