Commit 2ca1927

change the words
1 parent 2ead9c8 commit 2ca1927


prototype_source/pt2e_quant_x86_inductor.rst

Lines changed: 5 additions & 26 deletions
@@ -7,7 +7,7 @@ Prerequisites
 ^^^^^^^^^^^^^^^

 - `PyTorch 2 Export Post Training Quantization <https://pytorch.org/tutorials/prototype/pt2e_quant_ptq.html>`_
-- `PyTorch 2 Export Quantization-Aware Training tutorial <https://pytorch.org/tutorials/prototype/pt2e_quant_qat.html>`_
+- `PyTorch 2 Export Quantization-Aware Training <https://pytorch.org/tutorials/prototype/pt2e_quant_qat.html>`_
 - `TorchInductor and torch.compile concepts in PyTorch <https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html>`_
 - `Inductor C++ Wrapper concepts <https://pytorch.org/tutorials/prototype/inductor_cpp_wrapper_tutorial.html>`_
@@ -17,7 +17,7 @@ Introduction
 This tutorial introduces the steps for utilizing the PyTorch 2 Export Quantization flow to generate a quantized model customized
 for the x86 inductor backend and explains how to lower the quantized model into the inductor.

-The new quantization 2 flow uses the PT2 Export to capture the model into a graph and perform quantization transformations on top of the ATen graph.
+The pytorch 2 export quantization flow uses the torch.export to capture the model into a graph and perform quantization transformations on top of the ATen graph.
 This approach is expected to have significantly higher model coverage, better programmability, and a simplified UX.
 TorchInductor is the new compiler backend that compiles the FX Graphs generated by TorchDynamo into optimized C++/Triton kernels.
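For orientation, here is a minimal sketch (not part of this diff) of what "capturing the model into a graph" with torch.export looks like; the toy module ``M`` and its input shape are made up for illustration::

    import torch

    class M(torch.nn.Module):
        def forward(self, x):
            return torch.nn.functional.relu(x + 1.0)

    # torch.export captures the eager module into an ExportedProgram whose
    # FX graph holds ATen-level ops; this is the graph the quantizer rewrites.
    example_inputs = (torch.randn(2, 3),)
    exported = torch.export.export(M().eval(), example_inputs)
    print(exported.graph_module.graph)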
@@ -85,8 +85,6 @@ We will start by performing the necessary imports, capturing the FX Graph from t
 model = models.__dict__[model_name](pretrained=True)

 # Set the model to eval mode
-# Only apply it for post-training static quantization
-# Skip this step for quantization-aware training
 model = model.eval()

 # Create the data, using the dummy data here as an example
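For context around this hunk, a hedged sketch of the surrounding setup: build the torchvision model, create dummy inputs, and capture the graph to be quantized. The model name, input shape, and the ``torch.export.export_for_training`` call are assumptions, since the exact capture API depends on the PyTorch version::

    import torch
    import torchvision.models as models

    model_name = "resnet18"  # assumed example model
    model = models.__dict__[model_name](pretrained=True)

    # Set the model to eval mode (now done unconditionally, per the change above)
    model = model.eval()

    # Create the data, using dummy data here as an example; the shape is an assumption
    example_inputs = (torch.randn(16, 3, 224, 224),)

    # Capture the FX graph to be quantized (API name assumed for recent PyTorch)
    exported_model = torch.export.export_for_training(model, example_inputs).module()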
@@ -120,44 +118,26 @@ Next, we will have the FX Module to be quantized.
120118
After we capture the FX Module to be quantized, we will import the Backend Quantizer for X86 CPU and configure how to
121119
quantize the model.
122120

123-
For post-training static quantization:
124-
125121
::
126122

127123
quantizer = X86InductorQuantizer()
128124
quantizer.set_global(xiq.get_default_x86_inductor_quantization_config())
129125

130-
For quantization-aware training:
131-
132-
::
133-
134-
quantizer = X86InductorQuantizer()
135-
quantizer.set_global(xiq.get_default_x86_inductor_quantization_config(is_qat=True))
136-
137126
.. note::
138127

139128
The default quantization configuration in ``X86InductorQuantizer`` uses 8-bits for both activations and weights.
140129
When Vector Neural Network Instruction is not available, the oneDNN backend silently chooses kernels that assume
141130
`multiplications are 7-bit x 8-bit <https://oneapi-src.github.io/oneDNN/dev_guide_int8_computations.html#inputs-of-mixed-type-u8-and-s8>`_. In other words, potential
142131
numeric saturation and accuracy issue may happen when running on CPU without Vector Neural Network Instruction.
143132

144-
After we import the backend-specific Quantizer, we will prepare the model for post-training quantization or quantization-aware training.
145-
146-
For post-training static quantization, ``prepare_pt2e`` folds BatchNorm operators into preceding Conv2d operators, and inserts observers in appropriate places in the model.
133+
After we import the backend-specific Quantizer, we will prepare the model for post-training quantization.
134+
``prepare_pt2e`` folds BatchNorm operators into preceding Conv2d operators, and inserts observers in appropriate places in the model.
147135

148136
::
149137

150138
prepared_model = prepare_pt2e(exported_model, quantizer)
151139

152-
For quantization-aware training:
153-
154-
::
155-
156-
prepared_model = prepare_qat_pt2e(exported_model, quantizer)
157-
158-
159-
Now, we will do calibration for post-training static quantization or quantization-aware training. Here is the example code
160-
for post-training static quantization. The example code omits quantization-aware training for simplicity.
140+
Now, we will calibrate the ``prepared_model`` after the observers are inserted in the model.
161141

162142
::
163143

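The calibration code introduced by the trailing ``::`` above is cut off at the hunk boundary. As a hedged sketch, calibration for post-training static quantization just runs a few representative batches through ``prepared_model`` so the inserted observers can record value ranges; ``calibration_data_loader`` is a placeholder name::

    import torch

    def calibrate(model, data_loader):
        # No gradients needed; we only want the observers to see activations
        with torch.no_grad():
            for images, _ in data_loader:
                model(images)

    calibrate(prepared_model, calibration_data_loader)  # placeholder loader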
@@ -177,7 +157,6 @@ Finally, we will convert the calibrated Model to a quantized Model. ``convert_pt
 ::

 converted_model = convert_pt2e(prepared_model)
-torch.ao.quantization.move_exported_model_to_eval(converted_model)

 After these steps, we finished running the quantization flow and we will get the quantized model.
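After ``convert_pt2e``, the step that remains in the tutorial is lowering the quantized model into Inductor. A hedged sketch of that step under common settings; the ``freezing`` flag and the ``no_grad`` context reflect typical usage and are not part of this commit::

    import torch

    # Freezing lets Inductor constant-fold the quantized weights and fuse
    # quant/dequant patterns into oneDNN kernels (assumed configuration)
    torch._inductor.config.freezing = True

    with torch.no_grad():
        optimized_model = torch.compile(converted_model)
        # The first call triggers Inductor compilation of the quantized graph
        optimized_model(*example_inputs)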
