Commit fead948

Svetlana Karslioglu authored
Update pt2e_quant_ptq_static.rst
1 parent a8121c0 commit fead948

File tree

1 file changed: +3 -0 lines changed


prototype_source/pt2e_quant_ptq_static.rst

Lines changed: 3 additions & 0 deletions
@@ -302,6 +302,7 @@ For post training quantization, we'll need to set model to the eval mode.
 ``Quantizer`` is backend specific, and each ``Quantizer`` provides its own way for users to configure their model. Just as an example, here are the different configuration APIs supported by XNNPackQuantizer:
 
 .. code:: python
+
     quantizer.set_global(qconfig_opt)  # qconfig_opt is an optional qconfig, either a valid qconfig or None
         .set_object_type(torch.nn.Conv2d, qconfig_opt)  # can be a module type
         .set_object_type(torch.nn.functional.linear, qconfig_opt)  # or torch functional op
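For context on the hunk above, here is a minimal sketch of how a ``Quantizer`` is typically instantiated and configured in this tutorial's flow. The import path and the ``get_symmetric_quantization_config`` helper are assumptions drawn from the XNNPACKQuantizer API referenced in the doc, not part of this commit:

.. code:: python

    # assumed import path for the XNNPACK quantizer and its default config helper
    from torch.ao.quantization.quantizer.xnnpack_quantizer import (
        XNNPACKQuantizer,
        get_symmetric_quantization_config,
    )

    quantizer = XNNPACKQuantizer()
    # apply one symmetric quantization config to the whole model (global scope)
    quantizer.set_global(get_symmetric_quantization_config())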
@@ -396,6 +397,7 @@ quantization you are using to learn more about how you can have more control ove
 We'll show how to save and load the quantized model.
 
 .. code:: python
+
     # 1. Save state_dict
     pt2e_quantized_model_file_path = saved_model_dir + "resnet18_pt2e_quantized.pth"
     torch.save(quantized_model.state_dict(), pt2e_quantized_model_file_path)
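To complement the save step shown in the hunk above, a hedged sketch of the load side; it assumes the quantized model structure has already been re-created (export, ``prepare_pt2e``, ``convert_pt2e``, as in the surrounding tutorial) so that parameter names match the checkpoint:

.. code:: python

    # 2. Load the state_dict into a model with the same quantized structure;
    # re-run the export/prepare/convert steps from the tutorial first so the
    # parameter names line up with the saved checkpoint
    loaded_state_dict = torch.load(pt2e_quantized_model_file_path)
    quantized_model.load_state_dict(loaded_state_dict)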
@@ -434,6 +436,7 @@ We'll show how to save and load the quantized model.
 
 11. Debugging Quantized Model
 ----------------------------
+
 We have `Numeric Suite <https://pytorch.org/docs/stable/quantization-accuracy-debugging.html#numerical-debugging-tooling-prototype>`_ that can help with debugging in eager mode and FX graph mode. The new version of Numeric Suite working with PyTorch 2.0 Export models is still in development.
 
 12. Lowering and Performance Evaluation
