
Commit 69940ba

[quant][2.1] fix formatting for the tutorial
Summary: .

Test Plan: visual inspection of generated docs

Reviewers:
Subscribers:
Tasks:
Tags:
1 parent e8e93ba commit 69940ba

File tree

1 file changed: +4 -4 lines changed


prototype_source/pt2e_quant_ptq_static.rst

Lines changed: 4 additions & 4 deletions
@@ -37,13 +37,13 @@ this:
 |                    convert_pt2e                       |
 —--------------------------------------------------------
                             |
-                  Reference Quantized Model
+                      Quantized Model
                             |
 —--------------------------------------------------------
 |                       Lowering                        |
 —--------------------------------------------------------
                             |
-        Executorch, or Inductor, or <Other Backends>
+        Executorch, Inductor or <Other Backends>
 
 
 The PyTorch 2.0 export quantization API looks like this:
@@ -375,15 +375,15 @@ The following code snippets describes how to quantize the model:
       get_symmetric_quantization_config,
   )
   quantizer = XNNPACKQuantizer()
-  quantizer.set_globa(get_symmetric_quantization_config())
+  quantizer.set_global(get_symmetric_quantization_config())
 
 ``Quantizer`` is backend specific, and each ``Quantizer`` will provide their
 own way to allow users to configure their model. Just as an example, here is
 the different configuration APIs supported by ``XNNPackQuantizer``:
 
 .. code-block:: python
 
-  quantizer.set_global(qconfig_opt) # qconfig_opt is an optional qconfig, either a valid qconfig or None
+  quantizer.set_global(qconfig_opt) # qconfig_opt is an optional quantization config
   .set_object_type(torch.nn.Conv2d, qconfig_opt) # can be a module type
   .set_object_type(torch.nn.functional.linear, qconfig_opt) # or torch functional op
   .set_module_name("foo.bar", qconfig_opt)
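
For context, here is a minimal sketch of the PT2E post-training quantization flow that the diagram and the corrected set_global call above belong to. This sketch is not part of the commit: the model, example input, and one-batch calibration are placeholders, and the import paths follow the PyTorch 2.1-era tutorial, so they may differ in newer releases.

# Sketch only, not part of this commit: a minimal PT2E post-training
# quantization flow matching the diagram above. Import paths follow the
# PyTorch 2.1-era tutorial; the model, example input, and one-batch
# calibration are placeholders.
import torch
from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import prepare_pt2e, convert_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)

# Placeholder eager-mode model and example input.
model = torch.nn.Sequential(torch.nn.Linear(16, 8), torch.nn.ReLU()).eval()
example_inputs = (torch.randn(1, 16),)

# Export: capture an ATen FX graph of the model.
exported_model = capture_pre_autograd_graph(model, example_inputs)

# Configure the backend-specific quantizer (the typo fixed by this commit:
# set_global, not set_globa).
quantizer = XNNPACKQuantizer()
quantizer.set_global(get_symmetric_quantization_config())

# prepare_pt2e inserts observers; run sample data through to calibrate.
prepared_model = prepare_pt2e(exported_model, quantizer)
prepared_model(*example_inputs)  # stand-in for a real calibration loop

# convert_pt2e produces the quantized model, ready for lowering to
# Executorch, Inductor or <Other Backends>.
quantized_model = convert_pt2e(prepared_model)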
