1 parent 5a12c37 commit f8f1abd
prototype_source/pt2e_quant_ptq_static.rst
@@ -510,7 +510,7 @@ Now we can compare the size and model accuracy with baseline model.
 .. note::
    The weights are still in fp32 right now, we may do constant propagation for quantize op to
-   get integer weights in the future
+   get integer weights in the future.

 If you want to get better accuracy or performance, try configuring
 ``quantizer`` in different ways, and each ``quantizer`` will have its own way
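For context, a minimal sketch of what "configuring the ``quantizer`` in different ways" can look like, assuming the ``XNNPACKQuantizer`` used elsewhere in this tutorial; the per-channel flag below is one illustrative option and is not something this commit touches::

    # Sketch only: configure the quantizer used for PT2E static quantization.
    # Assumes the XNNPACKQuantizer API from torch.ao.quantization; consult the
    # quantizer's own documentation for the options it actually supports.
    from torch.ao.quantization.quantizer.xnnpack_quantizer import (
        XNNPACKQuantizer,
        get_symmetric_quantization_config,
    )

    quantizer = XNNPACKQuantizer()
    # Per-channel symmetric weight quantization is one knob that can improve
    # accuracy for conv/linear layers compared to the per-tensor default.
    quantizer.set_global(get_symmetric_quantization_config(is_per_channel=True))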