1 parent 5fdf751 commit 917cad5
advanced_source/dynamic_quantization_tutorial.py
@@ -13,7 +13,8 @@
 to int, which can result in smaller model size and faster inference with only a small
 hit to accuracy.
 
-In this tutorial, we'll apply the easiest form of quantization - _dynamic quantization_ -
+In this tutorial, we'll apply the easiest form of quantization -
+`dynamic quantization <https://pytorch.org/docs/stable/quantization.html#torch.quantization.quantize_dynamic>`_ -
 to an LSTM-based next word-prediction model, closely following the
 `word language model <https://github.com/pytorch/examples/tree/master/word_language_model>`_
 from the PyTorch examples.
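For context, the `quantize_dynamic` API linked in the new text is typically applied to a model like this. The sketch below is a minimal, hypothetical stand-in for the tutorial's LSTM word language model, not the tutorial's actual code:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the tutorial's LSTM-based word language model.
class WordLM(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim)
        self.decoder = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, hidden=None):
        emb = self.embed(tokens)
        out, hidden = self.lstm(emb, hidden)
        return self.decoder(out), hidden

model = WordLM()

# Dynamic quantization: weights of the listed module types are converted to int8,
# while activations are quantized on the fly at inference time.
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.LSTM, nn.Linear}, dtype=torch.qint8
)
print(quantized_model)
```

The set `{nn.LSTM, nn.Linear}` restricts quantization to the module types that benefit most in this model; other layers (such as the embedding) are left in floating point.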