
Commit 01d2270

Update dynamic_quantization.py (#3060)
1 parent be7f1b3 commit 01d2270


1 file changed: 1 addition & 1 deletion


recipes_source/recipes/dynamic_quantization.py

Lines changed: 1 addition & 1 deletion
@@ -162,7 +162,7 @@ def forward(self,inputs,hidden):
 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 #
 # Now we get to the fun part. First we create an instance of the model
-# called ``float\_lstm`` then we are going to quantize it. We're going to use
+# called ``float_lstm`` then we are going to quantize it. We're going to use
 # the `torch.quantization.quantize_dynamic <https://pytorch.org/docs/stable/quantization.html#torch.quantization.quantize_dynamic>`__ function, which takes the model, then a list of the submodules
 # which we want to
 # have quantized if they appear, then the datatype we are targeting. This
