
Commit ba0a7a1

Seth Weidman committed
Address Raghu's static quantization comments
1 parent ae93711 commit ba0a7a1

File tree

1 file changed: +5 -1 lines changed

advanced_source/static_quantization_tutorial.py

Lines changed: 5 additions & 1 deletion
@@ -371,7 +371,7 @@ def prepare_data_loaders(data_path):
 print('\n Inverted Residual Block: Before fusion \n\n', float_model.features[1].conv)
 float_model.eval()

-# Fusion is optional
+# Fuses modules
 float_model.fuse_model()

 # Note fusion of Conv+BN+Relu and Conv+Relu
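
For context on the hunk above: the tutorial's float_model.fuse_model() folds adjacent Conv+BN+ReLU (and Conv+ReLU) sequences into single fused modules before quantization. The snippet below is a minimal sketch of the same idea using torch.quantization.fuse_modules; the ToyBlock model and its module names are illustrative only and are not part of the tutorial.

import torch
import torch.nn as nn

class ToyBlock(nn.Module):
    """Illustrative Conv+BN+ReLU block (not from the tutorial)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.bn = nn.BatchNorm2d(8)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

model = ToyBlock()
model.eval()  # fusion for post-training quantization expects eval mode

# Fold conv, bn and relu into one module so the quantizer sees a single op.
fused = torch.quantization.fuse_modules(model, [['conv', 'bn', 'relu']])
print(fused.conv)  # ConvReLU2d, with the BatchNorm folded into the conv weights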
@@ -567,6 +567,10 @@ def train_one_epoch(model, criterion, optimizer, data_loader, device, ntrain_bat
 print('Epoch %d :Evaluation accuracy on %d images, %2.2f'%(nepoch, num_eval_batches * eval_batch_size, top1.avg))

 #####################################################################
+# Here, we just perform quantization-aware training for a small number of epochs. Nevertheless,
+# quantization-aware training yields an accuracy of over 71% on the entire imagenet dataset,
+# which is close to the floating point accuracy of 71.9%.
+#
 # More on quantization-aware training:
 #
 # - QAT is a super-set of post training quant techniques that allows for more debugging.
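
For context on the accuracy note added above: the tutorial's quantization-aware training flow sets a QAT qconfig, prepares the model with fake-quantization modules, fine-tunes for a few epochs, and then converts to an int8 model. The sketch below condenses that flow on a tiny stand-in model; TinyNet, the random tensors, and the training hyperparameters are illustrative assumptions, not the tutorial's MobileNetV2/ImageNet setup.

import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Illustrative stand-in for the tutorial's MobileNetV2."""
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.relu = nn.ReLU()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(8, 10)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)            # fp32 -> (fake-)quantized
        x = self.relu(self.conv(x))
        x = self.pool(x).flatten(1)
        x = self.fc(x)
        return self.dequant(x)       # back to fp32

model = TinyNet().train()
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
torch.quantization.prepare_qat(model, inplace=True)  # insert fake-quant/observers

# Brief fine-tuning with fake quantization in the loop; random data stands in
# for the tutorial's ImageNet loaders.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
for _ in range(2):
    inputs, targets = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
    optimizer.zero_grad()
    criterion(model(inputs), targets).backward()
    optimizer.step()

# Convert the fake-quantized model to a real int8 model for inference.
model.eval()
quantized = torch.quantization.convert(model)
print(quantized(torch.randn(1, 3, 32, 32)).shape)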
