Adding 2 new quant tutorials #757


Merged
merged 35 commits into from Dec 7, 2019

Conversation

Contributor

@jlin27 jlin27 commented Dec 6, 2019

jlin27 and others added 24 commits December 3, 2019 16:36
Update Dynamic Quant for BERT tutorial
Update reference to bert_mrpc image saved in _static/img/bert_mrpc.png:

https://github.com/pytorch/tutorials/blob/master/_static/img/bert_mrpc.png
Fix formatting and clean up tutorial on quantized transfer learning
Minor changes in the quantized transfer learning tutorial
Update Dynamic Quant BERT Tutorial 2
Update Dynamic Quant BERT Tutorial 3
@netlify netlify bot commented Dec 6, 2019

Deploy preview for pytorch-tutorials-preview ready!

Built with commit 22f7fa4

https://deploy-preview-757--pytorch-tutorials-preview.netlify.com

Comment on lines +449 to +460
# notice `quantize=False`
model = models.resnet18(pretrained=True, progress=True, quantize=False)
num_ftrs = model.fc.in_features

# Step 1
model.train()
model.fuse_model()
# Step 2
model_ft = create_combined_model(model)
model_ft[0].qconfig = torch.quantization.default_qat_qconfig # Use default QAT configuration
# Step 3
model_ft = torch.quantization.prepare_qat(model_ft, inplace=True)
Contributor

@z-a-f z-a-f Dec 6, 2019


.. code:: python

    # notice `quantize=False`
    model = models.resnet18(pretrained=True, progress=True, quantize=False)
    num_ftrs = model.fc.in_features

    # Step 1
    model.train()
    model.fuse_model()
    # Step 2
    model_ft = create_combined_model(model)
    model_ft[0].qconfig = torch.quantization.default_qat_qconfig  # Use default QAT configuration
    # Step 3
    model_ft = torch.quantization.prepare_qat(model_ft, inplace=True)

the results shown in this tutorial.


# notice `quantize=False`


This part, from lines 449-451, needs to be replaced with actual code:

# Step 1
model.train()
model.fuse_model()
# Step 2
model_ft = create_combined_model(model)
model_ft[0].qconfig = torch.quantization.default_qat_qconfig  # Use default QAT configuration
# Step 3
model_ft = torch.quantization.prepare_qat(model_ft, inplace=True)

inserts fake-quantization modules.
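For context, a minimal sketch of the module swap that prepare_qat performs, assuming PyTorch's eager-mode quantization API used in this tutorial (the exact replacement class is version-dependent):

    import torch

    m = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU())
    m.train()  # QAT preparation expects a model in training mode
    m.qconfig = torch.quantization.default_qat_qconfig
    torch.quantization.prepare_qat(m, inplace=True)

    # The Conv2d is now a QAT variant carrying fake-quantization
    # observers for its weights and activations.
    print(type(m[0]))  # e.g. torch.nn.qat.Conv2d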


As step (4), you can start "finetuning" the model, and after that convert


Minor comment, suggest replacing with:
Once steps 1-3 are done, we can start fine-tuning the model (step 4), followed by converting it into a quantized model (step 5).
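For illustration, a minimal sketch of steps 4 and 5 under this setup; train_loader, criterion, and optimizer are hypothetical placeholders, not names from the PR:

    import torch

    # Step 4: fine-tune the QAT-prepared model (placeholder training loop)
    model_ft.train()
    for inputs, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model_ft(inputs), labels)
        loss.backward()
        optimizer.step()

    # Step 5: convert the fine-tuned model into a quantized model
    model_ft.eval()
    model_quantized = torch.quantization.convert(model_ft, inplace=False)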

python download_glue_data.py --data_dir='glue_data' --tasks='MRPC'


1.4 Learn about helper functions

@raghuramank100 raghuramank100 Dec 6, 2019


Suggest merging this section (1.4) with 1.2, since here we are explaining what we do in 1.2; the suggested change is shown above.

logging.WARN) # Reduce logging

print(torch.__version__)


The helper functions are built into the transformers library. We mainly use
two of them: one converts the text examples into feature vectors; the other
measures the F1 score of the predicted results.

The `glue_convert_examples_to_features <https://github.com/huggingface/transformers/blob/master/transformers/data/processors/glue.py>`_ function converts the texts into input features:

  • Tokenize the input sequences;
  • Insert [CLS] at the beginning;
  • Insert [SEP] between the first and second sentences, and at the end;
  • Generate token type ids to indicate whether a token belongs to the
    first sequence or the second sequence.
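For illustration, a minimal usage sketch, assuming the transformers v2.x API the tutorial targets; the data path follows the download_glue_data.py layout shown above:

    from transformers import BertTokenizer, glue_convert_examples_to_features
    from transformers.data.processors.glue import MrpcProcessor

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    processor = MrpcProcessor()
    examples = processor.get_dev_examples("glue_data/MRPC")

    # Tokenizes each sentence pair, inserts [CLS] and [SEP], and generates
    # token type ids distinguishing the first and second sequences.
    features = glue_convert_examples_to_features(
        examples, tokenizer, max_length=128, task="mrpc"
    )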

The glue_compute_metrics function computes the `F1 score <https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html>`_,
which can be interpreted as a weighted average of precision and recall,
where an F1 score reaches its best value at 1 and its worst at 0. The
relative contributions of precision and recall to the F1 score are equal.

  • The equation for the F1 score is:

    .. math:: F1 = 2 * (\text{precision} * \text{recall}) / (\text{precision} + \text{recall})
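For reference, a small worked example of the F1 computation (the label arrays are made up for illustration):

    from sklearn.metrics import f1_score

    y_true = [0, 1, 1, 0, 1, 1]
    y_pred = [0, 1, 0, 0, 1, 1]

    # TP = 3, FP = 0, FN = 1
    # precision = 3/3 = 1.0, recall = 3/4 = 0.75
    # F1 = 2 * (1.0 * 0.75) / (1.0 + 0.75) ≈ 0.857
    print(f1_score(y_true, y_pred))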


@jianyuh : Does this look ok?


@raghuramank100 raghuramank100 left a comment


Almost there! A few comments.

@jlin27 jlin27 merged commit 6252688 into master Dec 7, 2019
@jlin27 jlin27 deleted the jlin27-quant-tutorials branch June 10, 2020 20:27
rodrigo-techera pushed a commit to Experience-Monks/tutorials that referenced this pull request Nov 29, 2021