Commit a5e67d3

Update Dynamic Quant BERT Tutorial 4
1 parent b768571 commit a5e67d3

File tree: 1 file changed (+19, -18)


intermediate_source/dynamic_quantization_bert_tutorial.rst

Lines changed: 19 additions & 18 deletions
@@ -1,5 +1,5 @@
-(experimental) Dynamic Quantization on HuggingFace BERT model
-==============================================================
+(experimental) Dynamic Quantization on BERT
+===========================================
 
 **Author**: `Jianyu Huang <https://github.com/jianyuh>`_
 
@@ -128,21 +128,7 @@ In the end of the tutorial, the user can set other number of threads by building
     print(torch.__config__.parallel_info())
 
 
-1.3 Download the dataset
-^^^^^^^^^^^^^^^^^^^^^^^^
-
-Before running MRPC tasks we download the `GLUE data
-<https://gluebenchmark.com/tasks>`_ by running `this script
-<https://gist.github.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e>`_
-and unpack it to a directory ``glue_data``.
-
-
-.. code:: shell
-
-   python download_glue_data.py --data_dir='glue_data' --tasks='MRPC'
-
-
-1.4 Learn about helper functions
+1.3 Learn about helper functions
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 The helper functions are built into the transformers library. We mainly use
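
As a quick illustration of the thread-count machinery that this hunk's header
refers to, here is a minimal sketch assuming stock PyTorch APIs (an editorial
example with a hypothetical single-thread setting, not part of the tutorial
source):

.. code:: python

    import torch

    # Pin intra-op parallelism to one thread so single-threaded
    # quantization benchmark runs are comparable.
    torch.set_num_threads(1)
    print(torch.get_num_threads())           # -> 1
    print(torch.__config__.parallel_info())  # threading backends in this build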
@@ -159,7 +145,8 @@ The `glue_convert_examples_to_features <https://github.com/huggingface/transform
 - Generate token type ids to indicate whether a token belongs to the
   first sequence or the second sequence.
 
-The `F1 score <https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html>`_
+The `glue_compute_metrics <https://github.com/huggingface/transformers/blob/master/transformers/data/processors/glue.py>`_ function computes the metrics, including
+the `F1 score <https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html>`_, which
 can be interpreted as a weighted average of the precision and recall,
 where an F1 score reaches its best value at 1 and worst score at 0. The
 relative contribution of precision and recall to the F1 score is equal.
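
The F1 score described here can be sanity-checked directly against
scikit-learn; a minimal sketch with hypothetical binary labels (an editorial
example, not part of the tutorial source):

.. code:: python

    from sklearn.metrics import f1_score, precision_score, recall_score

    # Hypothetical binary labels for an MRPC-style paraphrase task.
    y_true = [0, 1, 1, 0, 1, 1]
    y_pred = [0, 1, 0, 0, 1, 1]

    precision = precision_score(y_true, y_pred)  # 1.0
    recall = recall_score(y_true, y_pred)        # 0.75

    # F1 = 2 * (precision * recall) / (precision + recall)
    f1 = 2 * (precision * recall) / (precision + recall)
    assert abs(f1 - f1_score(y_true, y_pred)) < 1e-9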
@@ -168,6 +155,20 @@ relative contribution of precision and recall to the F1 score is equal.
 .. math:: F1 = 2 * (\text{precision} * \text{recall}) / (\text{precision} + \text{recall})
 
 
+1.4 Download the dataset
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Before running MRPC tasks we download the `GLUE data
+<https://gluebenchmark.com/tasks>`_ by running `this script
+<https://gist.github.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e>`_
+and unpack it to a directory ``glue_data``.
+
+
+.. code:: shell
+
+   python download_glue_data.py --data_dir='glue_data' --tasks='MRPC'
+
+
 2. Fine-tune the BERT model
 ---------------------------
 
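For the renumbered "1.3 Learn about helper functions" section, a minimal
sketch of the feature-conversion helper named in this diff, assuming the
transformers 2.x API the tutorial targets and a ``glue_data/MRPC`` directory
produced by the download step (an editorial example; exact signatures may
differ in later releases):

.. code:: python

    from transformers import BertTokenizer, glue_convert_examples_to_features
    from transformers import glue_processors

    # Assumes glue_data/MRPC exists (see download_glue_data.py above).
    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    processor = glue_processors["mrpc"]()
    examples = processor.get_dev_examples("glue_data/MRPC")

    # Tokenize sentence pairs into input ids, attention masks, and the
    # token type ids that mark first vs. second sequence.
    features = glue_convert_examples_to_features(
        examples, tokenizer, max_length=128, task="mrpc"
    )
    print(features[0].input_ids[:10])
    print(features[0].token_type_ids[:10])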