Commit 0903126

Fix formatting issues with BERT tutorial to ensure hyperlinks are rendered correctly.
1 parent a800c77 commit 0903126

File tree: 1 file changed (+23 −27 lines)


intermediate_source/dynamic_quantization_bert_tutorial.py

Lines changed: 23 additions & 27 deletions
@@ -35,22 +35,20 @@
 # are quantized dynamically (per batch) to int8 when the weights are
 # quantized to int8.
 #
-# In PyTorch, we have ``torch.quantization.quantize_dynamic`` API support
-# (https://pytorch.org/docs/stable/quantization.html#torch.quantization.quantize_dynamic),
-# which replaces specified modules with dynamic weight-only quantized
+# In PyTorch, we have `torch.quantization.quantize_dynamic API
+# <https://pytorch.org/docs/stable/quantization.html#torch.quantization.quantize_dynamic>`_
+# ,which replaces specified modules with dynamic weight-only quantized
 # versions and output the quantized model.
 #
 # - We demonstrate the accuracy and inference performance results on the
-# Microsoft Research Paraphrase Corpus (MRPC) task
-# (https://www.microsoft.com/en-us/download/details.aspx?id=52398) in
-# the General Language Understanding Evaluation benchmark (GLUE)
-# (https://gluebenchmark.com/). The MRPC (Dolan and Brockett, 2005) is
+# `Microsoft Research Paraphrase Corpus (MRPC) task <https://www.microsoft.com/en-us/download/details.aspx?id=52398>`_
+# in the General Language Understanding Evaluation benchmark `(GLUE)
+# <https://gluebenchmark.com/>`_. The MRPC (Dolan and Brockett, 2005) is
 # a corpus of sentence pairs automatically extracted from online news
 # sources, with human annotations of whether the sentences in the pair
 # are semantically equivalent. Because the classes are imbalanced (68%
 # positive, 32% negative), we follow common practice and report both
-# accuracy and F1 score
-# (https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html).
+# accuracy and `F1 score <https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html>`_
 # MRPC is a common NLP task for language pair classification, as shown
 # below.
 #
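The pattern this commit applies throughout is reStructuredText's inline external-hyperlink form, `link text <URL>`_, in place of bare parenthesized URLs. As a rough illustration (the regex and helper name below are my own, not part of the commit, and are not a full RST parser):

```python
import re

# RST inline external hyperlink: `link text <url>`_
# Illustrative pattern only; real RST allows more variation.
RST_LINK = re.compile(r"`[^`<>]+<[^>]+>`_")

def uses_rst_link(line: str) -> bool:
    """Return True if the line contains an RST-style inline hyperlink."""
    return bool(RST_LINK.search(line))

# Old style (bare URL in parentheses) vs. new style (RST hyperlink):
old = "# accuracy and F1 score (https://scikit-learn.org/stable/f1.html)"
new = "# accuracy and `F1 score <https://scikit-learn.org/stable/f1.html>`_"

print(uses_rst_link(old))  # False
print(uses_rst_link(new))  # True
```

Sphinx renders the new form as a clickable link, which is what the commit message means by "hyperlinks are rendered correctly".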
@@ -78,8 +76,10 @@
 #
 # To start this tutorial, let’s first follow the installation instructions
 # in PyTorch and HuggingFace Github Repo: -
-# https://github.com/pytorch/pytorch/#installation -
-# https://github.com/huggingface/transformers#installation
+#
+# * https://github.com/pytorch/pytorch/#installation -
+#
+# * https://github.com/huggingface/transformers#installation
 #
 # In addition, we also install ``sklearn`` package, as we will reuse its
 # built-in F1 score calculation helper function.
@@ -93,8 +93,8 @@
 ######################################################################
 # Because we will be using the experimental parts of the PyTorch, it is
 # recommended to install the latest version of torch and torchvision. You
-# can find the most recent instructions on local installation here
-# https://pytorch.org/get-started/locally/. For example, to install on
+# can find the most recent instructions on local installation `here
+# <https://pytorch.org/get-started/locally/>`_. For example, to install on
 # Mac:
 #
 # .. code:: shell
@@ -149,10 +149,10 @@
 # Download the dataset
 # --------------------
 #
-# Before running MRPC tasks we download the GLUE data
-# (https://gluebenchmark.com/tasks) by running this script
-# (https://gist.github.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e,
-# https://github.com/nyu-mll/GLUE-baselines/blob/master/download_glue_data.py)
+# Before running MRPC tasks we download the `GLUE data
+# <https://gluebenchmark.com/tasks>`_ by running this `script
+# <https://gist.github.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e>`_ followed by
+# `download_glue_data <https://github.com/nyu-mll/GLUE-baselines/blob/master/download_glue_data.py>`_.
 # and unpack it to some directory “glue_data/MRPC”.
 #

@@ -176,8 +176,7 @@
 # Convert the texts into features
 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 #
-# glue_convert_examples_to_features (
-# https://github.com/huggingface/transformers/blob/master/transformers/data/processors/glue.py)
+# `glue_convert_examples_to_features <https://github.com/huggingface/transformers/blob/master/transformers/data/processors/glue.py>`_.
 # load a data file into a list of ``InputFeatures``.
 #
 # - Tokenize the input sequences;
@@ -190,8 +189,7 @@
 # F1 metric
 # ~~~~~~~~~
 #
-# The F1 score
-# (https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html)
+# The `F1 score <https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html>`_
 # can be interpreted as a weighted average of the precision and recall,
 # where an F1 score reaches its best value at 1 and worst score at 0. The
 # relative contribution of precision and recall to the F1 score are equal.
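Since precision and recall contribute equally, the F1 score referenced in this hunk is their harmonic mean, F1 = 2·P·R / (P + R). A minimal standalone helper (illustrative only; the tutorial itself uses `sklearn.metrics.f1_score`):

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall: best value 1.0, worst 0.0."""
    if precision + recall == 0.0:
        return 0.0  # avoid division by zero when both are 0
    return 2.0 * precision * recall / (precision + recall)

print(f1(1.0, 1.0))  # 1.0
print(f1(0.5, 1.0))  # 0.666...
```

The harmonic mean penalizes imbalance between precision and recall, which is why F1 is preferred over accuracy alone on the imbalanced MRPC classes.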
@@ -217,7 +215,7 @@
 #
 # To fine-tune the pre-trained BERT model (“bert-base-uncased” model in
 # HuggingFace transformers) for the MRPC task, you can follow the command
-# in (https://github.com/huggingface/transformers/tree/master/examples):
+# in `examples<https://github.com/huggingface/transformers/tree/master/examples>`_"
 #
 # ::
 #
@@ -333,10 +331,8 @@ def set_seed(seed):
 # Define the tokenize and evaluation function
 # -------------------------------------------
 #
-# We reuse the tokenize and evaluation function from
-# https://github.com/huggingface/transformers/blob/master/examples/run_glue.py.
+# We reuse the tokenize and evaluation function from `huggingface <https://github.com/huggingface/transformers/blob/master/examples/run_glue.py>`_.
 #
-
 # coding=utf-8
 # Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.
 # Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
@@ -598,8 +594,8 @@ def time_model_evaluation(model, configs, tokenizer):
 # set multi-thread by ``torch.set_num_threads(N)`` (``N`` is the number of
 # intra-op parallelization threads). One preliminary requirement to enable
 # the intra-op parallelization support is to build PyTorch with the right
-# backend such as OpenMP, Native, or TBB
-# (https://pytorch.org/docs/stable/notes/cpu_threading_torchscript_inference.html#build-options).
+# `backend <https://pytorch.org/docs/stable/notes/cpu_threading_torchscript_inference.html#build-options>`_
+# such as OpenMP, Native or TBB.
 # You can use ``torch.__config__.parallel_info()`` to check the
 # parallelization settings. On the same MacBook Pro using PyTorch with
 # Native backend for parallelization, we can get about 46 seconds for
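Intra-op parallelization, as discussed in this hunk, means a single operator's work is split across N threads by the backend (OpenMP, Native, or TBB); `torch.set_num_threads(N)` is the real control knob. A torch-free sketch of the idea using only the standard library (names and structure are my own illustration, not PyTorch internals):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(values, num_threads=4):
    """Split one reduction across worker threads, mimicking intra-op parallelism."""
    chunk = max(1, len(values) // num_threads)
    # Partition the input, reduce each chunk on its own thread, then combine.
    chunks = [values[i:i + chunk] for i in range(0, len(values), chunk)]
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        partials = list(pool.map(sum, chunks))
    return sum(partials)

print(parallel_sum(list(range(1000))))  # 499500
```

In PyTorch the same split happens inside one kernel call, which is why the backend must be compiled in at build time rather than chosen per-call.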
