
Commit ab73472

Author: Jessica Lin
Bring headers down one level
1 parent f34c8d4 commit ab73472


intermediate_source/dynamic_quantization_bert_tutorial.py

Lines changed: 7 additions & 9 deletions
@@ -13,7 +13,7 @@
 
 ######################################################################
 # Introduction
-# ============
+# ------------
 #
 # In this tutorial, we will apply the dynamic quantization on a BERT
 # model, closely following the BERT model from the HuggingFace
@@ -71,10 +71,10 @@
 
 ######################################################################
 # Setup
-# =====
+# -------
 #
 # Install PyTorch and HuggingFace Transformers
-# --------------------------------------------
+# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 #
 # To start this tutorial, let’s first follow the installation instructions
 # in PyTorch and HuggingFace Github Repo: -
@@ -203,7 +203,7 @@
 
 ######################################################################
 # Fine-tune the BERT model
-# ========================
+# --------------------------
 #
 
 
@@ -476,7 +476,7 @@ def load_and_cache_examples(args, task, tokenizer, evaluate=False):
 
 ######################################################################
 # Apply the dynamic quantization
-# ==============================
+# -------------------------------
 #
 # We call ``torch.quantization.quantize_dynamic`` on the model to apply
 # the dynamic quantization on the HuggingFace BERT model. Specifically,
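
For context, the section touched by this hunk calls torch.quantization.quantize_dynamic on the fine-tuned model. A minimal sketch of that call, assuming the arguments typically used in this tutorial (quantizing the torch.nn.Linear modules to qint8); the checkpoint name below is illustrative and not taken from the diff:

import torch
from transformers import BertForSequenceClassification

# Load a BERT model to quantize (checkpoint name is illustrative).
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
model.eval()

# Dynamic quantization: weights of the listed module types are stored as
# int8, and activations are quantized on the fly during inference.
# The {torch.nn.Linear} set and torch.qint8 dtype are assumptions; the
# diff context above does not show the actual arguments.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

Because only the weights are quantized ahead of time and activations are handled dynamically at inference, no calibration pass is needed, which makes this a drop-in step for a model like BERT.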
@@ -626,7 +626,7 @@ def time_model_evaluation(model, configs, tokenizer):
 
 ######################################################################
 # Conclusion
-# ==========
+# ----------
 #
 # In this tutorial, we demonstrated how to demonstrate how to convert a
 # well-known state-of-the-art NLP model like BERT into dynamic quantized
@@ -641,7 +641,7 @@ def time_model_evaluation(model, configs, tokenizer):
 
 ######################################################################
 # References
-# ==========
+# -----------
 #
 # [1] J.Devlin, M. Chang, K. Lee and K. Toutanova, BERT: Pre-training of
 # Deep Bidirectional Transformers for Language Understanding (2018)
@@ -657,5 +657,3 @@ def time_model_evaluation(model, configs, tokenizer):
 ######################################################################
 #
 #
-
-