Commit a5d85ed

tstatler authored and svekars committed

Fixed Rst formatting, minor text changes (#3029)

* Fixed Rst formatting, minor text changes
* Removed duplicate sentence about CUDA hardware that is already mentioned in the intro text. Minor text change.

Co-authored-by: Svetlana Karslioglu <svekars@meta.com>

1 parent deb89ba commit a5d85ed

File tree

1 file changed (+4, -6 lines changed)


prototype_source/gpu_quantization_torchao_tutorial.py

Lines changed: 4 additions & 6 deletions
@@ -35,14 +35,12 @@
 #
 # Segment Anything Model checkpoint setup:
 #
-# 1. Go to the `segment-anything repo <checkpoint https://github.com/facebookresearch/segment-anything/tree/main#model-checkpoints>`_ and download the ``vit_h`` checkpoint. Alternatively, you can just use ``wget``: `wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth --directory-prefix=<path>
+# 1. Go to the `segment-anything repo checkpoint <https://github.com/facebookresearch/segment-anything/tree/main#model-checkpoints>`_ and download the ``vit_h`` checkpoint. Alternatively, you can use ``wget`` (for example, ``wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth --directory-prefix=<path>``).
 # 2. Pass in that directory by editing the code below to say:
 #
-# .. code-block::
-#
-# {sam_checkpoint_base_path}=<path>
+# .. code-block:: bash
 #
-# This was run on an A100-PG509-200 power limited to 330.00 W
+# {sam_checkpoint_base_path}=<path>
 #
 
 import torch
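The checkpoint-setup step this hunk documents (download the ``vit_h`` checkpoint, then point ``sam_checkpoint_base_path`` at its directory) can be sketched in Python as follows. The ``ensure_checkpoint`` helper and the ``/tmp/sam`` directory are illustrative assumptions, not part of the tutorial; only the download URL comes from the diff above.

```python
import os
import urllib.request

# URL taken from the tutorial's wget example; the vit_h checkpoint is large.
CHECKPOINT_URL = "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth"


def ensure_checkpoint(sam_checkpoint_base_path, download=False):
    """Return the expected checkpoint path, optionally downloading the file.

    Hypothetical helper for illustration; the tutorial itself just expects
    the file to already exist under sam_checkpoint_base_path.
    """
    path = os.path.join(sam_checkpoint_base_path, os.path.basename(CHECKPOINT_URL))
    if download and not os.path.exists(path):
        os.makedirs(sam_checkpoint_base_path, exist_ok=True)
        urllib.request.urlretrieve(CHECKPOINT_URL, path)
    return path


print(ensure_checkpoint("/tmp/sam"))  # /tmp/sam/sam_vit_h_4b8939.pth
```

With `download=False` (the default) the helper only computes the path, mirroring the tutorial's assumption that the file was fetched beforehand with ``wget``.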
@@ -297,7 +295,7 @@ def get_sam_model(only_one_block=False, batchsize=1):
 # -----------------
 # In this tutorial, we have learned about the quantization and optimization techniques
 # on the example of the segment anything model.
-
+#
 # In the end, we achieved a full-model apples to apples quantization speedup
 # of about 7.7% on batch size 16 (677.28ms to 729.65ms). We can push this a
 # bit further by increasing the batch size and optimizing other parts of
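The ~7.7% figure quoted in this hunk can be checked arithmetically. A minimal sketch, assuming 729.65 ms is the unquantized time and 677.28 ms the quantized time, with the speedup expressed relative to the quantized run:

```python
# Times (ms) quoted in the tutorial's conclusion for batch size 16.
baseline_ms = 729.65   # assumed: unquantized full model
quantized_ms = 677.28  # assumed: quantized full model

# Speedup as a percentage, relative to the quantized time.
speedup_pct = (baseline_ms - quantized_ms) / quantized_ms * 100
print(f"{speedup_pct:.1f}%")  # 7.7%
```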

0 commit comments