
Convert :: to code-block directive #2737

Merged
4 commits merged on Jan 19, 2024
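This PR replaces bare "::" literal blocks in the tutorials' reStructuredText comments with explicit ".. code-block::" directives that name a language, so Sphinx applies syntax highlighting in the rendered tutorials. A minimal before/after sketch of the pattern, written in the sphinx-gallery comment style these tutorials use (a hypothetical snippet mirroring one of the hunks in advanced_source/neural_style_tutorial.py below):

# Before: a bare literal block, rendered without syntax highlighting.
#
# ::
#
#    input_img = torch.randn(content_img.data.size())

# After: an explicit directive naming the language, so the rendered
# tutorial gets Python highlighting.
#
# .. code-block:: python
#
#    input_img = torch.randn(content_img.data.size())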
6 changes: 3 additions & 3 deletions advanced_source/neural_style_tutorial.py
@@ -87,7 +87,7 @@
# to 255 tensor images.
#
#
# .. Note::
# .. note::
# Here are links to download the images required to run the tutorial:
# `picasso.jpg <https://pytorch.org/tutorials/_static/img/neural-style/picasso.jpg>`__ and
# `dancing.jpg <https://pytorch.org/tutorials/_static/img/neural-style/dancing.jpg>`__.
@@ -183,7 +183,7 @@ def forward(self, input):
return input

######################################################################
# .. Note::
# .. note::
# **Important detail**: although this module is named ``ContentLoss``, it
# is not a true PyTorch Loss function. If you want to define your content
# loss as a PyTorch Loss function, you have to create a PyTorch autograd function
@@ -372,7 +372,7 @@ def get_style_model_and_losses(cnn, normalization_mean, normalization_std,
input_img = content_img.clone()
# if you want to use white noise by using the following code:
#
# ::
# .. code-block:: python
#
# input_img = torch.randn(content_img.data.size())

4 changes: 2 additions & 2 deletions beginner_source/blitz/neural_networks_tutorial.py
@@ -161,7 +161,7 @@ def forward(self, x):
# ``.grad_fn`` attribute, you will see a graph of computations that looks
# like this:
#
# ::
# .. code-block:: sh
#
# input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d
# -> flatten -> linear -> relu -> linear -> relu -> linear
@@ -253,7 +253,7 @@ def forward(self, x):


###############################################################
# .. Note::
# .. note::
#
# Observe how gradient buffers had to be manually set to zero using
# ``optimizer.zero_grad()``. This is because gradients are accumulated
14 changes: 9 additions & 5 deletions beginner_source/data_loading_tutorial.py
@@ -50,9 +50,9 @@
# estimation <https://blog.dlib.net/2014/08/real-time-face-pose-estimation.html>`__
# on a few images from imagenet tagged as 'face'.
#
# Dataset comes with a csv file with annotations which looks like this:
# Dataset comes with a ``.csv`` file with annotations which looks like this:
#
# ::
# .. code-block:: sh
#
# image_name,part_0_x,part_0_y,part_1_x,part_1_y,part_2_x, ... ,part_67_x,part_67_y
# 0805personali01.jpg,27,83,27,98, ... 84,134
@@ -196,7 +196,7 @@ def __getitem__(self, idx):
# called. For this, we just need to implement ``__call__`` method and
# if required, ``__init__`` method. We can then use a transform like this:
#
# ::
# .. code-block:: python
#
# tsfm = Transform(params)
# transformed_sample = tsfm(sample)
@@ -421,7 +421,9 @@ def show_landmarks_batch(sample_batched):
# and dataloader. ``torchvision`` package provides some common datasets and
# transforms. You might not even have to write custom classes. One of the
# more generic datasets available in torchvision is ``ImageFolder``.
# It assumes that images are organized in the following way: ::
# It assumes that images are organized in the following way:
#
# .. code-block:: sh
#
# root/ants/xxx.png
# root/ants/xxy.jpeg
@@ -435,7 +437,9 @@ def show_landmarks_batch(sample_batched):
#
# where 'ants', 'bees' etc. are class labels. Similarly generic transforms
# which operate on ``PIL.Image`` like ``RandomHorizontalFlip``, ``Scale``,
# are also available. You can use these to write a dataloader like this: ::
# are also available. You can use these to write a dataloader like this:
#
# .. code-block:: python
#
# import torch
# from torchvision import transforms, datasets
2 changes: 1 addition & 1 deletion beginner_source/dcgan_faces_tutorial.py
@@ -226,7 +226,7 @@
# the ``celeba`` directory you just created. The resulting directory
# structure should be:
#
# ::
# .. code-block:: sh
#
# /path/to/celeba
# -> img_align_celeba
2 changes: 1 addition & 1 deletion beginner_source/hyperparameter_tuning_tutorial.py
@@ -462,7 +462,7 @@ def main(num_samples=10, max_num_epochs=10, gpus_per_trial=2):
######################################################################
# If you run the code, an example output could look like this:
#
# ::
# .. code-block:: sh
#
# Number of trials: 10/10 (10 TERMINATED)
# +-----+--------------+------+------+-------------+--------+---------+------------+
8 changes: 4 additions & 4 deletions beginner_source/introyt/autogradyt_tutorial.py
@@ -213,7 +213,7 @@
#########################################################################
# Recall the computation steps we took to get here:
#
# ::
# .. code-block:: python
#
# a = torch.linspace(0., 2. * math.pi, steps=25, requires_grad=True)
# b = torch.sin(a)
@@ -456,10 +456,10 @@ def add_tensors2(x, y):
# .. note::
# The following code cell throws a runtime error. This is expected.
#
# ::
# .. code-block:: python
#
# a = torch.linspace(0., 2. * math.pi, steps=25, requires_grad=True)
# torch.sin_(a)
# a = torch.linspace(0., 2. * math.pi, steps=25, requires_grad=True)
# torch.sin_(a)
#

#########################################################################
8 changes: 6 additions & 2 deletions beginner_source/introyt/captumyt.py
@@ -109,11 +109,15 @@
To install Captum in an Anaconda or pip virtual environment, use the
appropriate command for your environment below:

With ``conda``::
With ``conda``:

.. code-block:: sh

conda install pytorch torchvision captum flask-compress matplotlib=3.3.4 -c pytorch

With ``pip``::
With ``pip``:

.. code-block:: sh

pip install torch torchvision captum matplotlib==3.3.4 Flask-Compress

2 changes: 1 addition & 1 deletion beginner_source/introyt/introyt1_tutorial.py
@@ -580,7 +580,7 @@ def forward(self, x):
#
# **When you run the cell above,** you should see something like this:
#
# ::
# .. code-block:: sh
#
# [1, 2000] loss: 2.235
# [1, 4000] loss: 1.940
8 changes: 6 additions & 2 deletions beginner_source/introyt/tensorboardyt_tutorial.py
@@ -24,12 +24,16 @@
To run this tutorial, you’ll need to install PyTorch, TorchVision,
Matplotlib, and TensorBoard.

With ``conda``::
With ``conda``:

.. code-block:: sh

conda install pytorch torchvision -c pytorch
conda install matplotlib tensorboard

With ``pip``::
With ``pip``:

.. code-block:: sh

pip install torch torchvision matplotlib tensorboard

28 changes: 14 additions & 14 deletions beginner_source/introyt/tensors_deeper_tutorial.py
@@ -292,14 +292,14 @@
# binary operation on tensors if dissimilar shape?
#
# .. note::
# The following cell throws a run-time error. This is intentional.
# The following cell throws a run-time error. This is intentional.
#
# ::
# .. code-block:: sh
#
# a = torch.rand(2, 3)
# b = torch.rand(3, 2)
# a = torch.rand(2, 3)
# b = torch.rand(3, 2)
#
# print(a * b)
# print(a * b)
#


@@ -390,17 +390,17 @@
# Here are some examples of attempts at broadcasting that will fail:
#
# .. note::
# The following cell throws a run-time error. This is intentional.
# The following cell throws a run-time error. This is intentional.
#
# ::
# .. code-block:: python
#
# a = torch.ones(4, 3, 2)
# a = torch.ones(4, 3, 2)
#
# b = a * torch.rand(4, 3) # dimensions must match last-to-first
# b = a * torch.rand(4, 3) # dimensions must match last-to-first
#
# c = a * torch.rand( 2, 3) # both 3rd & 2nd dims different
# c = a * torch.rand( 2, 3) # both 3rd & 2nd dims different
#
# d = a * torch.rand((0, )) # can't broadcast with an empty tensor
# d = a * torch.rand((0, )) # can't broadcast with an empty tensor
#


@@ -729,7 +729,7 @@
# following code will throw a runtime error, regardless of whether you
# have a GPU device available:
#
# ::
# .. code-block:: python
#
# x = torch.rand(2, 2)
# y = torch.rand(2, 2, device='gpu')
@@ -820,9 +820,9 @@
# Another place you might use ``unsqueeze()`` is to ease broadcasting.
# Recall the example above where we had the following code:
#
# ::
# .. code-block:: python
#
# a = torch.ones(4, 3, 2)
# a = torch.ones(4, 3, 2)
#
# c = a * torch.rand( 3, 1) # 3rd dim = 1, 2nd dim identical to a
# print(c)
16 changes: 8 additions & 8 deletions beginner_source/nn_tutorial.py
@@ -328,7 +328,7 @@ def forward(self, xb):
# Previously for our training loop we had to update the values for each parameter
# by name, and manually zero out the grads for each parameter separately, like this:
#
# ::
# .. code-block:: python
#
# with torch.no_grad():
# weights -= weights.grad * lr
@@ -342,7 +342,7 @@ def forward(self, xb):
# and less prone to the error of forgetting some of our parameters, particularly
# if we had a more complicated model:
#
# ::
# .. code-block:: python
#
# with torch.no_grad():
# for p in model.parameters(): p -= p.grad * lr
@@ -418,15 +418,15 @@ def forward(self, xb):
#
# This will let us replace our previous manually coded optimization step:
#
# ::
# .. code-block:: python
#
# with torch.no_grad():
# for p in model.parameters(): p -= p.grad * lr
# model.zero_grad()
#
# and instead use just:
#
# ::
# .. code-block:: python
#
# opt.step()
# opt.zero_grad()
@@ -490,15 +490,15 @@ def get_model():
###############################################################################
# Previously, we had to iterate through minibatches of ``x`` and ``y`` values separately:
#
# ::
# .. code-block:: python
#
# xb = x_train[start_i:end_i]
# yb = y_train[start_i:end_i]
#
#
# Now, we can do these two steps together:
#
# ::
# .. code-block:: python
#
# xb,yb = train_ds[i*bs : i*bs+bs]
#
@@ -534,15 +534,15 @@ def get_model():
###############################################################################
# Previously, our loop iterated over batches ``(xb, yb)`` like this:
#
# ::
# .. code-block:: python
#
# for i in range((n-1)//bs + 1):
# xb,yb = train_ds[i*bs : i*bs+bs]
# pred = model(xb)
#
# Now, our loop is much cleaner, as ``(xb, yb)`` are loaded automatically from the data loader:
#
# ::
# .. code-block:: python
#
# for xb,yb in train_dl:
# pred = model(xb)
4 changes: 2 additions & 2 deletions beginner_source/profiler.py
@@ -82,7 +82,7 @@ def forward(self, input, mask):
# ``profiler.profile`` context manager. The ``with_stack=True`` parameter appends the
# file and line number of the operation in the trace.
#
# .. WARNING::
# .. warning::
# ``with_stack=True`` incurs an additional overhead, and is better suited for investigating code.
# Remember to remove it if you are benchmarking performance.
#
@@ -115,7 +115,7 @@ def forward(self, input, mask):
# `docs <https://pytorch.org/docs/stable/autograd.html#profiler>`__ for
# valid sorting keys).
#
# .. Note::
# .. note::
# When running profiler in a notebook, you might see entries like ``<ipython-input-18-193a910735e8>(13): forward``
# instead of filenames in the stacktrace. These correspond to ``<notebook-cell>(line number): calling-function``.

6 changes: 3 additions & 3 deletions beginner_source/saving_loading_models.py
@@ -115,7 +115,7 @@
#
# **Output:**
#
# ::
# .. code-block:: sh
#
# Model's state_dict:
# conv1.weight torch.Size([6, 3, 5, 5])
@@ -175,15 +175,15 @@
# normalization layers to evaluation mode before running inference.
# Failing to do this will yield inconsistent inference results.
#
# .. Note ::
# .. note::
#
# Notice that the ``load_state_dict()`` function takes a dictionary
# object, NOT a path to a saved object. This means that you must
# deserialize the saved *state_dict* before you pass it to the
# ``load_state_dict()`` function. For example, you CANNOT load using
# ``model.load_state_dict(PATH)``.
#
# .. Note ::
# .. note::
#
# If you only plan to keep the best performing model (according to the
# acquired validation loss), don't forget that ``best_model_state = model.state_dict()``
8 changes: 4 additions & 4 deletions beginner_source/text_sentiment_ngrams_tutorial.py
@@ -37,7 +37,7 @@
train_iter = iter(AG_NEWS(split="train"))

######################################################################
# ::
# .. code-block:: sh
#
# next(train_iter)
# >>> (3, "Fears for T N pension after talks Unions representing workers at Turner
@@ -88,7 +88,7 @@ def yield_tokens(data_iter):
######################################################################
# The vocabulary block converts a list of tokens into integers.
#
# ::
# .. code-block:: sh
#
# vocab(['here', 'is', 'an', 'example'])
# >>> [475, 21, 30, 5297]
@@ -102,7 +102,7 @@ def yield_tokens(data_iter):
######################################################################
# The text pipeline converts a text string into a list of integers based on the lookup table defined in the vocabulary. The label pipeline converts the label into integers. For example,
#
# ::
# .. code-block:: sh
#
# text_pipeline('here is the an example')
# >>> [475, 21, 2, 30, 5297]
@@ -188,7 +188,7 @@ def forward(self, text, offsets):
#
# The ``AG_NEWS`` dataset has four labels and therefore the number of classes is four.
#
# ::
# .. code-block:: sh
#
# 1 : World
# 2 : Sports
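For anyone applying the same conversion elsewhere, here is a small, hypothetical helper (not part of this PR) that scans sphinx-gallery style tutorial sources for remaining bare "::" literal blocks in their comments. The directory layout and the strict "# ::" pattern are assumptions; adjust both for the repository being checked.

import pathlib
import re

# A comment line consisting only of "# ::" marks a bare literal block in
# sphinx-gallery style tutorial files.
BARE_BLOCK = re.compile(r"^\s*#\s*::\s*$")

def find_bare_blocks(root="."):
    """Yield (path, line_number) for every bare literal block found under root."""
    for path in sorted(pathlib.Path(root).rglob("*.py")):
        lines = path.read_text(encoding="utf-8").splitlines()
        for lineno, line in enumerate(lines, start=1):
            if BARE_BLOCK.match(line):
                yield path, lineno

if __name__ == "__main__":
    for path, lineno in find_bare_blocks():
        print(f"{path}:{lineno}: bare '::' literal block")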