
Commit ec487a2

Merge branch 'master' into master
2 parents 7550e05 + dc5c41c commit ec487a2

File tree

11 files changed: +406, -9 lines changed

advanced_source/cpp_export.rst

Lines changed: 2 additions & 2 deletions
@@ -115,7 +115,7 @@ If you need to exclude some methods in your ``nn.Module``
 because they use Python features that TorchScript doesn't support yet,
 you could annotate those with ``@torch.jit.ignore``

-``my_module`` is an instance of
+``sm`` is an instance of
 ``ScriptModule`` that is ready for serialization.

 Step 2: Serializing Your Script Module to a File
@@ -132,7 +132,7 @@ on the module and pass it a filename::

     traced_script_module.save("traced_resnet_model.pt")

 This will produce a ``traced_resnet_model.pt`` file in your working directory.
-If you also would like to serialize ``my_module``, call ``my_module.save("my_module_model.pt")``
+If you also would like to serialize ``sm``, call ``sm.save("my_module_model.pt")``
 We have now officially left the realm of Python and are ready to cross over to the sphere
 of C++.
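
For context, a minimal sketch (not part of this commit; the module is illustrative) of how an ``sm`` like the one the corrected text refers to is produced and saved:

    import torch
    import torch.nn as nn

    class MyModule(nn.Module):
        def forward(self, x):
            return x.relu()

    # torch.jit.script compiles the module into a ScriptModule
    sm = torch.jit.script(MyModule())
    # serialize it for loading from C++ with torch::jit::load
    sm.save("my_module_model.pt")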

beginner_source/basics/optimization_tutorial.py

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@
 Optimizing Model Parameters
 ===========================

-Now that we have a model and data it's time to train, validate and test our model by optimizing it's parameters on
+Now that we have a model and data it's time to train, validate and test our model by optimizing its parameters on
 our data. Training a model is an iterative process; in each iteration (called an *epoch*) the model makes a guess about the output, calculates
 the error in its guess (*loss*), collects the derivatives of the error with respect to its parameters (as we saw in
 the `previous section <autograd_tutorial.html>`_), and **optimizes** these parameters using gradient descent. For a more
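
The corrected sentence compresses the whole training procedure into one line; as a self-contained illustration (toy linear model and random data, not taken from the tutorial), the epoch loop it describes looks roughly like this:

    import torch
    import torch.nn as nn

    model = nn.Linear(3, 1)
    loss_fn = nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    X, y = torch.randn(16, 3), torch.randn(16, 1)

    for epoch in range(5):          # one full pass = an *epoch*
        pred = model(X)             # the model makes a guess
        loss = loss_fn(pred, y)     # the error in its guess (*loss*)
        optimizer.zero_grad()
        loss.backward()             # derivatives w.r.t. the parameters
        optimizer.step()            # gradient-descent update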

beginner_source/blitz/neural_networks_tutorial.py

Lines changed: 1 addition & 1 deletion
@@ -58,7 +58,7 @@ def __init__(self):
     def forward(self, x):
         # Max pooling over a (2, 2) window
         x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
-        # If the size is a square you can only specify a single number
+        # If the size is a square, you can specify with a single number
         x = F.max_pool2d(F.relu(self.conv2(x)), 2)
         x = x.view(-1, self.num_flat_features(x))
         x = F.relu(self.fc1(x))
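
The reworded comment concerns the kernel-size argument of ``F.max_pool2d``; a small standalone check (not in the diff) of the behavior it describes:

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 1, 8, 8)
    a = F.max_pool2d(x, (2, 2))  # explicit (height, width) window
    b = F.max_pool2d(x, 2)       # square window given as a single number
    print(torch.equal(a, b))     # True: the two calls are equivalent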

beginner_source/nlp/README.txt

Lines changed: 2 additions & 2 deletions
@@ -14,9 +14,9 @@ Deep Learning for NLP with Pytorch
    https://pytorch.org/tutorials/beginner/nlp/word_embeddings_tutorial.html

 4. sequence_models_tutorial.py
-   Sequence Models and Long-Short Term Memory Networks
+   Sequence Models and Long Short-Term Memory Networks
    https://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html

 5. advanced_tutorial.py
    Advanced: Making Dynamic Decisions and the Bi-LSTM CRF
-   https://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html
+   https://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html

beginner_source/nlp/sequence_models_tutorial.py

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 # -*- coding: utf-8 -*-
 r"""
-Sequence Models and Long-Short Term Memory Networks
+Sequence Models and Long Short-Term Memory Networks
 ===================================================

 At this point, we have seen various feed-forward networks. That is,
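
For reference, a tiny sketch (illustrative only, not part of the change) of the network class the renamed tutorial covers:

    import torch
    import torch.nn as nn

    lstm = nn.LSTM(input_size=3, hidden_size=4)  # a one-layer LSTM
    seq = torch.randn(5, 1, 3)                   # (seq_len, batch, features)
    out, (h_n, c_n) = lstm(seq)
    print(out.shape)                             # torch.Size([5, 1, 4])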

beginner_source/nlp/word_embeddings_tutorial.py

Lines changed: 2 additions & 0 deletions
@@ -268,6 +268,8 @@ def forward(self, inputs):
     losses.append(total_loss)
 print(losses)  # The loss decreased every iteration over the training data!

+# To get the embedding of a particular word, e.g. "beauty"
+print(model.embeddings.weight[word_to_ix["beauty"]])

 ######################################################################
 # Exercise: Computing Word Embeddings: Continuous Bag-of-Words
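
The two added lines index the learned embedding matrix by word id; a self-contained version with a toy vocabulary (the names here are illustrative, not the tutorial's):

    import torch
    import torch.nn as nn

    word_to_ix = {"beauty": 0, "truth": 1}
    embeddings = nn.Embedding(num_embeddings=2, embedding_dim=5)
    # each row of the weight matrix is one word's embedding vector
    print(embeddings.weight[word_to_ix["beauty"]])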

index.rst

Lines changed: 8 additions & 0 deletions
@@ -324,6 +324,13 @@ Welcome to PyTorch Tutorials
    :link: beginner/hyperparameter_tuning_tutorial.html
    :tags: Model-Optimization,Best-Practice

+.. customcarditem::
+   :header: Parametrizations Tutorial
+   :card_description: Learn how to use torch.nn.utils.parametrize to put constraints on your parameters (e.g. make them orthogonal, symmetric positive definite, low-rank...)
+   :image: _static/img/thumbnails/cropped/parametrizations.png
+   :link: intermediate/parametrizations.html
+   :tags: Model-Optimization,Best-Practice
+
 .. customcarditem::
    :header: Pruning Tutorial
    :card_description: Learn how to use torch.nn.utils.prune to sparsify your neural networks, and how to extend it to implement your own custom pruning technique.
@@ -620,6 +627,7 @@ Additional Resources

    beginner/profiler
    beginner/hyperparameter_tuning_tutorial
+   intermediate/parametrizations
    intermediate/pruning_tutorial
    advanced/dynamic_quantization_tutorial
    intermediate/dynamic_quantization_bert_tutorial
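
The new card links to the parametrizations tutorial; a minimal sketch of the ``torch.nn.utils.parametrize`` API it describes (a symmetric-weight constraint, chosen here as an example rather than copied from the tutorial):

    import torch
    import torch.nn as nn
    import torch.nn.utils.parametrize as parametrize

    class Symmetric(nn.Module):
        def forward(self, X):
            # rebuild the weight from its upper triangle so it is symmetric
            return X.triu() + X.triu(1).transpose(-1, -2)

    layer = nn.Linear(4, 4)
    parametrize.register_parametrization(layer, "weight", Symmetric())
    print(torch.allclose(layer.weight, layer.weight.T))  # True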
