
Commit 718fc2f

Merge branch 'main' into resolve-2332
2 parents: 0eaf993 + 56a2faf

3 files changed: 20 additions, 17 deletions

beginner_source/former_torchies/parallelism_tutorial.py

Lines changed: 4 additions & 1 deletion
@@ -53,7 +53,10 @@ def forward(self, x):
 
 class MyDataParallel(nn.DataParallel):
     def __getattr__(self, name):
-        return getattr(self.module, name)
+        try:
+            return super().__getattr__(name)
+        except AttributeError:
+            return getattr(self.module, name)
 
 ########################################################################
 # **Primitives on which DataParallel is implemented upon:**
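For context, a minimal self-contained sketch of how the patched wrapper behaves (the Net class and its block_size attribute below are illustrative, not part of the tutorial): deferring to nn.Module's own __getattr__ first lets registered parameters, buffers, and submodules, including self.module itself, resolve normally, while the except branch still forwards everything else to the wrapped model.

import torch
import torch.nn as nn

class MyDataParallel(nn.DataParallel):
    def __getattr__(self, name):
        try:
            # nn.Module.__getattr__ resolves registered parameters, buffers
            # and submodules; this is also what finds ``self.module``.
            return super().__getattr__(name)
        except AttributeError:
            # Anything else is looked up on the wrapped model.
            return getattr(self.module, name)

class Net(nn.Module):  # hypothetical model, for illustration only
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)
        self.block_size = 4  # plain attribute, not a Parameter or Module

    def forward(self, x):
        return self.fc(x)

wrapped = MyDataParallel(Net())
print(wrapped.module)      # resolved by super().__getattr__
print(wrapped.block_size)  # resolved by the getattr fallback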

beginner_source/nn_tutorial.py

Lines changed: 1 addition & 2 deletions
@@ -795,8 +795,7 @@ def __len__(self):
         return len(self.dl)
 
     def __iter__(self):
-        batches = iter(self.dl)
-        for b in batches:
+        for b in self.dl:
             yield (self.func(*b))
 
 train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
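As a quick sanity check, here is a minimal sketch of the surrounding WrappedDataLoader class from the tutorial with the simplified __iter__; the preprocess function and the toy batch below are illustrative only.

import torch

class WrappedDataLoader:
    def __init__(self, dl, func):
        self.dl = dl
        self.func = func

    def __len__(self):
        return len(self.dl)

    def __iter__(self):
        # The for-loop calls iter() on self.dl implicitly, so the explicit
        # ``batches = iter(self.dl)`` line was redundant.
        for b in self.dl:
            yield (self.func(*b))

def preprocess(x, y):  # hypothetical transform
    return x.view(-1, 1, 28, 28), y

toy_batches = [(torch.randn(2, 784), torch.zeros(2, dtype=torch.long))]
for xb, yb in WrappedDataLoader(toy_batches, preprocess):
    print(xb.shape, yb.shape)  # torch.Size([2, 1, 28, 28]) torch.Size([2])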

intermediate_source/char_rnn_classification_tutorial.py

Lines changed: 15 additions & 14 deletions
@@ -4,11 +4,14 @@
 **************************************************************
 **Author**: `Sean Robertson <https://github.com/spro>`_
 
-We will be building and training a basic character-level RNN to classify
-words. This tutorial, along with the following two, show how to do
-preprocess data for NLP modeling "from scratch", in particular not using
-many of the convenience functions of `torchtext`, so you can see how
-preprocessing for NLP modeling works at a low level.
+We will be building and training a basic character-level Recurrent Neural
+Network (RNN) to classify words. This tutorial, along with two other
+Natural Language Processing (NLP) "from scratch" tutorials
+:doc:`/intermediate/char_rnn_generation_tutorial` and
+:doc:`/intermediate/seq2seq_translation_tutorial`, show how to
+preprocess data to model NLP. In particular these tutorials do not
+use many of the convenience functions of `torchtext`, so you can see how
+preprocessing to model NLP works at a low level.
 
 A character-level RNN reads words as a series of characters -
 outputting a prediction and "hidden state" at each step, feeding its
@@ -32,13 +35,15 @@
 (-2.68) Dutch
 
 
-**Recommended Reading:**
+Recommended Preparation
+=======================
 
-I assume you have at least installed PyTorch, know Python, and
-understand Tensors:
+Before starting this tutorial it is recommended that you have installed PyTorch,
+and have a basic understanding of Python programming language and Tensors:
 
 - https://pytorch.org/ For installation instructions
 - :doc:`/beginner/deep_learning_60min_blitz` to get started with PyTorch in general
+  and learn the basics of Tensors
 - :doc:`/beginner/pytorch_with_examples` for a wide and deep overview
 - :doc:`/beginner/former_torchies_tutorial` if you are former Lua Torch user
 
@@ -181,10 +186,6 @@ def lineToTensor(line):
 # is just 2 linear layers which operate on an input and hidden state, with
 # a ``LogSoftmax`` layer after the output.
 #
-# .. figure:: https://i.imgur.com/Z2xbySO.png
-#    :alt:
-#
-#
 
 import torch.nn as nn
 
@@ -195,13 +196,13 @@ def __init__(self, input_size, hidden_size, output_size):
         self.hidden_size = hidden_size
 
         self.i2h = nn.Linear(input_size + hidden_size, hidden_size)
-        self.i2o = nn.Linear(input_size + hidden_size, output_size)
+        self.h2o = nn.Linear(hidden_size, output_size)
         self.softmax = nn.LogSoftmax(dim=1)
 
     def forward(self, input, hidden):
         combined = torch.cat((input, hidden), 1)
         hidden = self.i2h(combined)
-        output = self.i2o(combined)
+        output = self.h2o(hidden)
         output = self.softmax(output)
         return output, hidden
 
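Putting the two changed lines in context, below is a minimal sketch of the tutorial's RNN module after this commit; the sizes used in the usage line (57 input letters, 128 hidden units, 18 categories) are illustrative. The output layer now reads from the freshly computed hidden state (h2o) rather than from the raw concatenated input (the old i2o).

import torch
import torch.nn as nn

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        # input + previous hidden state -> new hidden state
        self.i2h = nn.Linear(input_size + hidden_size, hidden_size)
        # new hidden state -> output scores (replaces i2o on the combined input)
        self.h2o = nn.Linear(hidden_size, output_size)
        self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, input, hidden):
        combined = torch.cat((input, hidden), 1)
        hidden = self.i2h(combined)
        output = self.h2o(hidden)
        output = self.softmax(output)
        return output, hidden

    def initHidden(self):
        return torch.zeros(1, self.hidden_size)

rnn = RNN(57, 128, 18)
output, hidden = rnn(torch.zeros(1, 57), rnn.initHidden())
print(output.shape, hidden.shape)  # torch.Size([1, 18]) torch.Size([1, 128])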
