index.rst: 28 additions, 28 deletions
@@ -9,13 +9,13 @@ Welcome to PyTorch Tutorials
.. Add callout items below this line
.. customcalloutitem::
-   :description: The 60 min blitz is the most common starting point and provides a broad view on how to use PyTorch. It covers the basics all to the way constructing deep neural networks.
+   :description: The 60 min blitz is the most common starting point and provides a broad view of how to use PyTorch. It covers everything from the basics all the way to constructing deep neural networks.
:header: NLP from Scratch: Classifying Names with a Character-level RNN
-   :card_description: Build and train a basic character-level RNN to classify word from scratch without the use of torchtext. First in a series of three tutorials.
+   :card_description: Build and train a basic character-level RNN to classify words from scratch without the use of torchtext. First in a series of three tutorials.
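The card above describes classifying names with a character-level RNN. A minimal sketch of that idea, assuming PyTorch is installed; the class name, sizes, and toy batch here are illustrative, not the tutorial's exact model:

```python
# Hedged sketch: a tiny character-level RNN classifier. The last hidden
# state of the sequence is fed to a linear layer to produce class logits.
import torch
import torch.nn as nn

class CharRNN(nn.Module):
    def __init__(self, n_chars, hidden_size, n_classes):
        super().__init__()
        self.embed = nn.Embedding(n_chars, hidden_size)
        self.rnn = nn.RNN(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, n_classes)

    def forward(self, x):
        # x: (batch, seq_len) tensor of character indices
        h, _ = self.rnn(self.embed(x))
        return self.out(h[:, -1])  # classify from the final time step

model = CharRNN(n_chars=26, hidden_size=32, n_classes=3)
batch = torch.randint(0, 26, (4, 10))  # 4 toy "names", 10 characters each
logits = model(batch)
print(tuple(logits.shape))  # (4, 3): one score per class per name
```

In the actual tutorial the class is trained per character with a loss on the language label; this sketch only shows the forward shape contract.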
:header: NLP from Scratch: Generating Names with a Character-level RNN
-   :card_description: After using character-level RNN to classify names, leanr how to generate names from languages. Second in a series of three tutorials.
+   :card_description: After using a character-level RNN to classify names, learn how to generate names from languages. Second in a series of three tutorials.
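The generation card flips the task around: sample one character at a time from the RNN until an end marker appears. A hedged sketch of the sampling loop, assuming PyTorch; the untrained toy modules and the end-of-name convention (index 0) are illustrative only:

```python
# Hedged sketch of character-by-character name generation with an RNN cell.
import torch
import torch.nn as nn

n_chars = 27  # 26 letters + an end-of-name marker at index 0 (assumption)
embed = nn.Embedding(n_chars, 16)
rnn = nn.GRUCell(16, 16)
head = nn.Linear(16, n_chars)

def generate(max_len=10):
    h = torch.zeros(1, 16)
    idx = torch.zeros(1, dtype=torch.long)  # start from the marker token
    chars = []
    for _ in range(max_len):
        h = rnn(embed(idx), h)
        probs = torch.softmax(head(h), dim=-1)
        idx = torch.multinomial(probs, 1).squeeze(1)  # sample next character
        if idx.item() == 0:  # end-of-name marker sampled: stop
            break
        chars.append(chr(ord("a") + idx.item() - 1))
    return "".join(chars)

name = generate()
print(name)  # gibberish until the model is trained, but the loop is the point
```

The tutorial conditions generation on a language category as well; that extra input is omitted here to keep the loop minimal.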
+   :header: NLP from Scratch: Translation with a Sequence-to-sequence Network and Attention
:card_description: This is the third and final tutorial on doing “NLP From Scratch”, where we write our own classes and functions to preprocess the data to do our NLP modeling tasks.
-   :card_description: Use torchtext to reprocess data from a well-known datasets containing both English and German. Then use it to train a sequence-to-sequence model.
+   :card_description: Use torchtext to preprocess data from a well-known dataset containing both English and German. Then use it to train a sequence-to-sequence model.
-   :card_description: Create a neural network layer with no parameters using numpy. Then use scipy to create a neural network layer that has learnable weights.
+   :card_description: Create a neural network layer with no parameters using NumPy. Then use SciPy to create a neural network layer that has learnable weights.
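The parameter-free-layer card can be illustrated without the full tutorial machinery. A hedged, NumPy-only sketch of the forward pass of such a layer; the real tutorial wraps this in `torch.autograd.Function` to make it differentiable, which is omitted here:

```python
# Hedged sketch: a "layer" with nothing to learn, implemented in NumPy.
# A fixed transform (here, real-FFT magnitudes) has no trainable weights.
import numpy as np

def parameter_free_layer(x):
    # rfft of a length-8 signal yields 8 // 2 + 1 = 5 frequency bins
    return np.abs(np.fft.rfft(x, axis=-1))

x = np.random.randn(4, 8)   # batch of 4 vectors of length 8
y = parameter_free_layer(x)
print(y.shape)              # (4, 5)
```

The SciPy half of the card adds learnable weights by storing them as tensors and letting autograd flow through the custom function's backward.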
:header: Extending TorchScript with Custom C++ Classes
:card_description: This is a continuation of the custom operator tutorial, and introduces the API we’ve built for binding C++ classes into TorchScript and Python simultaneously.
:card_description: The autograd package helps build flexible and dynamic neural networks. In this tutorial, explore several examples of doing autograd in the PyTorch C++ frontend.
@@ -262,28 +262,28 @@ Welcome to PyTorch Tutorials
.. customcarditem::
:header: (experimental) Dynamic Quantization on an LSTM Word Language Model
-   :card_description: Apply dynamic quantization, the easiest form of quantization, to a LSTM-based next word prediction model.
+   :card_description: Apply dynamic quantization, the easiest form of quantization, to an LSTM-based next-word prediction model.
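Dynamic quantization, as the card says, is essentially a one-call transformation. A hedged sketch assuming PyTorch; the toy `WordLM` module and its sizes are illustrative, not the tutorial's exact word language model:

```python
# Hedged sketch: dynamically quantize the LSTM and Linear weights of a
# small language model to int8, leaving the rest of the model in float.
import torch
import torch.nn as nn

class WordLM(nn.Module):
    def __init__(self, vocab=100, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, vocab)

    def forward(self, x):
        out, _ = self.lstm(self.embed(x))
        return self.decoder(out)

model = WordLM()
# Only the listed module types are quantized; activations stay dynamic.
qmodel = torch.quantization.quantize_dynamic(
    model, {nn.LSTM, nn.Linear}, dtype=torch.qint8
)
tokens = torch.randint(0, 100, (2, 6))
out = qmodel(tokens)
print(tuple(out.shape))  # (2, 6, 100): same contract as the float model
```

The payoff in the tutorial is a smaller model file and faster CPU inference with near-identical perplexity; this sketch only demonstrates the API shape.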
@@ -292,14 +292,14 @@ Welcome to PyTorch Tutorials
.. customcarditem::
:header: Single-Machine Model Parallel Best Practices
-   :card_description: Learn how to implement model parallel, a distributed training technique which splits a single model onto different GPUs, rather than replicating the entire model on each GPU
+   :card_description: Learn how to implement model parallel, a distributed training technique that splits a single model across different GPUs rather than replicating the entire model on each GPU.
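The core move in model parallel is placing different submodules on different devices and shuttling activations between them in `forward`. A hedged sketch assuming two GPUs, with a CPU fallback so it stays runnable anywhere; the module and sizes are illustrative:

```python
# Hedged sketch of single-machine model parallel: half the model lives on
# each device, and forward() moves the activations across the boundary.
import torch
import torch.nn as nn

# Assumption: two CUDA devices if available, otherwise fall back to CPU
two_gpus = torch.cuda.device_count() >= 2
dev0 = torch.device("cuda:0" if two_gpus else "cpu")
dev1 = torch.device("cuda:1" if two_gpus else "cpu")

class TwoDeviceNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.part1 = nn.Linear(16, 32).to(dev0)  # first half on device 0
        self.part2 = nn.Linear(32, 4).to(dev1)   # second half on device 1

    def forward(self, x):
        x = torch.relu(self.part1(x.to(dev0)))
        return self.part2(x.to(dev1))  # hop activations to device 1

net = TwoDeviceNet()
out = net(torch.randn(8, 16))
print(tuple(out.shape))  # (8, 4)
```

The tutorial goes further by pipelining input splits so both GPUs work concurrently; without that, the two devices simply take turns.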