Pyspelling: Python intermediate tutorials A-M #2287

Merged
merged 4 commits into from Apr 19, 2023
15 changes: 15 additions & 0 deletions .pyspelling.yml
@@ -3,6 +3,21 @@ matrix:
- name: python
sources:
- beginner_source/*.py
- intermediate_source/autograd_saved_tensors_hooks_tutorial.py
- intermediate_source/ax_multiobjective_nas_tutorial.py
- intermediate_source/char_rnn_classification_tutorial.py
- intermediate_source/char_rnn_generation_tutorial.py
- intermediate_source/custom_function_conv_bn_tutorial.py
- intermediate_source/ensembling.py
#- intermediate_source/flask_rest_api_tutorial.py
- intermediate_source/forward_ad_usage.py
- intermediate_source/fx_conv_bn_fuser.py
- intermediate_source/fx_profiling_tutorial.py
- intermediate_source/jacobians_hessians.py
- intermediate_source/mario_rl_tutorial.py
- intermediate_source/mnist_train_nas.py
- intermediate_source/memory_format_tutorial.py
- intermediate_source/model_parallel_tutorial.py
dictionary:
wordlists:
- en-wordlist.txt
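For reviewers who want to reproduce the check locally, here is a minimal sketch. It assumes pyspelling and its aspell backend are installed and that the command runs from the repository root; the `-c`/`-n` flags are pyspelling's standard config and matrix-name options, not a workflow documented by this repo.

import subprocess

# Run only the "python" entry of the matrix defined in .pyspelling.yml,
# which now includes the intermediate_source tutorials listed above.
subprocess.run(
    ["pyspelling", "-c", ".pyspelling.yml", "-n", "python"],
    check=True,  # raise if any of the listed tutorials contain unknown words
)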
80 changes: 80 additions & 0 deletions en-wordlist.txt
@@ -2,6 +2,7 @@ APIs
Args
Autograd
BCE
BN
BOS
Bahdanau
BatchNorm
@@ -10,76 +11,112 @@ CIFAR
CLS
CNNDM
CNNs
CPUs
CUDA
Chatbots
Colab
Conv
ConvNet
ConvNets
DCGAN
DCGANs
DDQN
DNN
DataLoaders
DeiT
DenseNet
EOS
FC
FGSM
FLAVA
FX
FX's
FloydHub
FloydHub's
GAN
GANs
GPUs
GRU
GRUs
GeForce
Goodfellow
Goodfellow’s
GreedySearchDecoder
HVP
Hugging Face
IMDB
ImageNet
Initializations
Iteratively
JSON
JVP
Jacobian
Kiuk
Kubernetes
Kuei
LSTM
LSTMs
LeNet
LeakyReLU
LeakyReLUs
Lua
Luong
MLP
MLPs
MNIST
Mypy
NAS
NCHW
NES
NLP
NaN
NeurIPS
NumPy
Numericalization
Numpy's
OpenAI
Plotly
Prec
Profiler
PyTorch's
RGB
RL
RNN
RNNs
RTX
Radford
ReLU
ResNet
SST2
Sequentials
Sigmoid
SoTA
TPU
TensorBoard
TextVQA
Tokenization
TorchMultimodal
TorchScript
TorchX
Tunable
Unescape
VQA
Wikitext
Xeon
accuracies
activations
adversarially
al
autodiff
autograd
backend
backends
backprop
backpropagate
backpropagated
backpropagates
backpropagation
batchnorm
batchnorm's
benchmarking
boolean
@@ -89,12 +126,15 @@ chatbot's
checkpointing
composable
concat
config
contrastive
conv
convolutional
cpu
csv
cuDNN
datafile
dataframe
dataloader
dataloaders
datapipes
@@ -105,26 +145,43 @@ deserialize
deserialized
dir
downsample
downsamples
embeddings
encodings
ensembling
eq
et
evaluateInput
extensibility
fastai
fbgemm
feedforward
finetune
finetuning
fp
functorch
fuser
grayscale
hardcode
helpdesk
helpdesks
hessian
hessians
hvp
hyperparameter
hyperparameters
imagenet
initializations
inlined
interpretable
io
iterable
iteratively
jacobian
jacobians
jit
jpg
kwargs
labelled
learnable
loadFilename
@@ -139,6 +196,7 @@ modularity
modularized
multimodal
multimodality
multiobjective
multithreaded
namespace
natively
@@ -153,56 +211,78 @@ overfitting
parallelizable
parallelization
perceptibility
pipelining
pointwise
precomputing
prepend
preprocess
preprocessing
prespecified
pretrained
prewritten
primals
profiler
profilers
pytorch
quantized
quantizing
queryable
randint
readably
reinitializes
relu
reproducibility
rescale
resnet
restride
rewinded
romanized
runnable
runtime
runtime
runtimes
scalable
softmax
src
stacktrace
stateful
storages
strided
subclasses
subclassing
subdirectories
submodule
subreddit
summarization
tanh
th
thresholding
timestep
timesteps
tokenization
tokenize
tokenizer
torchaudio
torchdata
torchscriptable
torchtext
torchtext's
torchvision
torchviz
traceback
tradeoff
tradeoffs
uncomment
uncommented
unfused
unimodal
unnormalized
unpickling
utils
vectorization
vectorize
vectorized
vhp
voc
walkthrough
warmstart
28 changes: 15 additions & 13 deletions intermediate_source/autograd_saved_tensors_hooks_tutorial.py
@@ -1,6 +1,6 @@
"""
Hooks for autograd saved tensors
=======================
================================

"""

@@ -13,8 +13,7 @@
# packing/unpacking process.
#
# This tutorial assumes you are familiar with how backpropagation works in
# theory. If not, read this first:
# https://colab.research.google.com/drive/1aWNdmYt7RcHMbUk-Xz2Cv5-cGFSWPXe0#scrollTo=AHcEJ6nXUb7W
# theory. If not, read `this <https://colab.research.google.com/drive/1aWNdmYt7RcHMbUk-Xz2Cv5-cGFSWPXe0#scrollTo=AHcEJ6nXUb7W>`_ first.
#


@@ -107,7 +106,7 @@ def f(x):

######################################################################
# In the example above, executing without grad would only have kept ``x``
# and ``y`` in the scope, But the graph additionnally stores ``f(x)`` and
# and ``y`` in the scope, But the graph additionally stores ``f(x)`` and
# ``f(f(x))``. Hence, running a forward pass during training will be more
# costly in memory usage than during evaluation (more precisely, when
# autograd is not required).
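A minimal sketch of that point, assuming an `f` of the same shape as the (partly collapsed) example above and the `_saved_self` attribute that recent PyTorch releases expose on autograd nodes:

import torch

def f(x):
    return x * x

x = torch.randn(5, requires_grad=True)

with torch.no_grad():
    y = f(f(x))
print(y.grad_fn)  # None: no graph is built, so no intermediate is kept around

y = f(f(x))
print(y.grad_fn._saved_self)  # the intermediate f(x), kept alive by the graph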
@@ -182,7 +181,7 @@ def unpack_hook(x):


######################################################################
# The ``pack_hook`` function will be called everytime an operation saves
# The ``pack_hook`` function will be called every time an operation saves
# a tensor for backward.
# The output of ``pack_hook`` is then stored in the computation graph
# instead of the original tensor.
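As a rough sketch of that sequencing, using the `torch.autograd.graph.saved_tensors_hooks` context manager this tutorial is built around, with print statements standing in for a real packing scheme:

import torch

def pack_hook(t):
    print("packing a tensor of shape", t.shape)  # runs during the forward pass
    return t                                     # this value is what the graph stores

def unpack_hook(packed):
    print("unpacking")                           # runs when backward needs the tensor
    return packed                                # must reproduce the original tensor

x = torch.randn(3, requires_grad=True)
with torch.autograd.graph.saved_tensors_hooks(pack_hook, unpack_hook):
    y = x * x          # multiplication saves its inputs, so pack_hook fires here
y.sum().backward()     # unpack_hook fires here, when the saved tensor is used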
@@ -218,8 +217,9 @@ def unpack_hook(x):
#

######################################################################
# **Returning and int**

# Returning an ``int``
# ^^^^^^^^^^^^^^^^^^^^
#
# Returning the index of a Python list
# Relatively harmless but with debatable usefulness

@@ -240,8 +240,9 @@ def unpack(x):
assert(x.grad.equal(2 * x))
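Most of this example's code is collapsed in the diff above; purely as an illustration of the pattern the heading describes (hypothetical code, not necessarily the tutorial's), it could look like:

import torch

storage = []

def pack(x):
    storage.append(x)
    return len(storage) - 1   # only a plain int is stored in the graph

def unpack(idx):
    return storage[idx]       # look the tensor back up at backward time

x = torch.randn(5, requires_grad=True)
with torch.autograd.graph.saved_tensors_hooks(pack, unpack):
    y = x * x
y.sum().backward()
assert torch.equal(x.grad, 2 * x)   # gradients are unaffected by the indirection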

######################################################################
# **Returning a tuple**

# Returning a tuple
# ^^^^^^^^^^^^^^^^^
#
# Returning some tensor and a function how to unpack it
# Quite unlikely to be useful in its current form

@@ -262,9 +263,10 @@ def unpack(packed):
assert(torch.allclose(x.grad, 2 * x))
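Here too the body of the example is collapsed; one hypothetical shape for "a tensor plus a function describing how to unpack it" (a sketch, not necessarily the tutorial's code) is a CPU copy paired with a callable that moves it back:

import torch

def pack(x):
    device = x.device
    return x.to("cpu"), lambda t: t.to(device)   # tensor + how to restore it

def unpack(packed):
    tensor, restore = packed
    return restore(tensor)

x = torch.randn(5, requires_grad=True)
with torch.autograd.graph.saved_tensors_hooks(pack, unpack):
    y = x * x
y.sum().backward()
assert torch.allclose(x.grad, 2 * x)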

######################################################################
# **Returning a str**

# Returning the __repr__ of the tensor
# Returning a ``str``
# ^^^^^^^^^^^^^^^^^^^
#
# Returning the ``__repr__ of`` the tensor
# Probably never do this

x = torch.randn(5, requires_grad=True)
@@ -337,7 +339,7 @@ def forward(self, x):


######################################################################
# In practice, on a A100 GPU, for a resnet-152 with batch size 256, this
# In practice, on a A100 GPU, for a ResNet-152 with batch size 256, this
# corresponds to a GPU memory usage reduction from 48GB to 5GB, at the
# cost of a 6x slowdown.
#
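A rough sketch of the kind of run that figure refers to, assuming a CUDA device and torchvision are available (the exact benchmark setup is not part of this diff), using the built-in `torch.autograd.graph.save_on_cpu` hooks:

import torch
import torchvision.models as models

model = models.resnet152().cuda()
inputs = torch.randn(256, 3, 224, 224, device="cuda")

# Activations saved for backward are moved to pinned CPU memory during the
# forward pass and copied back to the GPU only when backward needs them.
with torch.autograd.graph.save_on_cpu(pin_memory=True):
    loss = model(inputs).sum()
loss.backward()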