Commit d1b279f

Merge pull request #392 from 9bow/fix_typo_in_cheat_sheet

fix typo in cheat sheet (#381)

2 parents: 0fa8074 + 4703db1
1 file changed: beginner_source/PyTorch Cheat.md (4 additions, 16 deletions)
````diff
@@ -11,7 +11,7 @@ Date updated: 7/30/18
 
 ```
 import torch                                      # root package
-from torch.utils.data import Dataset, Dataloader  # dataset representation and loading
+from torch.utils.data import Dataset, DataLoader  # dataset representation and loading
 ```
 
 ### Neural Network API
````
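The fix matters because Python imports are case-sensitive: `from torch.utils.data import Dataloader` raises an `ImportError`, while `DataLoader` is the actual class name. A minimal runnable sketch of the corrected import in use (the toy dataset below is invented for illustration):

```python
import torch
from torch.utils.data import Dataset, DataLoader  # capital "L", as the commit fixes

class SquaresDataset(Dataset):
    """Toy dataset of (i, i**2) pairs, for illustration only."""
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        return torch.tensor([float(idx)]), torch.tensor([float(idx ** 2)])

loader = DataLoader(SquaresDataset(), batch_size=4, shuffle=True)
for x, y in loader:
    print(x.shape, y.shape)  # torch.Size([4, 1]) torch.Size([4, 1])
```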
````diff
@@ -39,30 +39,25 @@ See [hybrid frontend](https://pytorch.org/docs/stable/hybridfrontend)
 ```
 torch.onnx.export(model, dummy data, xxxx.proto)  # exports an ONNX formatted model using a trained model, dummy data and the desired file name
 model = onnx.load("alexnet.proto")                # load an ONNX model
-onnx.checker.check_model(model)                   # check that the model IR is well formed
+onnx.checker.check_model(model)                   # check that the model IR is well formed
 onnx.helper.printable_graph(model.graph)          # print a human readable representation of the graph
 ```
 See [onnx](https://pytorch.org/docs/stable/onnx.html)
 
-
-
 ### Vision
 
 ```
 from torchvision import datasets, models, transforms  # vision datasets, architectures & transforms
 import torchvision.transforms as transforms           # composable transforms
 ```
-
 See [torchvision](https://pytorch.org/docs/stable/torchvision/index.html)
 
 ### Distributed Training
 
 ```
 import torch.distributed as dist     # distributed communication
 from multiprocessing import Process  # memory sharing processes
-
 ```
-
 See [distributed](https://pytorch.org/docs/stable/distributed.html) and [multiprocessing](https://pytorch.org/docs/stable/multiprocessing.html)
 
 
````
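The `torch.onnx.export(model, dummy data, xxxx.proto)` line in this hunk is pseudocode. A runnable sketch with a placeholder model and file name (both invented here, not from the cheat sheet):

```python
import torch
import torch.nn as nn

model = nn.Linear(3, 2)          # stand-in for a trained model
dummy_input = torch.randn(1, 3)  # dummy data fixes the input shape for tracing
torch.onnx.export(model, dummy_input, "tiny.onnx")

# Inspecting the result needs the separate `onnx` package:
# import onnx
# m = onnx.load("tiny.onnx")
# onnx.checker.check_model(m)                  # check the model IR is well formed
# print(onnx.helper.printable_graph(m.graph))  # human readable graph
```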

````diff
@@ -79,11 +74,8 @@ x.clone()  # clone of x
 with torch.no_grad():  # code wrap that stops autograd from tracking tensor history
 requires_grad=True     # arg, when set to True, tracks computation history for future derivative calculations
 ```
-
 See [tensor](https://pytorch.org/docs/stable/tensors.html)
 
-
-
 ### Dimensionality
 
 ```
````
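To see the two autograd controls from this hunk side by side, a small self-contained sketch:

```python
import torch

x = torch.ones(2, 2, requires_grad=True)  # track operations on x
y = (x * 3).sum()
y.backward()
print(x.grad)           # tensor([[3., 3.], [3., 3.]])

with torch.no_grad():   # history is not recorded inside this block
    z = x * 3
print(z.requires_grad)  # False
```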
````diff
@@ -121,7 +113,6 @@ else: #
 
 net.to(device)       # recursively convert their parameters and buffers to device specific tensors
 mytensor.to(device)  # copy your tensors to a device (gpu, cpu)
-
 ```
 See [cuda](https://pytorch.org/docs/stable/cuda.html)
 
````
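A sketch of the device-placement pattern shown in this hunk; note that `.to(device)` on a tensor returns a copy, so the result must be rebound:

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

net = nn.Linear(4, 2).to(device)         # module: moves parameters and buffers in place
mytensor = torch.randn(1, 4).to(device)  # tensor: returns a copy on the target device
print(net(mytensor).device)
```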

````diff
@@ -136,11 +127,9 @@ nn.RNN/LSTM/GRU # recurrent layers
 nn.Dropout(p=0.5, inplace=False)             # dropout layer for any dimensional input
 nn.Dropout2d(p=0.5, inplace=False)           # 2-dimensional channel-wise dropout
 nn.Embedding(num_embeddings, embedding_dim)  # (tensor-wise) mapping from indices to embedding vectors
-
 ```
 See [nn](https://pytorch.org/docs/stable/nn.html)
 
-
 ### Loss Functions
 
 ```
````
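A short sketch of the `nn.Embedding` and `nn.Dropout` lines from this hunk in use; the sizes are arbitrary:

```python
import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=10, embedding_dim=3)  # 10 indices -> 3-dim vectors
drop = nn.Dropout(p=0.5)  # zeroes entries with prob 0.5 (active in training mode, the default)

idx = torch.tensor([1, 4, 4, 9])  # indices must be < num_embeddings
vectors = emb(idx)                # shape (4, 3)
print(drop(vectors).shape)        # dropout preserves the shape: torch.Size([4, 3])
```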
````diff
@@ -170,18 +159,18 @@ See [optimizers](https://pytorch.org/docs/stable/optim.html)
 scheduler = optim.X(optimizer,...)  # create lr scheduler
 scheduler.step()                    # update lr at start of epoch
 optim.lr_scheduler.X where ...      # LambdaLR, StepLR, MultiStepLR, ExponentialLR or ReduceLROnPLateau
-
 ```
 See [learning rate scheduler](https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate)
 
+
 # Data Utilities
 
 ### Datasets
 
 ```
 Dataset         # abstract class representing dataset
 TensorDataset   # labelled dataset in the form of tensors
-Concat Dataset  # concatenation of Datasets
+ConcatDataset   # concatenation of Datasets
 ```
 See [datasets](https://pytorch.org/docs/stable/data.html?highlight=dataset#torch.utils.data.Dataset)
 
````
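`ConcatDataset` is one word; the old `Concat Dataset` would not even parse as a Python name. A sketch of the fixed class, plus the scheduler pattern with `StepLR` picked arbitrarily as the `X`:

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import TensorDataset, ConcatDataset  # one word, as fixed above

a = TensorDataset(torch.arange(4.0))        # items 0..3
b = TensorDataset(torch.arange(4.0, 10.0))  # items 4..9
both = ConcatDataset([a, b])                # one dataset of length len(a) + len(b)
print(len(both), both[5])                   # 10 (tensor(5.),)

opt = optim.SGD(nn.Linear(2, 2).parameters(), lr=0.1)
scheduler = optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.5)  # halve lr every 10 steps
```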

````diff
@@ -191,7 +180,6 @@ See [datasets](https://pytorch.org/docs/stable/data.html?highlight=dataset#torch.utils.data.Dataset)
 DataLoader(dataset, batch_size=1, ...)  # loads data batches agnostic of structure of individual data points
 sampler.Sampler(dataset,...)            # abstract class dealing with ways to sample from dataset
 sampler.XSampler where ...              # Sequential, Random, Subset, WeightedRandom or Distributed
-
 ```
 See [dataloader](https://pytorch.org/docs/stable/data.html?highlight=dataloader#torch.utils.data.DataLoader)
 
````
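To make the `sampler.XSampler` shorthand concrete, a sketch with `WeightedRandomSampler` as the `X`; the weights are made up:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

ds = TensorDataset(torch.arange(6.0))
weights = [0.1, 0.1, 0.1, 0.1, 0.1, 0.5]  # invented weights favoring the last item
sampler = WeightedRandomSampler(weights, num_samples=6, replacement=True)
loader = DataLoader(ds, batch_size=2, sampler=sampler)
for (batch,) in loader:
    print(batch)  # three batches of 2 values, drawn according to the weights
```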
