
Commit e37d8dd

polplop authored and soumith committed
Changed placing of "where X is ..." to comment (#434)
1 parent d6aa1f0 commit e37d8dd

File tree

beginner_source/PyTorch Cheat.md
beginner_source/ptcheat.rst

2 files changed: +10 -10 lines changed


beginner_source/PyTorch Cheat.md

Lines changed: 5 additions & 5 deletions
@@ -133,14 +133,14 @@ See [nn](https://pytorch.org/docs/stable/nn.html)
 ### Loss Functions

 ```
-nn.X where for example X is ... # BCELoss, CrossEntropyLoss, L1Loss, MSELoss, NLLLoss, SoftMarginLoss, MultiLabelSoftMarginLoss, CosineEmbeddingLoss, KLDivLoss, MarginRankingLoss, HingeEmbeddingLoss or CosineEmbeddingLoss
+nn.X # where X is BCELoss, CrossEntropyLoss, L1Loss, MSELoss, NLLLoss, SoftMarginLoss, MultiLabelSoftMarginLoss, CosineEmbeddingLoss, KLDivLoss, MarginRankingLoss, HingeEmbeddingLoss or CosineEmbeddingLoss
 ```
 See [loss functions](https://pytorch.org/docs/stable/nn.html#loss-functions)

 ### Activation Functions

 ```
-nn.X where for example X is ... # ReLU, ReLU6, ELU, SELU, PReLU, LeakyReLU, Threshold, HardTanh, Sigmoid, Tanh, LogSigmoid, Softplus, SoftShrink, Softsign, TanhShrink, Softmin, Softmax, Softmax2d or LogSoftmax
+nn.X # where X is ReLU, ReLU6, ELU, SELU, PReLU, LeakyReLU, Threshold, HardTanh, Sigmoid, Tanh, LogSigmoid, Softplus, SoftShrink, Softsign, TanhShrink, Softmin, Softmax, Softmax2d or LogSoftmax
 ```
 See [activation functions](https://pytorch.org/docs/stable/nn.html#non-linear-activations-weighted-sum-nonlinearity)
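
Both changed lines in this hunk describe the same nn.X shorthand: substitute one of the listed class names for X, instantiate it, and call it on tensors. A minimal usage sketch of that pattern; the tensor shapes and the choice of CrossEntropyLoss and ReLU are illustrative, not part of the commit:

```python
import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss()        # X = CrossEntropyLoss
act = nn.ReLU()                        # X = ReLU

logits = torch.randn(4, 10)            # batch of 4 samples, 10 classes
targets = torch.tensor([1, 0, 3, 9])   # ground-truth class indices

loss = loss_fn(logits, targets)        # scalar loss tensor
hidden = act(torch.randn(4, 10))       # negatives clamped to zero
print(loss.item(), hidden.min().item())
```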

@@ -149,7 +149,7 @@ See [activation functions](https://pytorch.org/docs/stable/nn.html#non-linear-ac
 ```
 opt = optim.x(model.parameters(), ...) # create optimizer
 opt.step() # update weights
-optim.X where for example X is ... # SGD, Adadelta, Adagrad, Adam, SparseAdam, Adamax, ASGD, LBFGS, RMSProp or Rprop
+optim.X # where X is SGD, Adadelta, Adagrad, Adam, SparseAdam, Adamax, ASGD, LBFGS, RMSProp or Rprop
 ```
 See [optimizers](https://pytorch.org/docs/stable/optim.html)
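
The optim.X line follows the same substitution idea. A short sketch of the optimizer loop it abbreviates; the linear model, learning rate, and random data here are placeholders:

```python
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(3, 1)
opt = optim.SGD(model.parameters(), lr=0.1)   # X = SGD; Adam, Adagrad, ... are used the same way

x, y = torch.randn(8, 3), torch.randn(8, 1)
loss = nn.MSELoss()(model(x), y)
opt.zero_grad()                               # clear accumulated gradients
loss.backward()                               # compute new gradients
opt.step()                                    # update weights
```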

@@ -158,7 +158,7 @@ See [optimizers](https://pytorch.org/docs/stable/optim.html)
 ```
 scheduler = optim.X(optimizer,...) # create lr scheduler
 scheduler.step() # update lr at start of epoch
-optim.lr_scheduler.X where ... # LambdaLR, StepLR, MultiStepLR, ExponentialLR or ReduceLROnPLateau
+optim.lr_scheduler.X # where X is LambdaLR, StepLR, MultiStepLR, ExponentialLR or ReduceLROnPLateau
 ```
 See [learning rate scheduler](https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate)
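
For optim.lr_scheduler.X, a sketch of how a scheduler wraps an existing optimizer; StepLR and its hyperparameters are an arbitrary choice for illustration:

```python
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(3, 1)
opt = optim.SGD(model.parameters(), lr=0.1)
scheduler = optim.lr_scheduler.StepLR(opt, step_size=2, gamma=0.5)  # X = StepLR

for epoch in range(4):
    loss = nn.MSELoss()(model(torch.randn(8, 3)), torch.randn(8, 1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    scheduler.step()                          # adjust the learning rate once per epoch
    print(epoch, opt.param_groups[0]["lr"])   # lr halves every 2 epochs
```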

@@ -179,7 +179,7 @@ See [datasets](https://pytorch.org/docs/stable/data.html?highlight=dataset#torch
 ```
 DataLoader(dataset, batch_size=1, ...) # loads data batches agnostic of structure of individual data points
 sampler.Sampler(dataset,...) # abstract class dealing with ways to sample from dataset
-sampler.XSampler where ... # Sequential, Random, Subset, WeightedRandom or Distributed
+sampler.XSampler # where X is Sequential, Random, Subset, WeightedRandom or Distributed
 ```
 See [dataloader](https://pytorch.org/docs/stable/data.html?highlight=dataloader#torch.utils.data.DataLoader)
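
The sampler.XSampler shorthand pairs with DataLoader as in the surrounding context lines. A sketch using a stand-in TensorDataset; the data and batch size are made up:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.sampler import RandomSampler   # X = Random

dataset = TensorDataset(torch.randn(10, 3), torch.arange(10))
loader = DataLoader(dataset, batch_size=4, sampler=RandomSampler(dataset))

for features, labels in loader:          # batches of at most 4 samples, in random order
    print(features.shape, labels)
```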

beginner_source/ptcheat.rst

Lines changed: 5 additions & 5 deletions
@@ -189,7 +189,7 @@ Loss Functions

 .. code-block:: python

-nn.X where for example X is ... # BCELoss, CrossEntropyLoss,
+nn.X # where X is BCELoss, CrossEntropyLoss,
 # L1Loss, MSELoss, NLLLoss, SoftMarginLoss,
 # MultiLabelSoftMarginLoss, CosineEmbeddingLoss,
 # KLDivLoss, MarginRankingLoss, HingeEmbeddingLoss
@@ -203,7 +203,7 @@ Activation Functions

 .. code-block:: python

-nn.X where for example X is ... # ReLU, ReLU6, ELU, SELU, PReLU, LeakyReLU,
+nn.X # where X is ReLU, ReLU6, ELU, SELU, PReLU, LeakyReLU,
 # Threshold, HardTanh, Sigmoid, Tanh,
 # LogSigmoid, Softplus, SoftShrink,
 # Softsign, TanhShrink, Softmin, Softmax,
@@ -219,7 +219,7 @@ Optimizers

 opt = optim.x(model.parameters(), ...) # create optimizer
 opt.step() # update weights
-optim.X where for example X is ... # SGD, Adadelta, Adagrad, Adam,
+optim.X # where X is SGD, Adadelta, Adagrad, Adam,
 # SparseAdam, Adamax, ASGD,
 # LBFGS, RMSProp or Rprop

@@ -232,8 +232,8 @@ Learning rate scheduling

 scheduler = optim.X(optimizer,...) # create lr scheduler
 scheduler.step() # update lr at start of epoch
-optim.lr_scheduler.X where ... # LambdaLR, StepLR, MultiStepLR,
-# ExponentialLR or ReduceLROnPLateau
+optim.lr_scheduler.X # where X is LambdaLR, StepLR, MultiStepLR,
+# ExponentialLR or ReduceLROnPLateau

 See `learning rate
 scheduler <https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate>`__

0 commit comments
