diff --git a/beginner_source/PyTorch Cheat.md b/beginner_source/PyTorch Cheat.md
index 94fda9b2a5d..3101bd6a2d3 100644
--- a/beginner_source/PyTorch Cheat.md
+++ b/beginner_source/PyTorch Cheat.md
@@ -133,14 +133,14 @@ See [nn](https://pytorch.org/docs/stable/nn.html)
 
 ### Loss Functions
 ```
-nn.X where for example X is ... # BCELoss, CrossEntropyLoss, L1Loss, MSELoss, NLLLoss, SoftMarginLoss, MultiLabelSoftMarginLoss, CosineEmbeddingLoss, KLDivLoss, MarginRankingLoss, HingeEmbeddingLoss or CosineEmbeddingLoss
+nn.X # where X is BCELoss, CrossEntropyLoss, L1Loss, MSELoss, NLLLoss, SoftMarginLoss, MultiLabelSoftMarginLoss, CosineEmbeddingLoss, KLDivLoss, MarginRankingLoss, HingeEmbeddingLoss or CosineEmbeddingLoss
 ```
 
 See [loss functions](https://pytorch.org/docs/stable/nn.html#loss-functions)
 
 ### Activation Functions
 ```
-nn.X where for example X is ... # ReLU, ReLU6, ELU, SELU, PReLU, LeakyReLU, Threshold, HardTanh, Sigmoid, Tanh, LogSigmoid, Softplus, SoftShrink, Softsign, TanhShrink, Softmin, Softmax, Softmax2d or LogSoftmax
+nn.X # where X is ReLU, ReLU6, ELU, SELU, PReLU, LeakyReLU, Threshold, HardTanh, Sigmoid, Tanh, LogSigmoid, Softplus, SoftShrink, Softsign, TanhShrink, Softmin, Softmax, Softmax2d or LogSoftmax
 ```
 
 See [activation functions](https://pytorch.org/docs/stable/nn.html#non-linear-activations-weighted-sum-nonlinearity)
@@ -149,7 +149,7 @@ See [activation functions](https://pytorch.org/docs/stable/nn.html#non-linear-ac
 ```
 opt = optim.x(model.parameters(), ...) # create optimizer
 opt.step() # update weights
-optim.X where for example X is ... # SGD, Adadelta, Adagrad, Adam, SparseAdam, Adamax, ASGD, LBFGS, RMSProp or Rprop
+optim.X # where X is SGD, Adadelta, Adagrad, Adam, SparseAdam, Adamax, ASGD, LBFGS, RMSProp or Rprop
 ```
 
 See [optimizers](https://pytorch.org/docs/stable/optim.html)
@@ -158,7 +158,7 @@ See [optimizers](https://pytorch.org/docs/stable/optim.html)
 ```
 scheduler = optim.X(optimizer,...) # create lr scheduler
 scheduler.step() # update lr at start of epoch
-optim.lr_scheduler.X where ... # LambdaLR, StepLR, MultiStepLR, ExponentialLR or ReduceLROnPLateau
+optim.lr_scheduler.X # where X is LambdaLR, StepLR, MultiStepLR, ExponentialLR or ReduceLROnPLateau
 ```
 
 See [learning rate scheduler](https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate)
@@ -179,7 +179,7 @@ See [datasets](https://pytorch.org/docs/stable/data.html?highlight=dataset#torch
 ```
 DataLoader(dataset, batch_size=1, ...) # loads data batches agnostic of structure of individual data points
 sampler.Sampler(dataset,...) # abstract class dealing with ways to sample from dataset
-sampler.XSampler where ... # Sequential, Random, Subset, WeightedRandom or Distributed
+sampler.XSampler # where X is Sequential, Random, Subset, WeightedRandom or Distributed
 ```
 
 See [dataloader](https://pytorch.org/docs/stable/data.html?highlight=dataloader#torch.utils.data.DataLoader)
diff --git a/beginner_source/ptcheat.rst b/beginner_source/ptcheat.rst
index ad1cacefc4f..fc30ec83935 100644
--- a/beginner_source/ptcheat.rst
+++ b/beginner_source/ptcheat.rst
@@ -189,7 +189,7 @@ Loss Functions
 
 .. code-block:: python
 
-    nn.X where for example X is ...   # BCELoss, CrossEntropyLoss,
+    nn.X                              # where X is BCELoss, CrossEntropyLoss,
                                       # L1Loss, MSELoss, NLLLoss, SoftMarginLoss,
                                       # MultiLabelSoftMarginLoss, CosineEmbeddingLoss,
                                       # KLDivLoss, MarginRankingLoss, HingeEmbeddingLoss
@@ -203,7 +203,7 @@ Activation Functions
 
 .. code-block:: python
 
-    nn.X where for example X is ...   # ReLU, ReLU6, ELU, SELU, PReLU, LeakyReLU,
+    nn.X                              # where X is ReLU, ReLU6, ELU, SELU, PReLU, LeakyReLU,
                                       # Threshold, HardTanh, Sigmoid, Tanh,
                                       # LogSigmoid, Softplus, SoftShrink,
                                       # Softsign, TanhShrink, Softmin, Softmax,
@@ -219,7 +219,7 @@ Optimizers
 
     opt = optim.x(model.parameters(), ...)  # create optimizer
     opt.step()                              # update weights
-    optim.X where for example X is ...      # SGD, Adadelta, Adagrad, Adam,
+    optim.X                                 # where X is SGD, Adadelta, Adagrad, Adam,
                                             # SparseAdam, Adamax, ASGD,
                                             # LBFGS, RMSProp or Rprop
 
@@ -232,8 +232,8 @@ Learning rate scheduling
 
     scheduler = optim.X(optimizer,...)      # create lr scheduler
     scheduler.step()                        # update lr at start of epoch
-    optim.lr_scheduler.X where ...          # LambdaLR, StepLR, MultiStepLR,
-                                            # ExponentialLR or ReduceLROnPLateau
+    optim.lr_scheduler.X                    # where X is LambdaLR, StepLR, MultiStepLR,
+                                            # ExponentialLR or ReduceLROnPLateau
 
 See `learning rate scheduler <https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate>`__
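For reference, a minimal usage sketch of the `nn.X`, `optim.X`, and `optim.lr_scheduler.X` shorthands the patch reformats. The toy `nn.Linear` model, tensor shapes, and hyperparameters below are illustrative assumptions, not part of the cheat sheet or of this patch.

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Toy model and batch (illustrative assumptions only)
model = nn.Linear(10, 2)
inputs = torch.randn(32, 10)
targets = torch.randint(0, 2, (32,))

loss_fn = nn.CrossEntropyLoss()                           # nn.X loss function
opt = optim.SGD(model.parameters(), lr=0.1)               # optim.X optimizer
scheduler = optim.lr_scheduler.StepLR(opt, step_size=10)  # optim.lr_scheduler.X scheduler

for epoch in range(3):
    opt.zero_grad()                         # clear gradients from the previous step
    loss = loss_fn(model(inputs), targets)  # forward pass and loss
    loss.backward()                         # compute gradients
    opt.step()                              # update weights
    scheduler.step()                        # update learning rate
```

Any loss, optimizer, or scheduler class from the lists in the diff can be substituted for `CrossEntropyLoss`, `SGD`, and `StepLR` in the same pattern.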