
Commit e7191e8

forgot params isn't a real word

1 parent 3762fd2

1 file changed (+1, -1)

intermediate_source/optimizer_step_in_backward_tutorial.py

Lines changed: 1 addition & 1 deletion
@@ -113,7 +113,7 @@ def train(model, optimizer):
 # where is the peak memory?
 #
 # The peak memory usage is during the optimizer step! Note the memory then
-# consists of ~1.2GB of params, ~1.2GB of gradients, and ~2.4GB=2*1.2GB of
+# consists of ~1.2GB of parameters, ~1.2GB of gradients, and ~2.4GB=2*1.2GB of
 # the optimizer state as expected. The last ~1.2GB comes from Adam optimizer
 # requiring memory for intermediates, totaling to ~6GB of peak memory.
 # Technically, you can remove the need for the last 1.2GB for optimizer
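For context, the ~2.4GB = 2*1.2GB figure in the changed text follows from Adam keeping two state tensors per parameter (exp_avg and exp_avg_sq), each the same size as the parameter itself. Below is a minimal sketch that verifies this 2x bookkeeping; the torch.nn.Linear model and its sizes are illustrative assumptions, not the tutorial's actual model.

import torch

# Toy stand-in: the tutorial's real model holds ~1.2GB of parameters;
# a small Linear layer is enough to observe the same 2x ratio.
model = torch.nn.Linear(1024, 1024)
optimizer = torch.optim.Adam(model.parameters())

# One backward pass plus one step, so Adam lazily allocates its state.
model(torch.randn(8, 1024)).sum().backward()
optimizer.step()

param_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
state_bytes = sum(
    t.numel() * t.element_size()
    for per_param_state in optimizer.state.values()
    for t in per_param_state.values()
    if torch.is_tensor(t)
)

# Adam stores exp_avg and exp_avg_sq per parameter, so state_bytes is
# roughly 2x param_bytes (plus negligible scalar step counters).
print(f"params: {param_bytes / 1e6:.1f} MB, optimizer state: {state_bytes / 1e6:.1f} MB")

On the tutorial's model, this same 2x ratio is where the ~2.4GB of optimizer state on top of ~1.2GB of parameters comes from.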
