
Commit f1e682e

Fix typo "asynchronizely" -> "asynchronously" (#1154)
1 parent cba6b85 commit f1e682e

File tree

1 file changed, +1 -1 lines changed


intermediate_source/model_parallel_tutorial.py

Lines changed: 1 addition & 1 deletion
@@ -245,7 +245,7 @@ def plot(means, stds, labels, fig_name):
 # -----------------------------
 #
 # In the following experiments, we further divide each 120-image batch into
-# 20-image splits. As PyTorch launches CUDA operations asynchronizely, the
+# 20-image splits. As PyTorch launches CUDA operations asynchronously, the
 # implementation does not need to spawn multiple threads to achieve
 # concurrency.
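
The corrected comment makes a concrete implementation point: because CUDA kernel launches return to Python immediately, a single thread can keep two GPUs busy simply by interleaving launches for successive splits of the batch. Below is a minimal sketch of that pattern, in the spirit of the tutorial but not its exact code; the class name PipelinedTwoStage, the Linear layers, and the layer sizes are illustrative assumptions, and two CUDA devices are required.

import torch
import torch.nn as nn


class PipelinedTwoStage(nn.Module):
    """Sketch of single-threaded pipeline parallelism over two GPUs."""

    def __init__(self, split_size=20):
        super().__init__()
        self.split_size = split_size
        self.stage1 = nn.Linear(1024, 1024).to('cuda:0')  # first half on GPU 0
        self.stage2 = nn.Linear(1024, 10).to('cuda:1')    # second half on GPU 1

    def forward(self, x):
        splits = iter(x.split(self.split_size, dim=0))
        s_next = next(splits)
        # Launch the first split on cuda:0 and copy its output to cuda:1.
        s_prev = self.stage1(s_next).to('cuda:1')
        outputs = []

        for s_next in splits:
            # stage2(s_prev) is launched on cuda:1, then stage1(s_next) on
            # cuda:0. Neither call blocks, so the two GPUs work concurrently
            # without spawning extra Python threads.
            outputs.append(self.stage2(s_prev))
            s_prev = self.stage1(s_next).to('cuda:1')

        outputs.append(self.stage2(s_prev))  # drain the last in-flight split
        return torch.cat(outputs)


# Example use, mirroring the 120-image batch / 20-image splits from the diff
# (hypothetical input shape):
#   model = PipelinedTwoStage(split_size=20)
#   out = model(torch.randn(120, 1024, device='cuda:0'))

Ordering between the cross-device copy and the subsequent stage2 call is handled by the default CUDA streams, so no explicit events or host synchronization are needed in this sketch.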
