Commit 7bc042b

Merge branch 'main' into transformer
2 parents acb4f2c + 24c42d2 commit 7bc042b

File tree

1 file changed: 2 additions, 3 deletions


intermediate_source/process_group_cpp_extension_tutorial.rst

Lines changed: 2 additions & 3 deletions
@@ -25,9 +25,8 @@ Basics
 
 PyTorch collective communications power several widely adopted distributed
 training features, including
-`DistributedDataParallel <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html>`__,
-`ZeroRedundancyOptimizer <https://pytorch.org/docs/stable/distributed.optim.html#torch.distributed.optim.ZeroRedundancyOptimizer>`__,
-`FullyShardedDataParallel <https://github.com/pytorch/pytorch/blob/master/torch/distributed/_fsdp/fully_sharded_data_parallel.py>`__.
+`DistributedDataParallel <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html>`__ and
+`ZeroRedundancyOptimizer <https://pytorch.org/docs/stable/distributed.optim.html#torch.distributed.optim.ZeroRedundancyOptimizer>`__.
 In order to make the same collective communication API work with
 different communication backends, the distributed package abstracts collective
 communication operations into a
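The paragraph being edited here makes the point that one collective-communication API works across backends. A minimal single-process sketch of that idea (assuming ``torch`` with the ``gloo`` backend is installed; the ``MASTER_ADDR``/``MASTER_PORT`` values are arbitrary placeholders for the rendezvous, not anything prescribed by the tutorial):

```python
import os
import torch
import torch.distributed as dist

# Rendezvous settings for a single-process group (values are arbitrary).
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

# The same init/collective calls work for "gloo", "nccl", or "mpi";
# only the backend string changes.
dist.init_process_group(backend="gloo", rank=0, world_size=1)

t = torch.ones(4)
# all_reduce sums the tensor across all ranks; with world_size=1
# the result equals the input.
dist.all_reduce(t, op=dist.ReduceOp.SUM)

dist.destroy_process_group()
```

Features like ``DistributedDataParallel`` and ``ZeroRedundancyOptimizer`` build on exactly these collective calls, which is why a custom backend (the subject of this tutorial) plugs in beneath them without API changes.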
