Commit 24c42d2

Fixed links in process_group_cpp_extension_tutorial.rst (#3157)
Removing the mention of FullyShardedDataParallel, as it is a dead link.

Co-authored-by: Svetlana Karslioglu <svekars@meta.com>

Parent: 3d5b75a

1 file changed: 2 additions, 3 deletions

intermediate_source/process_group_cpp_extension_tutorial.rst

Lines changed: 2 additions & 3 deletions
@@ -25,9 +25,8 @@ Basics
 
 PyTorch collective communications power several widely adopted distributed
 training features, including
-`DistributedDataParallel <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html>`__,
-`ZeroRedundancyOptimizer <https://pytorch.org/docs/stable/distributed.optim.html#torch.distributed.optim.ZeroRedundancyOptimizer>`__,
-`FullyShardedDataParallel <https://github.com/pytorch/pytorch/blob/master/torch/distributed/_fsdp/fully_sharded_data_parallel.py>`__.
+`DistributedDataParallel <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html>`__ and
+`ZeroRedundancyOptimizer <https://pytorch.org/docs/stable/distributed.optim.html#torch.distributed.optim.ZeroRedundancyOptimizer>`__.
 In order to make the same collective communication API work with
 different communication backends, the distributed package abstracts collective
 communication operations into a
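The tutorial text touched by this diff describes how PyTorch exposes one collective communication API that can run on different backends, which is what features like DistributedDataParallel and ZeroRedundancyOptimizer build on. The sketch below is not part of this commit or the tutorial; it only illustrates calling a collective (all_reduce) through torch.distributed, assuming two processes on a single machine with the gloo backend. The master address/port values and the run helper are illustrative choices, not anything prescribed by the tutorial.

# Minimal sketch (illustrative, not from this commit): one collective call
# through torch.distributed with the backend left configurable.
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def run(rank: int, world_size: int, backend: str = "gloo") -> None:
    # Rendezvous settings for the default env:// init method (illustrative values).
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group(backend, rank=rank, world_size=world_size)

    # Each rank contributes a tensor; all_reduce sums them in place on every rank.
    t = torch.ones(2) * (rank + 1)
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    print(f"rank {rank}: {t.tolist()}")  # [3.0, 3.0] with world_size=2

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = 2
    mp.spawn(run, args=(world_size,), nprocs=world_size, join=True)

Swapping backend="gloo" for "nccl" (with one GPU per rank) exercises the same call sites, which is the portability the abstraction is meant to provide.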
