From 1186954d1d9ddc61c8466009a9587429e15c9e63 Mon Sep 17 00:00:00 2001
From: sekyondaMeta <127536312+sekyondaMeta@users.noreply.github.com>
Date: Tue, 12 Nov 2024 14:16:12 -0500
Subject: [PATCH] Update process_group_cpp_extension_tutorial.rst

Removing the mention of FullyShardedDataParallel as it is a dead link
---
 intermediate_source/process_group_cpp_extension_tutorial.rst | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/intermediate_source/process_group_cpp_extension_tutorial.rst b/intermediate_source/process_group_cpp_extension_tutorial.rst
index 47379bf8818..3c72a9e319b 100644
--- a/intermediate_source/process_group_cpp_extension_tutorial.rst
+++ b/intermediate_source/process_group_cpp_extension_tutorial.rst
@@ -25,9 +25,8 @@ Basics
 
 PyTorch collective communications power several widely adopted distributed
 training features, including
-`DistributedDataParallel `__,
-`ZeroRedundancyOptimizer `__,
-`FullyShardedDataParallel `__.
+`DistributedDataParallel `__ and
+`ZeroRedundancyOptimizer `__.
 In order to make the same collective communication API work with different
 communication backends, the distributed package abstracts collective
 communication operations into a