From 036926b915d05c3c674934822478e63e4f1ad46a Mon Sep 17 00:00:00 2001
From: Antonin Stefanutti
Date: Tue, 8 Oct 2024 11:39:07 +0200
Subject: [PATCH] DTensor has moved to the public namespace

---
 beginner_source/dist_overview.rst              | 2 +-
 recipes_source/distributed_comm_debug_mode.rst | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/beginner_source/dist_overview.rst b/beginner_source/dist_overview.rst
index 584a5aa273a..2c74bb51a04 100644
--- a/beginner_source/dist_overview.rst
+++ b/beginner_source/dist_overview.rst
@@ -35,7 +35,7 @@ Sharding primitives
 ``DTensor`` and ``DeviceMesh`` are primitives used to build parallelism in terms
 of sharded or replicated tensors on N-dimensional process groups.
 
-- `DTensor `__ represents a tensor that is sharded and/or replicated, and communicates automatically to reshard tensors as needed by operations.
+- `DTensor `__ represents a tensor that is sharded and/or replicated, and communicates automatically to reshard tensors as needed by operations.
 - `DeviceMesh `__ abstracts the accelerator device communicators into a multi-dimensional array, which manages the underlying ``ProcessGroup`` instances for collective communications in multi-dimensional parallelisms. Try out our `Device Mesh Recipe `__ to learn more.
 
 Communications APIs
diff --git a/recipes_source/distributed_comm_debug_mode.rst b/recipes_source/distributed_comm_debug_mode.rst
index db79cdc8992..dc1a6e3e565 100644
--- a/recipes_source/distributed_comm_debug_mode.rst
+++ b/recipes_source/distributed_comm_debug_mode.rst
@@ -21,7 +21,7 @@
 of parallel strategies to scale up distributed training. However, the lack of
 interoperability between existing solutions poses a significant challenge,
 primarily due to the absence of a unified abstraction that can bridge these
 different parallelism strategies.
 
 To address this issue, PyTorch has proposed `DistributedTensor(DTensor)
-`_
+`_
 which abstracts away the complexities of tensor communication in distributed
 training, providing a seamless user experience. However, when dealing with
 existing parallelism solutions and developing parallelism solutions using
 the unified abstraction like DTensor, the lack of transparency
@@ -194,7 +194,7 @@ Below is the interactive module tree visualization that you can use to upload yo
-
+
@@ -207,4 +207,4 @@ JSON outputs in the embedded visual browser.
 
 For more detailed information about ``CommDebugMode``, see
 `comm_mode_features_example.py
-`_
+`_