
Commit 63a0105

[dtensor][debug] tutorial showing users how to use commdebugmode and giving access to visual browser
1 parent 2e3a21a

File tree

2 files changed: +33 -4 lines

distributed_comm_debug_mode.rst renamed to recipes_source/distributed_comm_debug_mode.rst

Lines changed: 25 additions & 4 deletions
@@ -12,7 +12,16 @@ Prerequisites:

 What is CommDebugMode and why is it useful
 ------------------------------------------
-As the size of models continues to increase, users are seeking to leverage various combinations of parallel strategies to scale up distributed training. However, the lack of interoperability between existing solutions poses a significant challenge, primarily due to the absence of a unified abstraction that can bridge these different parallelism strategies. To address this issue, PyTorch has proposed DistributedTensor (DTensor)which abstracts away the complexities of tensor communication in distributed training, providing a seamless user experience. However, this abstraction creates a lack of transparency that can make it challenging for users to identify and resolve issues. To address this challenge, my internship project aims to develop and enhance CommDebugMode, a Python context manager that will serve as one of the primary debugging tools for DTensors. CommDebugMode is a python context manager that enables users to view when and why collective operations are happening when using DTensors, addressing this problem.
+As the size of models continues to increase, users are seeking to leverage various combinations
+of parallel strategies to scale up distributed training. However, the lack of interoperability
+between existing solutions poses a significant challenge, primarily due to the absence of a
+unified abstraction that can bridge these different parallelism strategies. To address this
+issue, PyTorch has proposed DistributedTensor (DTensor), which abstracts away the complexities of
+tensor communication in distributed training, providing a seamless user experience. However,
+this abstraction creates a lack of transparency that can make it challenging for users to
+identify and resolve issues. To address this challenge, CommDebugMode, a Python context manager,
+will serve as one of the primary debugging tools for DTensors, enabling users to view when and
+why collective operations are happening when using DTensors.


 How to use CommDebugMode
@@ -53,15 +62,19 @@ Using CommDebugMode and getting its output is very simple.
 *c10d_functional.all_reduce: 1
 """

-All users have to do is wrap the code running the model in CommDebugMode and call the API that they want to use to display the data. One important thing to note
-is that the users can use a noise_level arguement to control how much information is displayed to the user. You can see what each noise_level will display to the user.
+All users have to do is wrap the code running the model in CommDebugMode and call the API that
+they want to use to display the data. One important thing to note is that users can use a
+noise_level argument to control how much information is displayed. The list below shows what
+each noise level displays; a usage sketch follows the list:

 | 0. prints module-level collective counts
 | 1. prints DTensor operations not included in trivial operations, module information
 | 2. prints operations not included in trivial operations
 | 3. prints all operations
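For reference, a minimal sketch of this workflow (an editor's illustration, assuming the ``CommDebugMode`` context manager exported from ``torch.distributed._tensor.debug`` and its ``generate_comm_debug_tracing_table`` helper; ``model`` and ``inp`` are simple placeholders rather than the tutorial's sharded MLPModule):

.. code-block:: python

    import torch
    import torch.nn as nn
    from torch.distributed._tensor.debug import CommDebugMode

    # placeholder model and input; in the tutorial these would be
    # DTensor-parallelized, so collectives would actually be recorded
    model = nn.Linear(16, 16)
    inp = torch.rand(4, 16)

    # wrap the code running the model in CommDebugMode
    comm_mode = CommDebugMode()
    with comm_mode:
        output = model(inp)

    # noise_level=0 prints only module-level collective counts;
    # higher levels add the operation-level detail listed above
    print(comm_mode.generate_comm_debug_tracing_table(noise_level=0))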
-In the example above, users can see in the first picture that the collective operation, all_reduce, occurs once in the forward pass of the MLPModule. The second picture provides a greater level of detail, allowing users to pinpoint that the all-reduce operation happens in the second linear layer of the MLPModule.
+In the example above, users can see in the first picture that the collective operation, all_reduce, occurs
+once in the forward pass of the MLPModule. The second picture provides a greater level of detail, allowing
+users to pinpoint that the all-reduce operation happens in the second linear layer of the MLPModule.


 Below is the interactive module tree visualization that users can upload their JSON dump to:
@@ -174,3 +187,11 @@ Below is the interactive module tree visualization that users can upload their JSON dump to:
 <script src="https://cdn.jsdelivr.net/gh/pytorch/pytorch@main/torch/distributed/_tensor/debug/comm_mode_broswer_visual.js"></script>
 </body>
 </html>
+
+Conclusion
+------------------------------------------
+In this recipe, we have learned how to use CommDebugMode to debug DTensors and how to use the
+JSON dumps it generates in the embedded visual browser.
+
+For more detailed information about CommDebugMode, please see
+https://github.com/pytorch/pytorch/blob/main/torch/distributed/_tensor/examples/comm_mode_features_example.py
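Producing a JSON dump for the visual browser is a single call on the same ``comm_mode`` instance from the earlier sketch; this assumes the ``generate_json_dump`` helper with ``file_name`` and ``noise_level`` parameters, as used in the examples file linked above:

.. code-block:: python

    # writes a JSON file that the embedded visual browser can load;
    # noise_level=2 includes operations beyond trivial ones
    comm_mode.generate_json_dump(file_name="comm_mode_log.json", noise_level=2)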

recipes_source/recipes_index.rst

Lines changed: 8 additions & 0 deletions
@@ -395,6 +395,13 @@ Recipes are bite-sized, actionable examples of how to use specific PyTorch features
    :link: ../recipes/distributed_async_checkpoint_recipe.html
    :tags: Distributed-Training

+.. customcarditem::
+   :header: Getting Started with CommDebugMode
+   :card_description: Learn how to use CommDebugMode for DTensors
+   :image: ../_static/img/thumbnails/cropped/generic-pytorch-logo.png
+   :link: ../recipes/distributed_comm_debug_mode.html
+   :tags: Distributed-Training
+
 .. TorchServe

 .. customcarditem::
@@ -449,3 +456,4 @@ Recipes are bite-sized, actionable examples of how to use specific PyTorch features
    /recipes/cuda_rpc
    /recipes/distributed_optim_torchscript
    /recipes/mobile_interpreter
+   /recipes/distributed_comm_debug_mode
