recipes_source/distributed_comm_debug_mode.rst
What is CommDebugMode and why is it useful
------------------------------------------

As the size of models continues to increase, users are seeking to leverage various combinations
of parallel strategies to scale up distributed training. However, the lack of interoperability
between existing solutions poses a significant challenge, primarily due to the absence of a
unified abstraction that can bridge these different parallelism strategies. To address this
issue, PyTorch has proposed DistributedTensor (DTensor), which abstracts away the complexities of
tensor communication in distributed training, providing a seamless user experience. However,
this abstraction reduces transparency, which can make it challenging for users to identify and
resolve issues. To address this challenge, CommDebugMode, a Python context manager, serves as
one of the primary debugging tools for DTensors, enabling users to see when and why collective
operations happen when using DTensors.

How to use CommDebugMode
Using CommDebugMode and getting its output is very simple.

*c10d_functional.all_reduce: 1
"""

All users have to do is wrap the code running the model in CommDebugMode and call the API that
they want to use to display the data. One important thing to note is that users can pass a
``noise_level`` argument to control how much information is displayed. The list below shows
what each noise level displays:

|0. prints module-level collective counts
|1. prints DTensor operations not included in trivial operations, module information
|2. prints operations not included in trivial operations
62
73
|3. prints all operations
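The real CommDebugMode hooks into the PyTorch dispatcher, but its full API is not shown in this
excerpt. As a rough illustration of the mechanism only, the sketch below is a hypothetical,
stdlib-only stand-in that shows how a Python context manager can record and count collective
calls while it is active; every name in it (``ToyCommDebugMode``, ``record``, ``display``) is
made up for this sketch and is not the real DTensor API.

```python
from collections import Counter

class ToyCommDebugMode:
    """Hypothetical stand-in for CommDebugMode: counts "collective"
    operations recorded while the context manager is active."""

    def __init__(self):
        self.comm_counts = Counter()
        self._active = False

    def __enter__(self):
        self._active = True
        return self

    def __exit__(self, *exc):
        self._active = False
        return False  # do not swallow exceptions

    def record(self, op_name):
        # The real tool intercepts ops automatically via the dispatcher;
        # here we record them manually for illustration.
        if self._active:
            self.comm_counts[op_name] += 1

    def display(self, noise_level=0):
        # noise_level 0 corresponds to module-level collective counts
        # in the list above; higher levels would add per-op detail.
        return dict(self.comm_counts)

comm_mode = ToyCommDebugMode()
with comm_mode:
    comm_mode.record("c10d_functional.all_reduce")

print(comm_mode.display(noise_level=0))
# prints {'c10d_functional.all_reduce': 1}
```

The point of the context-manager shape is that everything executed inside the ``with`` block is
observed, while code outside it is not, which is exactly how wrapping the model's forward pass
scopes what CommDebugMode reports.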

In the example above, users can see in the first picture that the collective operation, all_reduce,
occurs once in the forward pass of the MLPModule. The second picture provides a greater level of
detail, allowing users to pinpoint that the all-reduce operation happens in the second linear layer
of the MLPModule.

Below is the interactive module tree visualization to which users can upload their JSON dump:
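The exact schema of the JSON dump that CommDebugMode generates is not shown in this excerpt.
Conceptually, though, it is a nested module tree carrying per-module collective counts, which is
what the visualization renders. The sketch below writes a hypothetical tree of that shape with
the standard library; the field names (``module``, ``collective_counts``, ``children``) and file
name are illustrative assumptions, not the real dump format.

```python
import json

# Hypothetical module tree with per-module collective counts, mirroring
# the MLPModule example above; the real schema may differ.
module_tree = {
    "module": "MLPModule",
    "collective_counts": {"c10d_functional.all_reduce": 1},
    "children": [
        {"module": "net1", "collective_counts": {}, "children": []},
        {
            "module": "net2",
            "collective_counts": {"c10d_functional.all_reduce": 1},
            "children": [],
        },
    ],
}

# Write the dump so it can be uploaded to the visualization tool.
with open("comm_mode_log.json", "w") as f:
    json.dump(module_tree, f, indent=2)
```

A nested-tree layout like this is what lets the visualization attribute the single all_reduce to
the second linear layer rather than only to the top-level module.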