2 parents a996498 + a91f631 commit 7cbcdb3
intermediate_source/dist_tuto.rst
@@ -252,7 +252,7 @@ PyTorch. Here are a few supported collectives.
   from all processes to ``tensor_list``, on all processes.
 - ``dist.barrier(group)``: Blocks all processes in `group` until each one has entered this function.
 - ``dist.all_to_all(output_tensor_list, input_tensor_list, group)``: Scatters list of input tensors to all processes in
-a group and return gathered list of tensors in output list.
+  a group and return gathered list of tensors in output list.

 The full list of supported collectives can be found by looking at the latest documentation for PyTorch Distributed
 `(link) <https://pytorch.org/docs/stable/distributed.html>`__.
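For context, the ``dist.all_to_all`` collective described in the changed lines scatters each rank's input list across the group and gathers one tensor from every rank into each rank's output list. A minimal pure-Python sketch of that semantics (plain lists stand in for tensors and ranks; this does not call ``torch.distributed``, it only illustrates the data movement):

```python
def all_to_all(inputs_per_rank):
    """Simulate dist.all_to_all semantics with plain lists.

    inputs_per_rank[src][dst] is the "tensor" rank src sends to rank dst.
    The returned value's [dst][src] entry is what rank dst received from
    rank src, i.e. each rank's output_tensor_list.
    """
    world_size = len(inputs_per_rank)
    return [
        [inputs_per_rank[src][dst] for src in range(world_size)]
        for dst in range(world_size)
    ]

# Each of 3 ranks holds one value destined for every rank (including itself).
inputs = [["a0", "a1", "a2"],   # rank 0's input_tensor_list
          ["b0", "b1", "b2"],   # rank 1's input_tensor_list
          ["c0", "c1", "c2"]]   # rank 2's input_tensor_list
outputs = all_to_all(inputs)
print(outputs[1])  # rank 1's output_tensor_list: ['a1', 'b1', 'c1']
```

In the real API, each process supplies only its own ``input_tensor_list`` and receives its own ``output_tensor_list``; the simulation above just views all ranks at once to make the transpose-like exchange visible.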