1 parent 1263f06 commit 525a0b1

intermediate_source/dist_tuto.rst
@@ -252,7 +252,7 @@ PyTorch. Here are a few supported collectives.
   from all processes to ``tensor_list``, on all processes.
 - ``dist.barrier(group)``: Blocks all processes in `group` until each one has entered this function.
 - ``dist.all_to_all(output_tensor_list, input_tensor_list, group)``: Scatters list of input tensors to all processes in
-a group and return gathered list of tensors in output list.
+ a group and return gathered list of tensors in output list.
 
 The full list of supported collectives can be found by looking at the latest documentation for PyTorch Distributed
 `(link) <https://pytorch.org/docs/stable/distributed.html>`__.
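The collectives discussed in the hunk above can be exercised with a short script. The following is a minimal sketch, assuming PyTorch with the ``gloo`` backend is available; it demonstrates ``dist.all_gather`` and ``dist.barrier`` (the rendezvous address and port chosen here are illustrative, not required values):

```python
import os

import torch
import torch.distributed as dist


def run(rank: int, world_size: int) -> list:
    """Gather each rank's tensor on every rank, then synchronize."""
    # Rendezvous settings for the default env:// init method;
    # the address and port here are illustrative.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    # gloo runs on CPU, so no GPUs are needed for this sketch.
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # Each process contributes its own tensor ...
    tensor = torch.tensor([float(rank)])
    tensor_list = [torch.zeros(1) for _ in range(world_size)]
    # ... and all_gather copies every rank's tensor to all processes.
    dist.all_gather(tensor_list, tensor)

    # dist.barrier() blocks until every process in the group reaches it.
    dist.barrier()
    dist.destroy_process_group()
    return [t.item() for t in tensor_list]


if __name__ == "__main__":
    # A trivial one-process "group" just to show the calls; real jobs
    # launch one process per rank, e.g. with torchrun or
    # torch.multiprocessing.spawn(run, args=(world_size,), nprocs=world_size).
    print(run(rank=0, world_size=1))
```

With a single process the gathered list trivially contains only rank 0's tensor; the same code runs unchanged when launched with one process per rank, where ``tensor_list`` then holds one tensor from every process in the group.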