From 525a0b1f81123c3685fc498f74dfa34bafa4818d Mon Sep 17 00:00:00 2001
From: Stas Bekman
Date: Mon, 9 Dec 2024 12:43:36 -0800
Subject: [PATCH] fix wrapping

---
 intermediate_source/dist_tuto.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/intermediate_source/dist_tuto.rst b/intermediate_source/dist_tuto.rst
index c5ffc317c43..25eb372e5ef 100644
--- a/intermediate_source/dist_tuto.rst
+++ b/intermediate_source/dist_tuto.rst
@@ -252,7 +252,7 @@ PyTorch. Here are a few supported collectives.
    from all processes to ``tensor_list``, on all processes.
 -  ``dist.barrier(group)``: Blocks all processes in `group` until each one has entered this function.
 -  ``dist.all_to_all(output_tensor_list, input_tensor_list, group)``: Scatters list of input tensors to all processes in
-a group and return gathered list of tensors in output list.
+   a group and return gathered list of tensors in output list.
 
 The full list of supported collectives can be found by looking at the latest documentation for PyTorch Distributed
 `(link) <https://pytorch.org/docs/stable/distributed.html>`__.
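
Note (not part of the patch): a minimal sketch of the ``dist.all_to_all`` call described by the re-wrapped bullet, following the init_process/run pattern used elsewhere in dist_tuto.rst. The helper names, port, world size, and backend choice here are illustrative assumptions; ``all_to_all`` requires a backend that supports it (e.g. NCCL or MPI), so swap the backend string if your build's Gloo backend lacks it.

    import os
    import torch
    import torch.distributed as dist
    import torch.multiprocessing as mp


    def run(rank, size):
        # Each rank prepares one chunk addressed to every destination rank...
        input_tensor_list = [torch.tensor([rank * size + i], dtype=torch.float32)
                             for i in range(size)]
        # ...and one empty slot to receive from every source rank.
        output_tensor_list = [torch.zeros(1) for _ in range(size)]
        dist.all_to_all(output_tensor_list, input_tensor_list)
        # output_tensor_list[i] now holds the chunk that rank i addressed to this rank.
        print(f"rank {rank} received {[t.item() for t in output_tensor_list]}")


    def init_process(rank, size, fn, backend):
        """Initialize the distributed environment, then run the collective."""
        os.environ["MASTER_ADDR"] = "127.0.0.1"
        os.environ["MASTER_PORT"] = "29500"
        dist.init_process_group(backend, rank=rank, world_size=size)
        fn(rank, size)


    if __name__ == "__main__":
        size = 2
        # "gloo" keeps the sketch CPU-only; use "nccl" or "mpi" if all_to_all
        # is unsupported by Gloo in your PyTorch build.
        mp.spawn(init_process, args=(size, run, "gloo"), nprocs=size, join=True)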