
[small] fix link #2906


Merged: 1 commit, Jun 4, 2024

8 changes: 4 additions & 4 deletions intermediate_source/process_group_cpp_extension_tutorial.rst
@@ -1,7 +1,7 @@
Customize Process Group Backends Using Cpp Extensions
=====================================================

-**Author**: `Howard Huang <https://github.com/H-Huang>`, `Feng Tian <https://github.com/ftian1>`__, `Shen Li <https://mrshenli.github.io/>`__, `Min Si <https://minsii.github.io/>`__
+**Author**: `Howard Huang <https://github.com/H-Huang>`__, `Feng Tian <https://github.com/ftian1>`__, `Shen Li <https://mrshenli.github.io/>`__, `Min Si <https://minsii.github.io/>`__

.. note::
|edit| View and edit this tutorial in `github <https://github.com/pytorch/tutorials/blob/main/intermediate_source/process_group_cpp_extension_tutorial.rst>`__.
@@ -100,7 +100,7 @@ repository for the full implementation.
// The collective communication APIs without a custom implementation
// will error out if invoked by application code.
};

class WorkDummy : public Work {
public:
WorkDummy(
@@ -266,8 +266,8 @@ After installation, you can conveniently use the ``dummy`` backend when calling
`init_process_group <https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group>`__
as if it were a builtin backend.

-We can specify dispatching based on backend by changing the ``backend`` argument of ``init_process_group``. We
-can dispatch collective with CPU tensor to ``gloo`` backend and dispatch collective with CUDA tensor to ``dummy`` backend by
+We can specify dispatching based on backend by changing the ``backend`` argument of ``init_process_group``. We
+can dispatch collective with CPU tensor to ``gloo`` backend and dispatch collective with CUDA tensor to ``dummy`` backend by
specifying ``cpu:gloo,cuda:dummy`` as the backend argument.

To send all tensors to ``dummy`` backend, we can simply specify ``dummy`` as the backend argument.
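
For context (not part of this PR's diff), here is a minimal usage sketch of the backend string described above. It assumes the tutorial's dummy extension has already been built and installed, and that it is importable as ``dummy_collectives``; the module name and the fact that importing it registers the backend are assumptions based on the tutorial's example project.

import os
import torch
import torch.distributed as dist
import dummy_collectives  # assumed module name; importing it is expected to register the "dummy" backend

os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "29500"

# Route collectives on CPU tensors to gloo and collectives on CUDA tensors
# to the custom dummy backend.
dist.init_process_group("cpu:gloo,cuda:dummy", rank=0, world_size=1)

x = torch.ones(4)
dist.all_reduce(x)  # CPU tensor, dispatched to gloo

if torch.cuda.is_available():
    y = torch.ones(4).cuda()
    dist.all_reduce(y)  # CUDA tensor, dispatched to dummy

# Passing backend="dummy" instead would send every collective, regardless of
# tensor device, to the dummy backend.

dist.destroy_process_group()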