
Commit 4c48624

Authored by mrshenli (Shen Li) and holly1238

Update links in RPC tutorial (#946)

Co-authored-by: Shen Li <shenli@devfair017.maas>
Co-authored-by: holly1238 <77758406+holly1238@users.noreply.github.com>

1 parent e478586, commit 4c48624

1 file changed: intermediate_source/rpc_tutorial.rst (13 additions, 13 deletions)
@@ -9,8 +9,8 @@ Prerequisites:
 - `RPC API documents <https://pytorch.org/docs/master/rpc.html>`__

 This tutorial uses two simple examples to demonstrate how to build distributed
-training with the `torch.distributed.rpc <https://pytorch.org/docs/master/rpc.html>`__
-package which is first introduced as a prototype feature in PyTorch v1.4.
+training with the `torch.distributed.rpc <https://pytorch.org/docs/stable/rpc.html>`__
+package which was first introduced as an experimental feature in PyTorch v1.4.
 Source code of the two examples can be found in
 `PyTorch examples <https://github.com/pytorch/examples>`__.

@@ -36,19 +36,19 @@ paradigms. For example:
 machines.


-The `torch.distributed.rpc <https://pytorch.org/docs/master/rpc.html>`__ package
-can help with the above scenarios. In case 1, `RPC <https://pytorch.org/docs/master/rpc.html#rpc>`__
-and `RRef <https://pytorch.org/docs/master/rpc.html#rref>`__ allow sending data
+The `torch.distributed.rpc <https://pytorch.org/docs/stable/rpc.html>`__ package
+can help with the above scenarios. In case 1, `RPC <https://pytorch.org/docs/stable/rpc.html#rpc>`__
+and `RRef <https://pytorch.org/docs/stable/rpc.html#rref>`__ allow sending data
 from one worker to another while easily referencing remote data objects. In
-case 2, `distributed autograd <https://pytorch.org/docs/master/rpc.html#distributed-autograd-framework>`__
-and `distributed optimizer <https://pytorch.org/docs/master/rpc.html#module-torch.distributed.optim>`__
+case 2, `distributed autograd <https://pytorch.org/docs/stable/rpc.html#distributed-autograd-framework>`__
+and `distributed optimizer <https://pytorch.org/docs/stable/rpc.html#module-torch.distributed.optim>`__
 make executing backward pass and optimizer step as if it is local training. In
 the next two sections, we will demonstrate APIs of
-`torch.distributed.rpc <https://pytorch.org/docs/master/rpc.html>`__ using a
+`torch.distributed.rpc <https://pytorch.org/docs/stable/rpc.html>`__ using a
 reinforcement learning example and a language model example. Please note, this
 tutorial does not aim at building the most accurate or efficient models to
 solve given problems, instead, the main goal here is to show how to use the
-`torch.distributed.rpc <https://pytorch.org/docs/master/rpc.html>`__ package to
+`torch.distributed.rpc <https://pytorch.org/docs/stable/rpc.html>`__ package to
 build distributed training applications.

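The hunk above only re-points links, but the APIs it names are easier to follow with a concrete toy. The following minimal sketch is not part of this commit or the tutorial: the worker name ``ps``, the helper functions, and the layer sizes are made up for illustration, and it assumes ``rpc.init_rpc`` has already been called on each process. It shows how ``remote``, ``RRef``, distributed autograd, and ``DistributedOptimizer`` can combine into one training step.

.. code:: python

    # Hypothetical sketch, not the tutorial's code: one trainer step against a
    # linear layer owned by a remote worker named "ps".
    import torch
    import torch.distributed.autograd as dist_autograd
    import torch.distributed.rpc as rpc
    from torch import nn, optim
    from torch.distributed.optim import DistributedOptimizer


    def get_param_rrefs(layer_rref):
        # Runs on the owner of layer_rref; wraps each parameter in an RRef so
        # DistributedOptimizer can address parameters living on other workers.
        return [rpc.RRef(p) for p in layer_rref.local_value().parameters()]


    def run_forward(layer_rref, x):
        # Runs on the owner; executes the remote module's forward pass there.
        return layer_rref.local_value()(x)


    def train_step():
        # Construct nn.Linear(8, 4) on the "ps" worker; only an RRef is held here.
        layer_rref = rpc.remote("ps", nn.Linear, args=(8, 4))
        param_rrefs = rpc.rpc_sync("ps", get_param_rrefs, args=(layer_rref,))
        opt = DistributedOptimizer(optim.SGD, param_rrefs, lr=0.05)

        with dist_autograd.context() as context_id:
            out = rpc.rpc_sync("ps", run_forward, args=(layer_rref, torch.randn(2, 8)))
            # Distributed backward stitches local and remote autograd graphs together.
            dist_autograd.backward(context_id, [out.sum()])
            # Updates parameters in place on the workers that own them.
            opt.step(context_id)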

@@ -289,10 +289,10 @@ observers. The agent serves as master by repeatedly calling ``run_episode`` and
 ``finish_episode`` until the running reward surpasses the reward threshold
 specified by the environment. All observers passively waiting for commands
 from the agent. The code is wrapped by
-`rpc.init_rpc <https://pytorch.org/docs/master/rpc.html#torch.distributed.rpc.init_rpc>`__ and
-`rpc.shutdown <https://pytorch.org/docs/master/rpc.html#torch.distributed.rpc.shutdown>`__,
+`rpc.init_rpc <https://pytorch.org/docs/stable/rpc.html#torch.distributed.rpc.init_rpc>`__ and
+`rpc.shutdown <https://pytorch.org/docs/stable/rpc.html#torch.distributed.rpc.shutdown>`__,
 which initializes and terminates RPC instances respectively. More details are
-available in the `API page <https://pytorch.org/docs/master/rpc.html>`__.
+available in the `API page <https://pytorch.org/docs/stable/rpc.html>`__.


 .. code:: python
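The ``rpc.init_rpc`` / ``rpc.shutdown`` pairing referenced in this hunk typically brackets everything a worker does. A minimal sketch, assuming a single-machine rendezvous via ``MASTER_ADDR``/``MASTER_PORT``; the worker names and world size are illustrative, not the tutorial's actual launcher code:

.. code:: python

    # Minimal sketch of the init/shutdown pairing, assuming a single-machine
    # rendezvous; worker names and world size are illustrative only.
    import os

    import torch.distributed.rpc as rpc

    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "29500")


    def run_worker(rank, world_size):
        name = "agent" if rank == 0 else f"observer{rank}"
        # Initializes the RPC framework and blocks until all workers have joined.
        rpc.init_rpc(name, rank=rank, world_size=world_size)
        # ... issue rpc_sync / rpc_async / remote calls from the agent here ...
        # Blocks until there is no outstanding RPC work anywhere, then tears down.
        rpc.shutdown()

A launcher would normally start one such process per worker, for example with ``torch.multiprocessing.spawn``.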
@@ -442,7 +442,7 @@ takes a GPU tensor, you need to move it to the proper device explicitly.
 With the above sub-modules, we can now piece them together using RPC to
 create an RNN model. In the code below ``ps`` represents a parameter server,
 which hosts parameters of the embedding table and the decoder. The constructor
-uses the `remote <https://pytorch.org/docs/master/rpc.html#torch.distributed.rpc.remote>`__
+uses the `remote <https://pytorch.org/docs/stable/rpc.html#torch.distributed.rpc.remote>`__
 API to create an ``EmbeddingTable`` object and a ``Decoder`` object on the
 parameter server, and locally creates the ``LSTM`` sub-module. During the
 forward pass, the trainer uses the ``EmbeddingTable`` ``RRef`` to find the
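The constructor pattern this hunk describes can be sketched as follows. This is not the tutorial's code: plain ``nn.Embedding`` and ``nn.Linear`` stand in for its ``EmbeddingTable`` and ``Decoder`` classes, and the sizes are placeholders.

.. code:: python

    # Sketch of the constructor pattern: sub-modules created on a parameter
    # server via rpc.remote, with the LSTM kept local to the trainer.
    import torch.distributed.rpc as rpc
    from torch import nn


    class RNNModel(nn.Module):
        def __init__(self, ps, ntoken=1000, ninp=32, nhid=32):
            super().__init__()
            # Created on the parameter server; the trainer only holds RRefs.
            self.emb_table_rref = rpc.remote(ps, nn.Embedding, args=(ntoken, ninp))
            self.decoder_rref = rpc.remote(ps, nn.Linear, args=(nhid, ntoken))
            # The LSTM stays local to the trainer process.
            self.rnn = nn.LSTM(ninp, nhid)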
