From f689ac8bb95aec7d902102cc84d6a035f9e48a3b Mon Sep 17 00:00:00 2001
From: Shen Li
Date: Sat, 11 Apr 2020 15:05:57 -0700
Subject: [PATCH] Update links in RPC tutorial

---
 intermediate_source/rpc_tutorial.rst | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/intermediate_source/rpc_tutorial.rst b/intermediate_source/rpc_tutorial.rst
index 6d149e80837..e88030b5e5f 100644
--- a/intermediate_source/rpc_tutorial.rst
+++ b/intermediate_source/rpc_tutorial.rst
@@ -4,7 +4,7 @@ Getting Started with Distributed RPC Framework
 This tutorial uses two simple examples to demonstrate how to build distributed
-training with the `torch.distributed.rpc `__
+training with the `torch.distributed.rpc `__
 package which is first introduced as an experimental feature in PyTorch v1.4.
 Source code of the two examples can be found in
 `PyTorch examples `__.
@@ -31,19 +31,19 @@ paradigms. For example:
   machines.

-The `torch.distributed.rpc `__ package
-can help with the above scenarios. In case 1, `RPC `__
-and `RRef `__ allow sending data
+The `torch.distributed.rpc `__ package
+can help with the above scenarios. In case 1, `RPC `__
+and `RRef `__ allow sending data
 from one worker to another while easily referencing remote data objects. In
-case 2, `distributed autograd `__
-and `distributed optimizer `__
+case 2, `distributed autograd `__
+and `distributed optimizer `__
 make executing backward pass and optimizer step as if it is local training.
 In the next two sections, we will demonstrate APIs of
-`torch.distributed.rpc `__ using a
+`torch.distributed.rpc `__ using a
 reinforcement learning example and a language model example. Please note,
 this tutorial does not aim at building the most accurate or efficient models
 to solve given problems, instead, the main goal here is to show how to use the
-`torch.distributed.rpc `__ package to
+`torch.distributed.rpc `__ package to
 build distributed training applications.

@@ -305,10 +305,10 @@ observers. The agent serves as master by repeatedly calling ``run_episode`` and
 ``finish_episode`` until the running reward surpasses the reward threshold
 specified by the environment. All observers passively waiting for commands
 from the agent. The code is wrapped by
-`rpc.init_rpc `__ and
-`rpc.shutdown `__,
+`rpc.init_rpc `__ and
+`rpc.shutdown `__,
 which initializes and terminates RPC instances respectively. More details are
-available in the `API page `__.
+available in the `API page `__.

 .. code:: python

@@ -458,7 +458,7 @@ takes a GPU tensor, you need to move it to the proper device explicitly.
 With the above sub-modules, we can now piece them together using RPC to create
 an RNN model. In the code below ``ps`` represents a parameter server, which
 hosts parameters of the embedding table and the decoder. The constructor
-uses the `remote `__
+uses the `remote `__
 API to create an ``EmbeddingTable`` object and a ``Decoder`` object on the
 parameter server, and locally creates the ``LSTM`` sub-module. During the
 forward pass, the trainer uses the ``EmbeddingTable`` ``RRef`` to find the
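
For readers checking the APIs whose links this patch touches, the snippet below is a minimal sketch (not part of the patched tutorial) of how ``rpc_sync``, ``RRef``, distributed autograd, and ``DistributedOptimizer`` fit together; the worker name ``"ps"``, the toy tensors, and the ``run_training_step`` helper are illustrative assumptions.

.. code:: python

    import torch
    import torch.distributed.autograd as dist_autograd
    import torch.distributed.rpc as rpc
    from torch import optim
    from torch.distributed.optim import DistributedOptimizer


    def run_training_step():
        # Assumes rpc.init_rpc() has already run and a peer named "ps" exists.
        with dist_autograd.context() as context_id:
            t1 = torch.rand((3, 3), requires_grad=True)
            t2 = torch.rand((3, 3), requires_grad=True)
            # Case 1: RPC sends data to another worker; the result comes back here.
            loss = rpc.rpc_sync("ps", torch.add, args=(t1, t2)).sum()
            # Case 2: distributed autograd stitches the cross-worker graph so the
            # backward pass reads like local training.
            dist_autograd.backward(context_id, [loss])
            # The distributed optimizer takes parameter RRefs and applies updates
            # on whichever worker owns each parameter.
            dist_optim = DistributedOptimizer(
                optim.SGD, [rpc.RRef(t1), rpc.RRef(t2)], lr=0.05
            )
            dist_optim.step(context_id)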
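
The ``rpc.init_rpc``/``rpc.shutdown`` wrapping described in the agent/observer hunk typically follows the launcher pattern sketched below; the worker names, port, and three-process world size are assumptions for illustration rather than the tutorial's exact code.

.. code:: python

    import os

    import torch.distributed.rpc as rpc
    import torch.multiprocessing as mp


    def run_worker(rank, world_size):
        os.environ["MASTER_ADDR"] = "localhost"
        os.environ["MASTER_PORT"] = "29500"
        if rank == 0:
            # Rank 0 is the agent/master; it drives training by issuing RPCs.
            rpc.init_rpc("agent", rank=rank, world_size=world_size)
            # ... repeatedly call run_episode / finish_episode here ...
        else:
            # Remaining ranks are passive observers that only answer RPCs.
            rpc.init_rpc(f"observer{rank}", rank=rank, world_size=world_size)
        # Blocks until all outstanding RPC work finishes, then tears down the agent.
        rpc.shutdown()


    if __name__ == "__main__":
        world_size = 3  # one agent plus two observers
        mp.spawn(run_worker, args=(world_size,), nprocs=world_size, join=True)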
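
Likewise, the ``remote`` API referenced in the last hunk returns an ``RRef`` to an object constructed on another worker. A minimal sketch, assuming a worker named ``ps`` and substituting plain ``nn.Embedding``/``nn.Linear`` for the tutorial's ``EmbeddingTable`` and ``Decoder`` classes:

.. code:: python

    import torch.nn as nn
    import torch.distributed.rpc as rpc


    class RNNModel(nn.Module):
        def __init__(self, ps, ntoken, ninp, nhid, nlayers):
            super().__init__()
            # remote() returns immediately with an RRef; the two modules are
            # constructed on, and stay on, the parameter server worker `ps`.
            self.emb_table_rref = rpc.remote(ps, nn.Embedding, args=(ntoken, ninp))
            self.decoder_rref = rpc.remote(ps, nn.Linear, args=(nhid, ntoken))
            # The LSTM stays local to the trainer process.
            self.rnn = nn.LSTM(ninp, nhid, nlayers)

A trainer would construct this after ``rpc.init_rpc``, for example as ``RNNModel("ps", ntoken=10000, ninp=200, nhid=200, nlayers=2)``, and then use the two ``RRef`` handles in its forward pass.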