
Commit 898dede

morefix
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
1 parent b478ae9 commit 898dede

1 file changed: +6 -24 lines changed

advanced_source/dispatcher.rst

Lines changed: 6 additions & 24 deletions
@@ -33,9 +33,9 @@ Defining schema and backend implementations
 The general principle behind the dispatcher is that it divides the
 implementation of an operator into multiple kernels, each of which
 implements functionality for a specific *dispatch key*; for example,
-`CPU`, `CUDA` or `Autograd`. The end effect is that when you call
-an operator, we first execute the `Autograd` kernel, and then we
-redispatch to the `CPU` or `CUDA` kernel depending on the device
+CPU, CUDA or Autograd. The end effect is that when you call
+an operator, we first execute the Autograd kernel, and then we
+redispatch to the CPU or CUDA kernel depending on the device
 types of the passed in tensors.
 
 Let's take a look at the various parts involved in making this
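
Editor's note: for readers skimming only the diff, the hunk above describes the kernel-per-dispatch-key model. A minimal C++ sketch of the two pieces it presumes, following the tutorial's running ``myops::myadd`` example (the operator name and namespace come from the tutorial, not from this commit):

    #include <torch/library.h>
    #include <ATen/core/dispatch/Dispatcher.h>

    // One schema definition for the operator; per-key kernels (CPU, CUDA,
    // Autograd, ...) are registered separately with TORCH_LIBRARY_IMPL.
    TORCH_LIBRARY(myops, m) {
      m.def("myadd(Tensor self, Tensor other) -> Tensor");
    }

    // The dispatch function: looks up the operator once, then lets the
    // dispatcher pick a kernel based on the dispatch keys of the arguments.
    torch::Tensor myadd(torch::Tensor self, torch::Tensor other) {
      static auto op = c10::Dispatcher::singleton()
          .findSchemaOrThrow("myops::myadd", "")
          .typed<decltype(myadd)>();
      return op.call(self, other);
    }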
@@ -69,7 +69,7 @@ To do this, we can use the ``TORCH_LIBRARY_IMPL`` macro:
    :end-before: END TORCH_LIBRARY_IMPL CPU
 
 The ``TORCH_LIBRARY_IMPL`` lets us register implementations for operators on
-a specific dispatch key (in this case, ``CPU``). Each call to ``impl``
+a specific dispatch key (in this case, CPU). Each call to ``impl``
 associates a CPU kernel with the corresponding operator (which we previously
 defined in the ``TORCH_LIBRARY`` block). You can have as many
 ``TORCH_LIBRARY_IMPL`` blocks for a namespace as you like; so for example,
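
Editor's note: the ``literalinclude`` in this hunk pulls the CPU registration out of the tutorial's op.cpp. Roughly, the block being referenced looks like the sketch below; the kernel body is simplified here and assumes contiguous float tensors:

    #include <torch/library.h>

    // Plain CPU kernel; simplified sketch assuming contiguous float tensors.
    torch::Tensor myadd_cpu(const torch::Tensor& self, const torch::Tensor& other) {
      TORCH_CHECK(self.sizes() == other.sizes());
      torch::Tensor result = torch::empty_like(self);
      const float* self_ptr = self.data_ptr<float>();
      const float* other_ptr = other.data_ptr<float>();
      float* result_ptr = result.data_ptr<float>();
      for (int64_t i = 0; i < result.numel(); i++) {
        result_ptr[i] = self_ptr[i] + other_ptr[i];
      }
      return result;
    }

    // Associate the kernel with myops::myadd for the CPU dispatch key only.
    TORCH_LIBRARY_IMPL(myops, CPU, m) {
      m.impl("myadd", myadd_cpu);
    }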
@@ -147,7 +147,7 @@ The autograd function is written as normal using ``torch::autograd::Function``,
 except that instead of directly writing the implementation in ``forward()``,
 we:
 
-1. Turn off autograd handling with the `at::AutoNonVariableTypeMode`` RAII
+1. Turn off autograd handling with the ``at::AutoNonVariableTypeMode`` RAII
    guard, and then
 2. Call the dispatch function ``myadd`` to call back into the dispatcher.
 
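Editor's note: steps 1 and 2 in the corrected list are the two lines of ``forward()`` in the tutorial's autograd kernel. A sketch in that spirit (``myadd`` is the tutorial's dispatch function, forward-declared here):

    #include <torch/torch.h>

    using torch::autograd::AutogradContext;
    using torch::autograd::tensor_list;

    torch::Tensor myadd(torch::Tensor self, torch::Tensor other);  // dispatch function

    class MyAddFunction : public torch::autograd::Function<MyAddFunction> {
     public:
      static torch::Tensor forward(
          AutogradContext* ctx, torch::Tensor self, torch::Tensor other) {
        at::AutoNonVariableTypeMode g;  // step 1: turn off autograd handling
        return myadd(self, other);      // step 2: redispatch to CPU/CUDA
      }

      static tensor_list backward(AutogradContext* ctx, tensor_list grad_outputs) {
        auto grad_output = grad_outputs[0];
        return {grad_output, grad_output};  // gradient of a+b w.r.t. both inputs is 1
      }
    };

    torch::Tensor myadd_autograd(torch::Tensor self, torch::Tensor other) {
      return MyAddFunction::apply(self, other);
    }

    TORCH_LIBRARY_IMPL(myops, Autograd, m) {
      m.impl("myadd", myadd_autograd);
    }
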
@@ -249,24 +249,6 @@ general rules:
 * Any operation that does a convolution or gemm under the hood should
   probably be float16
 
-..
-
-  NB: This doesn't work because torch.ops doesn't support names.
-
-  Named
-  ^^^^^
-
-  `Named tensors <https://pytorch.org/docs/stable/named_tensor.html>`_ allow
-  users to associate explicit names with tensor dimensions, and then have those
-  dimensions be propagated when you run operations on those tensors. If you
-  define a new operator, you have to also define rules for how names should
-  be checked and propagated. The Named kernel handles implementing these rules.
-
-  .. literalinclude:: ../advanced_source/dispatcher/op.cpp
-     :language: cpp
-     :start-after: BEGIN TORCH_LIBRARY_IMPL Named
-     :end-before: END TORCH_LIBRARY_IMPL Named
-
 Batched
 ^^^^^^^
 
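Editor's note: the float16 rule in the surviving context belongs to the tutorial's Autocast discussion. A hedged sketch of the kind of autocast wrapper it describes (``cached_cast`` and the Autocast key are the mechanisms the tutorial covers; ``myadd`` is its running example, not part of this commit):

    #include <torch/torch.h>
    #include <ATen/autocast_mode.h>

    torch::Tensor myadd(torch::Tensor self, torch::Tensor other);  // dispatch function

    // Cast inputs to float16, then redispatch with Autocast excluded so we
    // do not re-enter this wrapper.
    torch::Tensor myadd_autocast(const torch::Tensor& self, const torch::Tensor& other) {
      c10::impl::ExcludeDispatchKeyGuard no_autocast(c10::DispatchKey::Autocast);
      return myadd(at::autocast::cached_cast(at::kHalf, self),
                   at::autocast::cached_cast(at::kHalf, other));
    }

    TORCH_LIBRARY_IMPL(myops, Autocast, m) {
      m.impl("myadd", myadd_autocast);
    }
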
@@ -282,5 +264,5 @@ Tracer
 The Tracer dispatch key implements support for recording invocations of operators
 into a trace when you run ``torch.jit.trace``. We intend to provide a
 boxed fallback that will implement tracing for arbitrary operations,
-see `issue #41478 <https://github.com/pytorch/pytorch/issues/41478>` to track
+see `issue #41478 <https://github.com/pytorch/pytorch/issues/41478>`_ to track
 progress.
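
Editor's note: as a rough illustration of what "boxed fallback" means here (a hypothetical sketch only; the real tracing fallback is what issue #41478 tracks), a single boxed function receives every operator through a stack of IValues, so one registration can cover arbitrary ops:

    #include <torch/library.h>
    #include <ATen/core/dispatch/Dispatcher.h>

    // Hypothetical boxed fallback: runs for every operator that hits the
    // Tracer key, so it could record the call before continuing dispatch.
    void tracing_fallback(const c10::OperatorHandle& op, torch::jit::Stack* stack) {
      // ... record op.schema() and the boxed arguments on the stack ...
      c10::impl::ExcludeDispatchKeyGuard guard(c10::DispatchKey::Tracer);
      op.callBoxed(stack);  // continue dispatch below the Tracer key
    }

    // `_` registers the fallback for all namespaces at once.
    TORCH_LIBRARY_IMPL(_, Tracer, m) {
      m.fallback(torch::CppFunction::makeFromBoxedFunction<&tracing_fallback>());
    }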
