@@ -33,9 +33,9 @@ Defining schema and backend implementations
The general principle behind the dispatcher is that it divides the
implementation of an operator into multiple kernels, each of which
implements functionality for a specific *dispatch key*; for example,
- `CPU`, `CUDA` or `Autograd`. The end effect is that when you call
- an operator, we first execute the `Autograd` kernel, and then we
- redispatch to the `CPU` or `CUDA` kernel depending on the device
+ CPU, CUDA or Autograd. The end effect is that when you call
+ an operator, we first execute the Autograd kernel, and then we
+ redispatch to the CPU or CUDA kernel depending on the device
types of the passed in tensors.

Let's take a look at the various parts involved in making this
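To make the split into per-key kernels concrete, here is a condensed sketch of the kind of registration the tutorial builds up in ``op.cpp``. The ``myops::myadd`` operator name is the one the tutorial uses; the kernel bodies below are simplified placeholders (they just defer to ``at::add``) rather than the tutorial's real implementations:

.. code-block:: cpp

  #include <torch/library.h>
  #include <ATen/ATen.h>

  // Simplified stand-ins for real backend kernels: a production kernel would
  // contain hand-written CPU loops / CUDA launches instead of calling at::add.
  at::Tensor myadd_cpu(const at::Tensor& self, const at::Tensor& other) {
    TORCH_CHECK(self.device().is_cpu() && other.device().is_cpu());
    return at::add(self, other);  // placeholder for a real CPU implementation
  }

  at::Tensor myadd_cuda(const at::Tensor& self, const at::Tensor& other) {
    TORCH_CHECK(self.is_cuda() && other.is_cuda());
    return at::add(self, other);  // placeholder for a real CUDA kernel launch
  }

  // One schema definition for the operator...
  TORCH_LIBRARY(myops, m) {
    m.def("myadd(Tensor self, Tensor other) -> Tensor");
  }

  // ...and one kernel per dispatch key. After the Autograd kernel runs, the
  // dispatcher picks between these based on the device of the input tensors.
  TORCH_LIBRARY_IMPL(myops, CPU, m) {
    m.impl("myadd", myadd_cpu);
  }

  TORCH_LIBRARY_IMPL(myops, CUDA, m) {
    m.impl("myadd", myadd_cuda);
  }

With these registrations in place, calling ``myops::myadd`` on CUDA tensors ends up in ``myadd_cuda``, while CPU tensors reach ``myadd_cpu``.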
@@ -69,7 +69,7 @@ To do this, we can use the ``TORCH_LIBRARY_IMPL`` macro:
  :end-before: END TORCH_LIBRARY_IMPL CPU

The ``TORCH_LIBRARY_IMPL`` lets us register implementations for operators on
- a specific dispatch key (in this case, ``CPU``). Each call to ``impl``
+ a specific dispatch key (in this case, CPU). Each call to ``impl``
associates a CPU kernel with the corresponding operator (which we previously
defined in the ``TORCH_LIBRARY`` block). You can have as many
``TORCH_LIBRARY_IMPL`` blocks for a namespace as you like; so for example,
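As a hedged illustration of those ``impl`` calls: a single block can register CPU kernels for several operators in a namespace, and further blocks for the same namespace can live in other translation units. The ``mymul`` operator below is invented for illustration and is not part of the tutorial's ``op.cpp``:

.. code-block:: cpp

  #include <torch/library.h>
  #include <ATen/ATen.h>

  // Hypothetical CPU kernels, assumed to be defined elsewhere in the project.
  at::Tensor myadd_cpu(const at::Tensor& self, const at::Tensor& other);
  at::Tensor mymul_cpu(const at::Tensor& self, const at::Tensor& other);

  // One TORCH_LIBRARY_IMPL block may register CPU kernels for any number of
  // operators in the namespace. A matching m.def("mymul(...)") would also be
  // needed in the TORCH_LIBRARY block for the hypothetical mymul operator.
  TORCH_LIBRARY_IMPL(myops, CPU, m) {
    m.impl("myadd", myadd_cpu);
    m.impl("mymul", mymul_cpu);
  }

  // Additional TORCH_LIBRARY_IMPL blocks for the same namespace (for example a
  // CUDA block in a separate .cu file) are allowed and simply add more kernels.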
@@ -147,7 +147,7 @@ The autograd function is written as normal using ``torch::autograd::Function``,
except that instead of directly writing the implementation in ``forward()``,
we:

- 1. Turn off autograd handling with the `at::AutoNonVariableTypeMode`` RAII
+ 1. Turn off autograd handling with the ``at::AutoNonVariableTypeMode`` RAII
   guard, and then
2. Call the dispatch function ``myadd`` to call back into the dispatcher.
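Put together, the two steps above look roughly like the following sketch, condensed from the pattern the tutorial uses in ``op.cpp`` (``myadd`` is the dispatch-calling wrapper defined earlier in the tutorial; the backward formula shown is simply the one for addition):

.. code-block:: cpp

  #include <torch/library.h>
  #include <torch/autograd.h>
  #include <ATen/ATen.h>

  // Dispatching wrapper defined earlier in the tutorial: it calls back into
  // the dispatcher, which now redispatches to the CPU or CUDA kernel.
  at::Tensor myadd(const at::Tensor& self, const at::Tensor& other);

  class MyAddFunction : public torch::autograd::Function<MyAddFunction> {
   public:
    static at::Tensor forward(
        torch::autograd::AutogradContext* ctx,
        at::Tensor self, at::Tensor other) {
      at::AutoNonVariableTypeMode g;  // (1) turn off autograd handling
      return myadd(self, other);      // (2) redispatch below the Autograd key
    }

    static torch::autograd::variable_list backward(
        torch::autograd::AutogradContext* ctx,
        torch::autograd::variable_list grad_outputs) {
      // For z = x + y, dz/dx = dz/dy = 1, so the gradient passes through.
      return {grad_outputs[0], grad_outputs[0]};
    }
  };

  at::Tensor myadd_autograd(const at::Tensor& self, const at::Tensor& other) {
    return MyAddFunction::apply(self, other);
  }

  // Register the autograd wrapper on the Autograd dispatch key.
  TORCH_LIBRARY_IMPL(myops, Autograd, m) {
    m.impl("myadd", myadd_autograd);
  }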
@@ -249,24 +249,6 @@ general rules:
* Any operation that does a convolution or gemm under the hood should
  probably be float16

- ..
-
-   NB: This doesn't work because torch.ops doesn't support names.
-
-   Named
-   ^^^^^
-
-   `Named tensors <https://pytorch.org/docs/stable/named_tensor.html>`_ allow
-   users to associate explicit names with tensor dimensions, and then have those
-   dimensions be propagated when you run operations on those tensors. If you
-   define a new operator, you have to also define rules for how names should
-   be checked and propagated. The Named kernel handles implementing these rules.
-
-   .. literalinclude:: ../advanced_source/dispatcher/op.cpp
-     :language: cpp
-     :start-after: BEGIN TORCH_LIBRARY_IMPL Named
-     :end-before: END TORCH_LIBRARY_IMPL Named
-

Batched
^^^^^^^
@@ -282,5 +264,5 @@ Tracer
The Tracer dispatch key implements support for recording invocations of operators
into a trace when you run ``torch.jit.trace``. We intend to provide a
boxed fallback that will implement tracing for arbitrary operations,
- see `issue #41478 <https://github.com/pytorch/pytorch/issues/41478>` to track
+ see `issue #41478 <https://github.com/pytorch/pytorch/issues/41478>`_ to track
progress.
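For reference, a boxed fallback is registered with ``m.fallback`` rather than per-operator ``impl`` calls. The sketch below only shows the registration shape under that assumption; the kernel body is a placeholder and is not the tracing fallback tracked in that issue:

.. code-block:: cpp

  #include <torch/library.h>
  #include <ATen/core/dispatch/Dispatcher.h>
  #include <c10/core/impl/LocalDispatchKeySet.h>

  // Placeholder boxed kernel: it sees every operator call (as an OperatorHandle
  // plus a stack of IValues) that reaches the Tracer key.
  void tracer_fallback(const c10::OperatorHandle& op, torch::jit::Stack* stack) {
    // ... a real implementation would record op.schema() and the inputs on
    // `stack` into the trace here ...
    c10::impl::ExcludeDispatchKeyGuard guard(c10::DispatchKey::Tracer);
    op.callBoxed(stack);  // run the underlying kernel below the Tracer key
  }

  // "_" registers the fallback for operators in every namespace.
  TORCH_LIBRARY_IMPL(_, Tracer, m) {
    m.fallback(torch::CppFunction::makeFromBoxedFunction<&tracer_fallback>());
  }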