advanced_source/extend_dispatcher.rst (5 additions, 5 deletions)
@@ -17,7 +17,7 @@ to `register a dispatched operator in C++ <dispatcher>`_ and how to write a
 What's a new backend?
 ---------------------

-Adding a new backend to PyTorch requires a lot of developement and maintainence from backend extenders.
+Adding a new backend to PyTorch requires a lot of development and maintenance from backend extenders.
 Before adding a new backend, let's first consider a few common use cases and recommended solutions for them:

 * If you have new algorithms for an existing PyTorch operator, send a PR to PyTorch.
@@ -30,7 +30,7 @@ Before adding a new backend, let's first consider a few common use cases and rec

 In this tutorial we'll mainly focus on adding a new out-of-tree device below. Adding out-of-tree support
 for a different tensor layout might share many common steps with devices, but we haven't seen an example of
-such integrations yet so it might require addtional work from PyTorch to support it.
+such integrations yet so it might require additional work from PyTorch to support it.

 Get a dispatch key for your backend
 -----------------------------------
@@ -67,12 +67,12 @@ To create a Tensor on ``PrivateUse1`` backend, you need to set dispatch key in `
 Note that ``TensorImpl`` class above assumes your Tensor is backed by a storage like CPU/CUDA. We also
 provide ``OpaqueTensorImpl`` for backends without a storage. And you might need to tweak/override certain
 methods to fit your customized hardware.
-One example in pytorch repo is `Vulkan TensorImpl <https://github.com/pytorch/pytorch/blob/1.7/aten/src/ATen/native/vulkan/VulkanOpaqueTensorImpl.h>`_.
+One example in pytorch repo is `Vulkan TensorImpl <https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/vulkan/VulkanOpaqueTensorImpl.h>`_.


 .. note::
   Once the prototype is done and you plan to do regular releases for your backend extension, please feel free to
-  submit a PR to ``pytorch/pytorch`` to reserve a dedicated dispath key for your backend.
+  submit a PR to ``pytorch/pytorch`` to reserve a dedicated dispatch key for your backend.


 Get the full list of PyTorch operators
@@ -361,7 +361,7 @@ actively working on might improve the experience in the future:

 * Improve test coverage of generic testing framework.
 * Improve ``Math`` kernel coverage and more comprehensive tests to make sure ``Math``
-  kernel bahavior matches other backends like ``CPU/CUDA``.
+  kernel behavior matches other backends like ``CPU/CUDA``.
 * Refactor ``RegistrationDeclarations.h`` to carry the minimal information and reuse
   PyTorch's codegen as much as possible.
 * Support a backend fallback kernel to automatic convert inputs to CPU and convert the
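The context lines above describe the core dispatcher idea this tutorial builds on: each operator keeps a table of kernels indexed by dispatch key (one per backend, with ``PrivateUse1`` reserved for prototype backends), and a backend may register a fallback that redispatches to CPU. As a rough illustration only — this is a toy plain-Python sketch, not PyTorch's actual C++ dispatcher API, and all names here are hypothetical:

```python
# Toy model of a dispatch table: (op name, dispatch key) -> kernel,
# plus an optional per-key fallback that redispatches elsewhere.
# This mirrors the concept in the tutorial, not any real PyTorch API.

class ToyDispatcher:
    def __init__(self):
        self.kernels = {}    # (op, key) -> callable
        self.fallbacks = {}  # key -> callable(dispatcher, op, *args)

    def register(self, op, key, kernel):
        self.kernels[(op, key)] = kernel

    def register_fallback(self, key, fallback):
        self.fallbacks[key] = fallback

    def call(self, op, key, *args):
        kernel = self.kernels.get((op, key))
        if kernel is not None:
            return kernel(*args)
        if key in self.fallbacks:
            # e.g. convert inputs to CPU, run the CPU kernel, convert back
            return self.fallbacks[key](self, op, *args)
        raise NotImplementedError(f"no kernel for {op} on backend {key}")

d = ToyDispatcher()
d.register("add", "CPU", lambda a, b: a + b)
# "PrivateUse1" plays the role of the reserved prototype-backend key;
# its fallback simply redispatches the op to the CPU kernel.
d.register_fallback("PrivateUse1",
                    lambda disp, op, *args: disp.call(op, "CPU", *args))

print(d.call("add", "CPU", 1, 2))          # direct CPU kernel -> 3
print(d.call("add", "PrivateUse1", 1, 2))  # falls back to CPU -> 3
```

The fallback here is what the last bullet alludes to: a backend without a dedicated kernel can still run every operator by round-tripping through CPU, at the cost of extra conversions.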