Commit d13605d

Fixes #2991
1 parent: df285cf

1 file changed: +5, -5 lines

advanced_source/extend_dispatcher.rst

Lines changed: 5 additions & 5 deletions
@@ -17,7 +17,7 @@ to `register a dispatched operator in C++ <dispatcher>`_ and how to write a
 What's a new backend?
 ---------------------
 
-Adding a new backend to PyTorch requires a lot of developement and maintainence from backend extenders.
+Adding a new backend to PyTorch requires a lot of development and maintenance from backend extenders.
 Before adding a new backend, let's first consider a few common use cases and recommended solutions for them:
 
 * If you have new algorithms for an existing PyTorch operator, send a PR to PyTorch.
@@ -30,7 +30,7 @@ Before adding a new backend, let's first consider a few common use cases and rec
 
 In this tutorial we'll mainly focus on adding a new out-of-tree device below. Adding out-of-tree support
 for a different tensor layout might share many common steps with devices, but we haven't seen an example of
-such integrations yet so it might require addtional work from PyTorch to support it.
+such integrations yet so it might require additional work from PyTorch to support it.
 
 Get a dispatch key for your backend
 -----------------------------------
@@ -67,12 +67,12 @@ To create a Tensor on ``PrivateUse1`` backend, you need to set dispatch key in `
 Note that ``TensorImpl`` class above assumes your Tensor is backed by a storage like CPU/CUDA. We also
 provide ``OpaqueTensorImpl`` for backends without a storage. And you might need to tweak/override certain
 methods to fit your customized hardware.
-One example in pytorch repo is `Vulkan TensorImpl <https://github.com/pytorch/pytorch/blob/1.7/aten/src/ATen/native/vulkan/VulkanOpaqueTensorImpl.h>`_.
+One example in pytorch repo is `Vulkan TensorImpl <https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/vulkan/VulkanOpaqueTensorImpl.h>`_.
 
 
 .. note::
   Once the prototype is done and you plan to do regular releases for your backend extension, please feel free to
-  submit a PR to ``pytorch/pytorch`` to reserve a dedicated dispath key for your backend.
+  submit a PR to ``pytorch/pytorch`` to reserve a dedicated dispatch key for your backend.
 
 
 Get the full list of PyTorch operators
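The hunk above refers to the ``PrivateUse1`` dispatch key that out-of-tree backends use before reserving a dedicated key. As a rough mental model only (plain Python with invented names, not PyTorch's actual C++ dispatcher API), the dispatcher can be pictured as a table mapping an (operator, dispatch key) pair to a kernel:

```python
# Conceptual sketch only: the real dispatcher lives in PyTorch's C++
# core. All names here (register_kernel, dispatch) are invented for
# illustration.
kernels = {}

def register_kernel(op, dispatch_key, fn):
    """Associate a kernel with an (operator, backend key) pair."""
    kernels[(op, dispatch_key)] = fn

def dispatch(op, dispatch_key, *args):
    """Look up and call the kernel registered for this backend."""
    return kernels[(op, dispatch_key)](*args)

# A hypothetical out-of-tree backend registers its kernel under the
# reserved PrivateUse1 key name.
register_kernel("add", "PrivateUse1", lambda a, b: a + b)
print(dispatch("add", "PrivateUse1", 2, 3))  # 5
```

Reserving a dedicated key, as the note suggests, simply means the backend gets its own name in this table instead of sharing ``PrivateUse1``.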
@@ -361,7 +361,7 @@ actively working on might improve the experience in the future:
 
 * Improve test coverage of generic testing framework.
 * Improve ``Math`` kernel coverage and more comprehensive tests to make sure ``Math``
-  kernel bahavior matches other backends like ``CPU/CUDA``.
+  kernel behavior matches other backends like ``CPU/CUDA``.
 * Refactor ``RegistrationDeclarations.h`` to carry the minimal information and reuse
   PyTorch's codegen as much as possible.
 * Support a backend fallback kernel to automatic convert inputs to CPU and convert the
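The last bullet in this hunk describes a backend fallback: when a backend has no kernel for an operator, inputs are converted to CPU, the CPU kernel runs, and the result is converted back. A minimal sketch of that flow (plain Python with invented names; the real mechanism is a C++ boxed fallback inside PyTorch):

```python
# Conceptual sketch of a backend fallback, not the real PyTorch API.
cpu_kernels = {"mul": lambda a, b: a * b}
backend_kernels = {}  # the new backend implements nothing yet

def to_cpu(x):
    return x  # stand-in for a real device-to-host copy

def from_cpu(x):
    return x  # stand-in for a real host-to-device copy

def dispatch(op, *args):
    # Prefer the backend's own kernel when one is registered.
    if op in backend_kernels:
        return backend_kernels[op](*args)
    # Fallback: convert inputs to CPU, run the CPU kernel,
    # and convert the result back to the backend's device.
    cpu_args = [to_cpu(a) for a in args]
    return from_cpu(cpu_kernels[op](*cpu_args))

print(dispatch("mul", 4, 5))  # 20
```

In practice the conversions are real device copies, so a fallback trades performance for coverage while a backend's native kernels are still being written.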
