
Commit fffb6b0

committed
update
1 parent 397012b commit fffb6b0

File tree

1 file changed: 60 additions & 0 deletions
@@ -0,0 +1,60 @@
.. _custom-ops-landing-page:

PyTorch Custom Operators Landing Page
=====================================

PyTorch offers a large library of operators that work on Tensors (e.g. ``torch.add``,
``torch.sum``, etc.). However, you may wish to bring a new custom operation to PyTorch
and get it to work with subsystems like ``torch.compile``, autograd, and ``torch.vmap``.
To do so, you must register the custom operation with PyTorch via the Python
`torch.library <https://pytorch.org/docs/stable/library.html>`_ or C++ ``TORCH_LIBRARY``
APIs.
TL;DR
-----

How do I author a custom op from Python?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Please see :ref:`python-custom-ops-tutorial`.
You may wish to author a custom operator from Python (as opposed to C++) if:

- you have a Python function you want PyTorch to treat as an opaque callable, especially with
  respect to ``torch.compile`` and ``torch.export``.
- you have some Python bindings to C++/CUDA kernels and want those to compose with PyTorch
  subsystems (like ``torch.compile`` or ``torch.autograd``).
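As a rough sketch of the first case (the tutorial above covers the details), the example
below wraps a NumPy-backed kernel with ``torch.library.custom_op``. The ``mylib::numpy_sin``
name and the function itself are made up for illustration and assume a recent PyTorch
release that provides this API.

.. code-block:: python

    import numpy as np
    import torch
    from torch import Tensor

    # Hypothetical op "mylib::numpy_sin"; the body is treated as opaque by torch.compile.
    @torch.library.custom_op("mylib::numpy_sin", mutates_args=())
    def numpy_sin(x: Tensor) -> Tensor:
        x_np = x.cpu().numpy()
        return torch.from_numpy(np.sin(x_np)).to(x.device)

    # A "fake" implementation describes the output metadata so torch.compile
    # and torch.export can trace through the op without running the kernel.
    @numpy_sin.register_fake
    def _(x: Tensor) -> Tensor:
        return torch.empty_like(x)

After registration, ``numpy_sin`` can be called like any other PyTorch operation and
composed with the subsystems listed above.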
How do I integrate custom C++ and/or CUDA code with PyTorch?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Please see :ref:`cpp-custom-ops-tutorial`.
You may wish to author a custom operator from C++ (as opposed to Python) if:

- you have custom C++ and/or CUDA code.
- you plan to use this code with ``AOTInductor`` to do Python-less inference.
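For orientation only, here is a hedged sketch of the Python side of such a setup; the
shared-library path and the ``myops::mymul`` operator are hypothetical and assume you have
already built a C++ extension that defines and implements the op via ``TORCH_LIBRARY``.

.. code-block:: python

    import torch

    # Hypothetical: a compiled C++ extension whose TORCH_LIBRARY block
    # defines and implements "myops::mymul(Tensor a, Tensor b) -> Tensor".
    torch.ops.load_library("build/libmy_extension.so")

    a, b = torch.randn(3), torch.randn(3)
    out = torch.ops.myops.mymul(a, b)  # registered ops surface under torch.ops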
The Custom Operators Manual
^^^^^^^^^^^^^^^^^^^^^^^^^^^

For information not covered in the tutorials and this page, please see
`The Custom Operators Manual <https://docs.google.com/document/d/1_W62p8WJOQQUzPsJYa7s701JXt0qf2OfLub2sbkHOaU>`_
(we're working on moving the information to our docs site). We recommend that you
first read one of the tutorials above and then use the Custom Operators Manual as a reference;
it is not meant to be read head to toe.
When should I create a Custom Operator?
---------------------------------------

If your operation is expressible as a composition of built-in PyTorch operators,
then please write it as a Python function and call it instead of creating a
custom operator. Use the operator registration APIs to create a custom operator if you
are calling into some library that PyTorch doesn't understand (e.g. custom C/C++ code,
a custom CUDA kernel, or Python bindings to C/C++/CUDA extensions).
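For instance, a made-up helper like the one below needs no operator registration at all,
because it is just a composition of built-in operators that PyTorch already understands.

.. code-block:: python

    import torch

    def scaled_tanh(x: torch.Tensor, alpha: float = 2.0) -> torch.Tensor:
        # Built-in ops only, so autograd, torch.compile, and torch.vmap
        # handle this function without any registration.
        return alpha * torch.tanh(x)

    y = torch.compile(scaled_tanh)(torch.randn(4, requires_grad=True))
    y.sum().backward()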
Why should I create a Custom Operator?
--------------------------------------

It is possible to use a C/C++/CUDA kernel by grabbing a Tensor's data pointer
and passing it to a pybind'ed kernel. However, this approach doesn't compose with
PyTorch subsystems like autograd, ``torch.compile``, ``torch.vmap``, and more. In order
for an operation to compose with PyTorch subsystems, it must be registered
via the operator registration APIs.
