
Commit 7f49f80

Merge branch 'main' into angelayi/export_fix
2 parents 200c594 + 20c3111

3 files changed: 9 additions & 0 deletions

advanced_source/custom_ops_landing_page.rst

Lines changed: 1 addition & 0 deletions
@@ -23,6 +23,7 @@ You may wish to author a custom operator from Python (as opposed to C++) if:
   respect to ``torch.compile`` and ``torch.export``.
 - you have some Python bindings to C++/CUDA kernels and want those to compose with PyTorch
   subsystems (like ``torch.compile`` or ``torch.autograd``)
+- you are using Python (and not a C++-only environment like AOTInductor).
 
 Integrating custom C++ and/or CUDA code with PyTorch
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
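
For illustration (not part of this commit), a minimal sketch of the case the existing bullet describes: wrapping a Python binding to a custom CUDA kernel with torch.library.custom_op so it composes with torch.compile and torch.export. The module my_kernels and the op name mylib::fused_gelu are hypothetical placeholders, not from the tutorial.

import torch
from torch import Tensor

import my_kernels  # hypothetical Python binding to a custom CUDA kernel

# Register the bound kernel as a custom operator so torch.compile treats it
# as an opaque op instead of trying to trace into the binding.
@torch.library.custom_op("mylib::fused_gelu", mutates_args=())
def fused_gelu(x: Tensor) -> Tensor:
    return my_kernels.fused_gelu(x)  # calls the (hypothetical) real kernel

# A fake implementation lets torch.compile / torch.export infer output
# metadata (shape, dtype, device) without running the kernel.
@fused_gelu.register_fake
def _(x: Tensor) -> Tensor:
    return torch.empty_like(x)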

advanced_source/python_custom_ops.py

Lines changed: 6 additions & 0 deletions
@@ -30,6 +30,12 @@
   into the function).
 - Adding training support to an arbitrary Python function
 
+Use :func:`torch.library.custom_op` to create Python custom operators.
+Use the C++ ``TORCH_LIBRARY`` APIs to create C++ custom operators (these
+work in Python-less environments).
+See the `Custom Operators Landing Page <https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html>`_
+for more details.
+
 Please note that if your operation can be expressed as a composition of
 existing PyTorch operators, then there is usually no need to use the custom operator
 API -- everything (for example ``torch.compile``, training support) should
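
For illustration (not part of this commit), a minimal sketch of what the added docstring points to: creating a Python custom operator with torch.library.custom_op, registering a fake implementation, and adding training support. The op name mylib::numpy_sin is a placeholder; the full walkthrough lives in the tutorial itself.

import numpy as np
import torch
from torch import Tensor

# Wrap a NumPy function as a PyTorch custom operator.
@torch.library.custom_op("mylib::numpy_sin", mutates_args=())
def numpy_sin(x: Tensor) -> Tensor:
    return torch.from_numpy(np.sin(x.numpy(force=True))).to(x.device)

# Fake implementation: gives torch.compile / torch.export the output metadata.
@numpy_sin.register_fake
def _(x: Tensor) -> Tensor:
    return torch.empty_like(x)

# Training support: d/dx sin(x) = cos(x).
def setup_context(ctx, inputs, output):
    (x,) = inputs
    ctx.save_for_backward(x)

def backward(ctx, grad):
    (x,) = ctx.saved_tensors
    return grad * x.cos()

numpy_sin.register_autograd(backward, setup_context=setup_context)

x = torch.randn(3, requires_grad=True)
numpy_sin(x).sum().backward()
assert torch.allclose(x.grad, x.cos())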

en-wordlist.txt

Lines changed: 2 additions & 0 deletions
@@ -392,6 +392,8 @@ FlexAttention
 fp
 frontend
 functionalized
+functionalizes
+functionalization
 functorch
 fuser
 geomean
