Commit 265c75a

Update outdated custom ops tutorials to point to the new landing page
Also turns on verification for the python custom ops tutorials.
1 parent 3b97695 commit 265c75a

8 files changed, +25 -16 lines

.jenkins/validate_tutorials_built.py

Lines changed: 0 additions & 1 deletion

@@ -29,7 +29,6 @@
     "intermediate_source/fx_conv_bn_fuser",
     "intermediate_source/_torch_export_nightly_tutorial", # does not work on release
     "advanced_source/super_resolution_with_onnxruntime",
-    "advanced_source/python_custom_ops", # https://github.com/pytorch/pytorch/issues/127443
     "advanced_source/ddp_pipeline", # requires 4 gpus
     "advanced_source/usb_semisup_learn", # fails with CUDA OOM error, should try on a different worker
     "prototype_source/fx_graph_mode_ptq_dynamic",

advanced_source/cpp_custom_ops.rst

Lines changed: 1 addition & 1 deletion

@@ -415,4 +415,4 @@ Conclusion
 In this tutorial, we went over the recommended approach to integrating Custom C++
 and CUDA operators with PyTorch. The ``TORCH_LIBRARY/torch.library`` APIs are fairly
 low-level. For more information about how to use the API, see
-`The Custom Operators Manual <https://pytorch.org/docs/main/notes/custom_operators.html>`_.
+`The Custom Operators Manual <https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html#the-custom-operators-manual>`_.

advanced_source/cpp_extension.rst

Lines changed: 5 additions & 1 deletion

@@ -2,6 +2,10 @@ Custom C++ and CUDA Extensions
 ==============================
 **Author**: `Peter Goldsborough <https://www.goldsborough.me/>`_
 
+.. warning::
+
+    This tutorial is deprecated as of PyTorch 2.4. Please see :ref:`custom-ops-landing-page`
+    for the newest up-to-date guides on extending PyTorch with Custom C++/CUDA Extensions.
 
 PyTorch provides a plethora of operations related to neural networks, arbitrary
 tensor algebra, data wrangling and other purposes. However, you may still find

@@ -225,7 +229,7 @@ Instead of:
 Currently open issue for nvcc bug `here
 <https://github.com/pytorch/pytorch/issues/69460>`_.
 Complete workaround code example `here
-<https://github.com/facebookresearch/pytorch3d/commit/cb170ac024a949f1f9614ffe6af1c38d972f7d48>`_.
+<https://github.com/facebookresearch/pytorch3d/commit/cb170ac024a949f1f9614ffe6af1c38d972f7d48>`_.
 
 Forward Pass
 ************

advanced_source/custom_ops_landing_page.rst

Lines changed: 4 additions & 2 deletions

@@ -19,17 +19,19 @@ Authoring a custom operator from Python
 Please see :ref:`python-custom-ops-tutorial`.
 
 You may wish to author a custom operator from Python (as opposed to C++) if:
+
 - you have a Python function you want PyTorch to treat as an opaque callable, especially with
-  respect to ``torch.compile`` and ``torch.export``.
+  respect to ``torch.compile`` and ``torch.export``.
 - you have some Python bindings to C++/CUDA kernels and want those to compose with PyTorch
-  subsystems (like ``torch.compile`` or ``torch.autograd``)
+  subsystems (like ``torch.compile`` or ``torch.autograd``)
 
 Integrating custom C++ and/or CUDA code with PyTorch
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 Please see :ref:`cpp-custom-ops-tutorial`.
 
 You may wish to author a custom operator from C++ (as opposed to Python) if:
+
 - you have custom C++ and/or CUDA code.
 - you plan to use this code with ``AOTInductor`` to do Python-less inference.
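
As a quick illustration of the first bullet in that hunk: a minimal sketch of the Python custom-op flow, assuming PyTorch 2.4+ and using made-up names (``mylib::numpy_sin``), which wraps an otherwise untraceable function so ``torch.compile``/``torch.export`` see it as one opaque call.

import numpy as np
import torch

# Wrap a function PyTorch cannot trace into (here: NumPy) so that
# torch.compile / torch.export treat it as a single opaque call.
@torch.library.custom_op("mylib::numpy_sin", mutates_args=())
def numpy_sin(x: torch.Tensor) -> torch.Tensor:
    return torch.from_numpy(np.sin(x.numpy()))

# A fake ("meta") implementation gives the compiler/exporter the output
# shape and dtype without running the real function.
@numpy_sin.register_fake
def _(x):
    return torch.empty_like(x)

@torch.compile(fullgraph=True)
def f(x):
    return numpy_sin(x)

print(f(torch.randn(3)))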

advanced_source/dispatcher.rst

Lines changed: 5 additions & 0 deletions

@@ -1,6 +1,11 @@
 Registering a Dispatched Operator in C++
 ========================================
 
+.. warning::
+
+    This tutorial is deprecated as of PyTorch 2.4. Please see :ref:`custom-ops-landing-page`
+    for the newest up-to-date guides on extending PyTorch with Custom Operators.
+
 The dispatcher is an internal component of PyTorch which is responsible for
 figuring out what code should actually get run when you call a function like
 ``torch::add``. This can be nontrivial, because PyTorch operations need
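
That context paragraph is the gist of the dispatcher: one operator name, several registered kernels, and the inputs decide which one runs. A rough Python-side sketch of the same idea with ``torch.library`` (library and operator names are invented for illustration, not taken from the tutorial):

import torch
from torch.library import Library, impl

lib = Library("sketchlib", "DEF")  # hypothetical library name
lib.define("my_add(Tensor a, Tensor b) -> Tensor")

# The dispatcher routes a call to one of these registrations based on
# the inputs' dispatch key (device, autograd state, ...).
@impl(lib, "my_add", "CPU")
def my_add_cpu(a, b):
    return a + b

@impl(lib, "my_add", "CUDA")
def my_add_cuda(a, b):
    # a real extension would launch a CUDA kernel here
    return a + b

print(torch.ops.sketchlib.my_add(torch.ones(2), torch.ones(2)))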

advanced_source/python_custom_ops.py

Lines changed: 1 addition & 1 deletion

@@ -258,5 +258,5 @@ def f(x):
 # For more detailed information, see:
 #
 # - `the torch.library documentation <https://pytorch.org/docs/stable/library.html>`_
-# - `the Custom Operators Manual <https://pytorch.org/docs/main/notes/custom_operators.html>`_
+# - `the Custom Operators Manual <https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html#the-custom-operators-manual>`_
 #

advanced_source/torch_script_custom_ops.rst

Lines changed: 5 additions & 0 deletions

@@ -1,6 +1,11 @@
 Extending TorchScript with Custom C++ Operators
 ===============================================
 
+.. warning::
+
+    This tutorial is deprecated as of PyTorch 2.4. Please see :ref:`custom-ops-landing-page`
+    for the newest up-to-date guides on PyTorch Custom Operators.
+
 The PyTorch 1.0 release introduced a new programming model to PyTorch called
 `TorchScript <https://pytorch.org/docs/master/jit.html>`_. TorchScript is a
 subset of the Python programming language which can be parsed, compiled and

intermediate_source/torch_export_tutorial.py

Lines changed: 4 additions & 10 deletions

@@ -544,25 +544,19 @@ def suggested_fixes():
 #
 # Currently, the steps to register a custom op for use by ``torch.export`` are:
 #
-# - Define the custom op using ``torch.library`` (`reference <https://pytorch.org/docs/main/library.html>`__)
+# - Define the custom op using ``torch.library`` (`reference <https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html>`__)
 #   as with any other custom op
 
-from torch.library import Library, impl, impl_abstract
-
-m = Library("my_custom_library", "DEF")
-
-m.define("custom_op(Tensor input) -> Tensor")
-
-@impl(m, "custom_op", "CompositeExplicitAutograd")
-def custom_op(x):
+@torch.library.custom_op("my_custom_library::custom_op", mutates_args={})
+def custom_op(input: torch.Tensor) -> torch.Tensor:
     print("custom_op called!")
     return torch.relu(x)
 
 ######################################################################
 # - Define a ``"Meta"`` implementation of the custom op that returns an empty
 #   tensor with the same shape as the expected output
 
-@impl_abstract("my_custom_library::custom_op")
+@custom_op.register_fake
 def custom_op_meta(x):
     return torch.empty_like(x)
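
Assuming ``custom_op`` and its fake implementation are registered as in the new version of this hunk, a short usage sketch (module and input are illustrative) shows the op surviving ``torch.export`` as a single node:

import torch

class M(torch.nn.Module):
    def forward(self, x):
        # calls the op registered under "my_custom_library::custom_op"
        return torch.ops.my_custom_library.custom_op(x)

# Export traces through the registered fake implementation, so the custom op
# appears in the exported graph as one opaque call.
ep = torch.export.export(M(), (torch.randn(3),))
print(ep)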
