
Commit 8c9da61

committed
Recommend python agnosticism in cpp custom op tutorial
1 parent df9e226 commit 8c9da61

File tree

1 file changed: +50 −31 lines changed


advanced_source/cpp_custom_ops.rst

Lines changed: 50 additions & 31 deletions
@@ -62,41 +62,30 @@ Using ``cpp_extension`` is as simple as writing the following ``setup.py``:
 
    setup(name="extension_cpp",
          ext_modules=[
-           cpp_extension.CppExtension("extension_cpp", ["muladd.cpp"])],
-         cmdclass={'build_ext': cpp_extension.BuildExtension})
+           cpp_extension.CppExtension(
+             "extension_cpp",
+             ["muladd.cpp"],
+             py_limited_api=True)],
+         cmdclass={'build_ext': cpp_extension.BuildExtension},
+         options={"bdist_wheel": {"py_limited_api": "cp39"}}
+   )
 
 If you need to compile CUDA code (for example, ``.cu`` files), then instead use
 `torch.utils.cpp_extension.CUDAExtension <https://pytorch.org/docs/stable/cpp_extension.html#torch.utils.cpp_extension.CUDAExtension>`_.
 Please see `extension-cpp <https://github.com/pytorch/extension-cpp>`_ for an
 example of how this is set up.
 
-Starting with PyTorch 2.6, you can now build a single wheel for multiple CPython
-versions (similar to what you would do for pure python packages). In particular,
+Note that you can build a single wheel for multiple CPython versions (similar to
+what you would do for pure python packages) starting with PyTorch 2.6. Specifically,
 if your custom library adheres to the `CPython Stable Limited API
 <https://docs.python.org/3/c-api/stable.html>`_ or avoids CPython entirely, you
 can build one Python agnostic wheel against a minimum supported CPython version
-through setuptools' ``py_limited_api`` flag, like so:
-
-.. code-block:: python
-
-   from setuptools import setup, Extension
-   from torch.utils import cpp_extension
-
-   setup(name="extension_cpp",
-         ext_modules=[
-           cpp_extension.CppExtension(
-             "extension_cpp",
-             ["python_agnostic_code.cpp"],
-             py_limited_api=True)],
-         cmdclass={'build_ext': cpp_extension.BuildExtension},
-         options={"bdist_wheel": {"py_limited_api": "cp39"}}
-   )
+through setuptools' ``py_limited_api`` flag.
 
-Note that you must specify ``py_limited_api=True`` both within ``setup``
+It is necessary to specify ``py_limited_api=True`` both within ``setup``
 and also as an option to the ``"bdist_wheel"`` command with the minimal supported
 Python version (in this case, 3.9). This ``setup`` would build one wheel that could
-be installed across multiple Python versions ``python>=3.9``. Please see
-`torchao <https://github.com/pytorch/ao>`_ for an example.
+be installed across multiple Python versions ``python>=3.9``.
 
 .. note::
 
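The ``{"py_limited_api": "cp39"}`` option in the hunk above makes ``bdist_wheel`` emit a wheel tagged ``cp39-abi3``, whose filename encodes the minimum CPython version per the wheel-tag convention (PEP 425/427). As a rough stdlib-only illustration of what that tag means (``min_python_for_tag`` is a made-up helper, not part of the tutorial or PyTorch):

```python
import sys

def min_python_for_tag(python_tag: str) -> tuple[int, int]:
    """Decode a PEP 425 CPython tag like 'cp39' into a (major, minor) pair."""
    assert python_tag.startswith("cp"), "expected a CPython tag like 'cp39'"
    digits = python_tag[2:]
    # First digit is the major version; the rest is the minor (e.g. cp310 -> (3, 10)).
    return int(digits[0]), int(digits[1:])

# A wheel built with {"py_limited_api": "cp39"} carries tags like:
wheel_tag = "cp39-abi3-manylinux_2_17_x86_64"
minimum = min_python_for_tag(wheel_tag.split("-")[0])
print(minimum)  # (3, 9)

# The abi3 tag means any CPython at or above that version can install it:
print(sys.version_info[:2] >= minimum)
```

This is only a sketch of the naming convention; pip performs the real tag matching at install time.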

@@ -105,7 +94,7 @@ be installed across multiple Python versions ``python>=3.9``. Please see
 to build a wheel that looks Python agnostic but will crash, or worse, be silently
 incorrect, in another Python environment. Take care to avoid using unstable CPython
 APIs, for example APIs from libtorch_python (in particular pytorch/python bindings),
-and to only use APIs from libtorch (aten objects, operators and the dispatcher).
+and to only use APIs from libtorch (ATen objects, operators and the dispatcher).
 For example, to give access to custom ops from Python, the library should register
 the ops through the dispatcher (covered below!).
 
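One low-tech way to sanity-check sources before shipping a supposedly Python-agnostic wheel is to scan them for tokens that tie the extension to a specific CPython. A minimal sketch, assuming a token blocklist you would curate yourself (``looks_python_agnostic`` and the token list are hypothetical, not part of the tutorial):

```python
import re

# Illustrative, non-exhaustive blocklist: pybind11 module init is not
# limited-API safe by default, and PyList_GET_ITEM is a CPython macro
# outside the stable limited API.
SUSPECT_TOKENS = ["PYBIND11_MODULE", "PyList_GET_ITEM"]

def looks_python_agnostic(source: str) -> bool:
    """Heuristic check of one C++ source string: True when none of the
    suspect tokens appear."""
    return not any(re.search(rf"\b{tok}\b", source) for tok in SUSPECT_TOKENS)

# The "BAD" pattern from this diff trips the check:
print(looks_python_agnostic("PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {}"))  # False
# Pure libtorch dispatcher registration does not:
print(looks_python_agnostic('TORCH_LIBRARY(extension_cpp, m) { m.def("mymuladd"); }'))  # True
```

A grep is no substitute for actually building with ``Py_LIMITED_API`` defined, which makes the compiler reject non-limited-API usage; this sketch only catches the obvious cases.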

@@ -255,13 +244,43 @@ first load the C++ library that holds the custom operator definition
 and then call the ``torch.library`` registration APIs. This can happen in one
 of two ways:
 
-1. If you're following this tutorial, importing the Python C extension module
-   we created will load the C++ custom operator definitions.
-2. If your C++ custom operator is located in a shared library object, you can
-   also use ``torch.ops.load_library("/path/to/library.so")`` to load it. This
-   is the blessed path for Python agnosticism, as you will not have a Python C
-   extension module to import. See `torchao __init__.py <https://github.com/pytorch/ao/blob/881e84b4398eddcea6fee4d911fc329a38b5cd69/torchao/__init__.py#L26-L28>`_
-   for an example.
+
+1. In this tutorial, our C++ custom operator is located in a shared library object,
+   and we use ``torch.ops.load_library("/path/to/library.so")`` to load it. This
+   is the blessed path for Python agnosticism, and you will not have a Python C
+   extension module to import. See our `extension_cpp/__init__.py <https://github.com/pytorch/extension-cpp/blob/e4c4eb822889ea67f191071fa627d750e04bf047/extension_cpp/__init__.py>`_
+   for an example:
+
+   .. code-block:: python
+
+      import torch
+      from pathlib import Path
+
+      so_files = list(Path(__file__).parent.glob("_C*.so"))
+      assert (
+          len(so_files) == 1
+      ), f"Expected one _C*.so file, found {len(so_files)}"
+      torch.ops.load_library(so_files[0])
+
+      from . import ops
+
+
+2. You may also see other custom extensions importing the Python C extension module.
+   The module would be created in C++ and then imported in Python, like the code below.
+   This code is not guaranteed to use the stable limited CPython API and would block
+   your extension from building a Python-agnostic wheel! AVOID the following:
+
+   .. code-block:: cpp
+
+      // in, say, not_agnostic/csrc/extension_BAD.cpp
+      PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {}
+
+   and later imported in Python like so:
+
+   .. code-block:: python
+
+      # in, say, extension_BAD/__init__.py
+      from . import _C
 
 
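The glob-and-assert pattern that the new ``extension_cpp/__init__.py`` uses to locate the compiled library can be exercised on its own with the standard library. A minimal sketch (``find_extension_library`` and the directory layout are made up for illustration; only the final ``torch.ops.load_library`` call, shown as a comment, needs torch):

```python
import tempfile
from pathlib import Path

def find_extension_library(pkg_dir: Path) -> Path:
    """Locate exactly one compiled extension (_C*.so) in a package directory,
    mirroring the glob-and-assert pattern from extension_cpp/__init__.py."""
    so_files = list(pkg_dir.glob("_C*.so"))
    assert len(so_files) == 1, f"Expected one _C*.so file, found {len(so_files)}"
    return so_files[0]

with tempfile.TemporaryDirectory() as d:
    pkg = Path(d)
    (pkg / "_C.abi3.so").touch()  # stand-in for the built abi3 extension
    lib = find_extension_library(pkg)
    print(lib.name)  # _C.abi3.so
    # torch.ops.load_library(str(lib))  # what the real __init__.py does next
```

The hard failure on zero or multiple matches is deliberate: a stale ``.so`` from a previous build sitting next to the fresh one would otherwise be loaded nondeterministically.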
Adding training (autograd) support for an operator
