Dispatcher tutorial #1072
Conversation
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Deploy preview for pytorch-tutorials-preview ready! Built with commit 2a96e0d: https://deploy-preview-1072--pytorch-tutorials-preview.netlify.app
Thanks!! Looks great to me!
This is really good, and so good to have. Handful of minor suggestions inline but nothing important.
advanced_source/dispatcher.rst (outdated)
:end-before: END myadd_cpu

We'd like to register this function as an implementation of ``myops::myadd``, but we
don't want to register it as a catch-all kernel to be run in all cases; we
I don't know if later text relies on the notion of catch-all being introduced here, but for the immediate explanation you could probably just omit it - "we'd like to register this function as an implementation... but we only want it to be run..."
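(For context, the registration being discussed has roughly this shape — a sketch, not the tutorial's exact code; ``myadd_cpu`` is the CPU kernel defined just above in the tutorial:)

```cpp
#include <torch/library.h>

// Registering myadd_cpu for the CPU dispatch key (rather than as a
// catch-all) means it runs only when myadd is called with CPU tensors.
TORCH_LIBRARY_IMPL(myops, CPU, m) {
  m.impl("myadd", myadd_cpu);
}
```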
advanced_source/dispatcher.rst (outdated)
defined in the ``TORCH_LIBRARY`` block). You can have as many
``TORCH_LIBRARY_IMPL`` blocks for a namespace as you like; so for example,
if we also have a CUDA implementation ``myadd_cuda``, we can register it
with:
Might this paragraph leave you with the impression that you need a separate TORCH_LIBRARY_IMPL block for each m.impl() call?
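(To the reviewer's point: a separate block is only needed per (namespace, dispatch key) pair; one block can hold any number of ``m.impl()`` calls. A sketch, where ``mymul_cuda`` is a hypothetical second kernel:)

```cpp
// One TORCH_LIBRARY_IMPL block per (namespace, dispatch key) pair,
// but multiple m.impl() calls can share the same block.
TORCH_LIBRARY_IMPL(myops, CUDA, m) {
  m.impl("myadd", myadd_cuda);
  m.impl("mymul", mymul_cuda);  // hypothetical second operator
}
```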
advanced_source/dispatcher.rst (outdated)
These registrations can be split across files or even across library boundaries; so
for example, you could have these two ``TORCH_LIBRARY_IMPL`` blocks compiled
into a separate ``myops_cpu`` and ``myops_cuda`` dynamic library.
"into separate ... dynamic libraries", I think
core operators in PyTorch? This is how XLA support for PyTorch is
implemented: the ``torch_xla`` library contains a ``TORCH_LIBRARY_IMPL``
that provides implementations for all basic operators on the XLA dispatch
key.
Ah yeah this is a perfect aside
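(Illustratively, an out-of-tree backend's registrations have roughly this shape; the operator names follow ATen schemas, ``xla_add`` is a hypothetical kernel, and the real ``torch_xla`` generates its registrations rather than writing them by hand:)

```cpp
// Illustrative shape of an out-of-tree backend registration on the
// XLA dispatch key; xla_add is a hypothetical kernel.
TORCH_LIBRARY_IMPL(aten, XLA, m) {
  m.impl("add.Tensor", xla_add);
  // ... one impl per supported core operator ...
}
```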
advanced_source/dispatcher.rst (outdated)
2. Call the dispatch function ``myadd`` to call back into the dispatcher.

Without (1), your calls will infinite loop (and stack overflow), because
``myadd`` will send you back to the autograd implementation! With (1),
I'm not sure how to integrate the information smoothly, but it might clarify to note that we get back here because this function will be registered to Autograd
(per what immediately follows ofc)
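(The pattern under discussion, roughly — a sketch, not the tutorial's exact code; the guard type for step (1) has changed names across PyTorch releases, and ``at::AutoNonVariableTypeMode`` is the era-appropriate one. ``myadd`` is the dispatched entry function defined earlier in the tutorial:)

```cpp
#include <torch/torch.h>

using torch::Tensor;
using torch::autograd::AutogradContext;
using torch::autograd::tensor_list;

class MyAddFunction : public torch::autograd::Function<MyAddFunction> {
 public:
  static Tensor forward(AutogradContext* ctx, Tensor self, Tensor other) {
    // (1) Temporarily disable the autograd dispatch key, so the
    // redispatch below doesn't land back on this function and loop.
    at::AutoNonVariableTypeMode g;
    // (2) Call back into the dispatcher; it now picks the backend kernel.
    return myadd(self, other);
  }

  static tensor_list backward(AutogradContext* ctx, tensor_list grad_outputs) {
    auto grad_output = grad_outputs[0];
    return {grad_output, grad_output};
  }
};

Tensor myadd_autograd(const Tensor& self, const Tensor& other) {
  return MyAddFunction::apply(self, other);
}

// This is why the dispatcher would otherwise "send you back": the
// wrapper itself is registered to the Autograd key.
TORCH_LIBRARY_IMPL(myops, Autograd, m) {
  m.impl("myadd", myadd_autograd);
}
```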
* Dispatcher tutorial
* typofix
* morefix

Signed-off-by: Edward Z. Yang <ezyang@fb.com>