Dispatcher tutorial #1072


Merged
merged 4 commits into master on Jul 22, 2020
Conversation

ezyang
Contributor

@ezyang ezyang commented Jul 15, 2020

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
@netlify

netlify bot commented Jul 15, 2020

Deploy preview for pytorch-tutorials-preview ready!

Built with commit 2a96e0d

https://deploy-preview-1072--pytorch-tutorials-preview.netlify.app

ezyang added 2 commits July 15, 2020 13:44
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Contributor

@ailzhang ailzhang left a comment


Thanks!! Looks great to me!


@bhosmer bhosmer left a comment


This is really good, and so good to have. Handful of minor suggestions inline but nothing important.

:end-before: END myadd_cpu

We'd like to register this function as an implementation of ``myops::myadd``, but we
don't want to register it as a catch-all kernel to be run in all cases; we


I don't know if later text relies on the notion of catch-all being introduced here, but for the immediate explanation you could probably just omit it - "we'd like to register this function as an implementation... but we only want it to be run..."

defined in the ``TORCH_LIBRARY`` block). You can have as many
``TORCH_LIBRARY_IMPL`` blocks for a namespace as you like; so for example,
if we also have a CUDA implementation ``myadd_cuda``, we can register it
with:
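The registration pattern described above can be sketched as follows. This is a minimal sketch based on the API names in the quoted text; the kernel bodies `myadd_cpu` and `myadd_cuda` are assumed to be defined elsewhere, as in the tutorial:

```cpp
#include <torch/library.h>

// Assumed to be defined elsewhere (plain CPU and CUDA kernels).
torch::Tensor myadd_cpu(const torch::Tensor& self, const torch::Tensor& other);
torch::Tensor myadd_cuda(const torch::Tensor& self, const torch::Tensor& other);

// Declare the operator schema once, in a TORCH_LIBRARY block.
TORCH_LIBRARY(myops, m) {
  m.def("myadd(Tensor self, Tensor other) -> Tensor");
}

// Register per-backend kernels in TORCH_LIBRARY_IMPL blocks keyed by
// dispatch key. A single block may contain several m.impl() calls.
TORCH_LIBRARY_IMPL(myops, CPU, m) {
  m.impl("myadd", myadd_cpu);
}

TORCH_LIBRARY_IMPL(myops, CUDA, m) {
  m.impl("myadd", myadd_cuda);
}
```

Because each `TORCH_LIBRARY_IMPL` block is an independent registration, the CPU and CUDA blocks above could also live in different translation units or different dynamic libraries, as the next paragraph notes.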


Might this paragraph leave you with the impression that you need a separate TORCH_LIBRARY_IMPL block for each m.impl() call?


These registrations can be split across files or even across library boundaries; so
for example, you could have these two ``TORCH_LIBRARY_IMPL`` blocks compiled
into a separate ``myops_cpu`` and ``myops_cuda`` dynamic library.


"into separate ... dynamic libraries", I think

core operators in PyTorch? This is how XLA support for PyTorch is
implemented: the ``torch_xla`` library contains a ``TORCH_LIBRARY_IMPL``
that provides implementations for all basic operators on the XLA dispatch
key.


Ah yeah this is a perfect aside

2. Call the dispatch function ``myadd`` to call back into the dispatcher.

Without (1), your calls will infinite loop (and stack overflow), because
``myadd`` will send you back to the autograd implementation! With (1),
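The two steps quoted above can be sketched as an autograd kernel in the style of the tutorial. This is a hedged sketch, not the tutorial's exact code: `myadd` is assumed to be the dispatched entry point declared earlier, and the pass-through backward reflects that addition propagates the incoming gradient to both inputs:

```cpp
#include <torch/torch.h>

using torch::Tensor;
using torch::autograd::AutogradContext;
using torch::autograd::tensor_list;

// Assumed: the dispatched entry point for myops::myadd, defined elsewhere.
Tensor myadd(const Tensor& self, const Tensor& other);

class MyAddFunction : public torch::autograd::Function<MyAddFunction> {
 public:
  static Tensor forward(AutogradContext* ctx, Tensor self, Tensor other) {
    // (1) Exclude the autograd handling for the nested call, so the
    // dispatcher routes to the backend kernel instead of looping back here.
    at::AutoNonVariableTypeMode g;
    // (2) Call back into the dispatcher via the dispatch function.
    return myadd(self, other);
  }

  static tensor_list backward(AutogradContext* ctx, tensor_list grad_outputs) {
    // d(a + b)/da = d(a + b)/db = 1, so pass the gradient through to both.
    auto grad = grad_outputs[0];
    return {grad, grad};
  }
};

Tensor myadd_autograd(const Tensor& self, const Tensor& other) {
  return MyAddFunction::apply(self, other);
}

// Register on the Autograd key: this kernel runs first when autograd is
// active, then re-dispatches to the CPU/CUDA kernels registered earlier.
TORCH_LIBRARY_IMPL(myops, Autograd, m) {
  m.impl("myadd", myadd_autograd);
}
```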


I'm not sure how to integrate the information smoothly, but it might clarify to note that we get back here because this function will be registered to Autograd (per what immediately follows ofc)

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
@ezyang ezyang merged commit b6ffdb9 into master Jul 22, 2020
@brianjo brianjo deleted the pr/dispatcher-tutorial branch August 6, 2020 18:05
rodrigo-techera pushed a commit to Experience-Monks/tutorials that referenced this pull request Nov 29, 2021
* Dispatcher tutorial

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

* typofix

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

* morefix

Signed-off-by: Edward Z. Yang <ezyang@fb.com>