Debugging JIT-compiled code is challenging, given the complexity of modern
compilers and the daunting errors that they raise.
`The tutorial on how to diagnose runtime errors within torch.compile <https://pytorch.org/docs/main/torch.compiler_troubleshooting.html#diagnosing-runtime-errors>`__
contains a few tips and tricks on how to tackle this task.

If the above is not enough to pinpoint the origin of the issue, there are still
a few other NumPy-specific tools we can use. We can discern whether the bug
is entirely in the PyTorch code by disabling tracing through NumPy functions:

.. code-block:: python

    from torch._dynamo import config
    config.trace_numpy = False

If the bug lies in the traced NumPy code, we can execute the NumPy code eagerly (without ``torch.compile``)
using PyTorch as a backend via ``import torch._numpy as np``.
This should only be used for **debugging purposes** and is in no way a
replacement for the PyTorch API, as it is **much less performant** and, as a
private API, **may change without notice**. At any rate, ``torch._numpy`` is a
Python implementation of NumPy in terms of PyTorch and it is used internally by ``torch.compile`` to
transform NumPy code into PyTorch code. It is rather easy to read and modify,
so if you find any bug in it feel free to submit a PR fixing it or simply open
an issue.
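
As a rough sketch of this debugging workflow, one might write something along
these lines, where ``fn`` is just a hypothetical stand-in for your own NumPy
code:

.. code-block:: python

    import torch._numpy as np  # private API, for debugging only

    def fn(x):
        # The NumPy code under investigation goes here.
        return np.sin(x) + x.mean()

    # Runs eagerly with PyTorch as the backend; torch.compile is not involved.
    x = np.arange(6.0).reshape(2, 3)
    print(fn(x))
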
If the program does work when importing ``torch._numpy as np``, chances are
that the bug is in TorchDynamo. If this is the case, please feel free to open an issue
with a `minimal reproducer <https://pytorch.org/docs/2.1/torch.compiler_troubleshooting.html>`__.

I ``torch.compile`` some NumPy code and I did not see any speed-up.
--------------------------------------------------------------------

The best place to start is the
`tutorial with general advice for how to debug these sorts of torch.compile issues <https://pytorch.org/docs/main/torch.compiler_faq.html#why-am-i-not-seeing-speedups>`__.

Some graph breaks may happen because of the use of unsupported features. See
:ref:`nonsupported-numpy-feats`. More generally, it is useful to keep in mind
that some widely used NumPy features do not play well with compilers. For
example, in-place modifications make reasoning difficult within the compiler and
often yield worse performance than their out-of-place counterparts. As such, it is best to avoid
them. The same goes for the use of the ``out=`` parameter. Instead, prefer
out-of-place ops and let ``torch.compile`` optimize the memory use. The same goes
for data-dependent ops like masked indexing through boolean masks, or
data-dependent control flow like ``if`` or ``while`` constructions.
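
To make this concrete, here is a small illustrative sketch (the function names
are made up for this example) of rewriting an in-place / ``out=`` pattern into
its out-of-place equivalent:

.. code-block:: python

    import numpy as np
    import torch

    @torch.compile
    def demean_inplace(x):
        # In-place update and ``out=``: harder for the compiler to reason
        # about and often slower once compiled.
        x -= x.mean()
        np.abs(x, out=x)
        return x

    @torch.compile
    def demean(x):
        # Out-of-place equivalent: lets torch.compile optimize memory use.
        y = x - x.mean()
        return np.abs(y)

    x = np.arange(4.0)
    print(demean(x))
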