Description
@ricardoV94 did a nice perf improvement in pymc-devs/pymc#7578 to try to speed up the jitted backends. I tried out the torch backend as well, and the model ran quite slowly.
| mode | t_sampling (seconds) | manual measure (seconds) |
|---|---|---|
| NUMBA | 2.483 | 11.346 |
| PYTORCH (COMPILED) | 206.503 | 270.188 |
| PYTORCH (EAGER) | 60.607 | 64.140 |
We need to investigate why:
- Torch is so slow
- Torch compile is slower than eager mode
When doing perf evaluations, keep in mind that torch does a lot of caching. If you want a truly cache-less eval, you can either call `torch.compiler.reset()` between runs or set the env variable that disables the dynamo cache (google it).