
💡 [REQUEST] - Write a Tutorial for Using, Debugging, and Performance Profiling torch.compile with the Inductor CPU Backend #2348

Closed
@jgong5

Description


🚀 Describe the improvement or the new tutorial

PyTorch 2.0 introduced the flagship compilation API, torch.compile, which offers a significant speedup over eager-mode execution through graph-level optimization powered by the default TorchInductor backend. While this new feature has generated excitement within the PyTorch community, there is a lack of comprehensive tutorials that delve into the intricacies of torch.compile. The existing tutorials focus primarily on basic usage and miss essential aspects such as exploring the underlying generated code, debugging potential issues, and conducting performance profiling. This proposal therefore aims to address that gap with an in-depth tutorial designed specifically for the Inductor CPU backend.
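For context, below is a minimal sketch of the kind of workflow such a tutorial would cover: compiling a model with the default Inductor backend, dumping the generated code for inspection, and profiling the compiled run. The model and tensor names are purely illustrative; the TORCH_COMPILE_DEBUG environment variable and the torch.profiler API are assumed here as the standard tools for code inspection and CPU profiling.

```python
# Minimal sketch (illustrative, not from the issue): compile a toy model with
# the default Inductor backend, then profile the compiled execution on CPU.
# Run with TORCH_COMPILE_DEBUG=1 to have Inductor dump its generated code and
# graph artifacts under ./torch_compile_debug/ for inspection.
import torch
import torch.nn as nn

class ToyModel(nn.Module):  # hypothetical model used only for this sketch
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(128, 128)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = ToyModel().eval()
x = torch.randn(32, 128)

# "inductor" is the default backend; it is passed explicitly here for clarity.
compiled = torch.compile(model, backend="inductor")

with torch.no_grad():
    compiled(x)  # first call triggers compilation
    # Profile the compiled model; the same block on the eager model gives a baseline.
    with torch.profiler.profile(
        activities=[torch.profiler.ProfilerActivity.CPU]
    ) as prof:
        compiled(x)
    print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```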

Existing tutorials on this topic

https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html

Additional context

We aim to complete the document as part of PyTorch Docathon 2023. cc @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @ZailiWang @ZhaoqiongZ @leslie-fang-intel @Xia-Weiwen @sekahler2 @CaoE @zhuhaozhe @Valentine233 @EikanWang
