diff --git a/prototype_source/pt2e_quant_ptq_x86_inductor.rst b/prototype_source/pt2e_quant_ptq_x86_inductor.rst
index f2cabe88949..60bd5ffa5a4 100644
--- a/prototype_source/pt2e_quant_ptq_x86_inductor.rst
+++ b/prototype_source/pt2e_quant_ptq_x86_inductor.rst
@@ -8,6 +8,7 @@ Prerequisites

 - `PyTorch 2 Export Post Training Quantization `_
 - `TorchInductor and torch.compile concepts in PyTorch `_
+- `Inductor C++ Wrapper concepts `_

 Introduction
 ^^^^^^^^^^^^^^
@@ -161,7 +162,18 @@ After these steps, we finished running the quantization flow and we will get the

 3. Lower into Inductor
 ------------------------

-After we get the quantized model, we will further lower it to the inductor backend.
+After we get the quantized model, we will further lower it to the inductor backend. The default Inductor wrapper
+generates Python code to invoke both generated and external kernels. In addition, Inductor supports a
+C++ wrapper that generates pure C++ code instead. This allows seamless integration of the generated and
+external kernels, effectively reducing the Python overhead. In the future, the C++ wrapper can also serve
+as a basis for pure C++ deployment. For more comprehensive details, please refer to the dedicated
+`Inductor C++ Wrapper Tutorial `_.
+
+::
+
+    # Optional: use the C++ wrapper instead of the default Python wrapper
+    import torch._inductor.config as config
+    config.cpp_wrapper = True

 ::