From 3d3ba2006ace096c0119527b9e7e606337aa7dbb Mon Sep 17 00:00:00 2001
From: Zesheng Zong <54812088+zeshengzong@users.noreply.github.com>
Date: Sat, 19 Oct 2024 01:08:43 +0800
Subject: [PATCH] Fix format error on
 torch._dynamo.config.inline_inbuilt_nn_modules

---
 _posts/2024-10-17-pytorch2-5.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/_posts/2024-10-17-pytorch2-5.md b/_posts/2024-10-17-pytorch2-5.md
index 760f4946f175..4aa18d841b61 100644
--- a/_posts/2024-10-17-pytorch2-5.md
+++ b/_posts/2024-10-17-pytorch2-5.md
@@ -81,7 +81,7 @@ The cuDNN "Fused Flash Attention" backend was landed for *torch.nn.functional.s
 
 ### [Beta] *torch.compile* regional compilation without recompilations
 
-Regional compilation without recompilations, via* torch._dynamo.config.inline_inbuilt_nn_modules* which default to True in 2.5+. This option allows users to compile a repeated *nn.Module* (e.g. a transformer layer in LLM) without recompilations. Compared to compiling the full model, this option can result in smaller compilation latencies with 1%-5% performance degradation compared to full model compilation.
+Regional compilation without recompilations, via *torch._dynamo.config.inline_inbuilt_nn_modules*, which defaults to True in 2.5+. This option allows users to compile a repeated *nn.Module* (e.g. a transformer layer in an LLM) without recompilations. Compared to full model compilation, this option can result in smaller compilation latencies at the cost of 1%-5% performance degradation.
 
 See the [tutorial](https://pytorch.org/tutorials/recipes/regional_compilation.html) for more information.
 
@@ -152,4 +152,4 @@ Intel GPUs support enhancement is now available for both Intel® Data Center GPU
 
 * The implementation of SYCL* kernels to enhance coverage and execution of Aten operators on Intel GPUs to boost performance in PyTorch eager mode. 
 * Enhanced Intel GPU backend of torch.compile to improve inference and training performance for a wide range of deep learning workloads. 
-These features are available through PyTorch preview and nightly binary PIP wheels. For more information regarding Intel GPU support, please refer to [documentation](https://pytorch.org/docs/main/notes/get_start_xpu.html).
\ No newline at end of file
+These features are available through PyTorch preview and nightly binary PIP wheels. For more information regarding Intel GPU support, please refer to [documentation](https://pytorch.org/docs/main/notes/get_start_xpu.html).