From 6319e6188e96f4142094df2bb4e7be3d250a6589 Mon Sep 17 00:00:00 2001
From: Oguz Ulgen
Date: Thu, 27 Feb 2025 14:47:32 -0800
Subject: [PATCH] Fix caching tutorial order

---
 recipes_source/torch_compile_caching_tutorial.rst | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/recipes_source/torch_compile_caching_tutorial.rst b/recipes_source/torch_compile_caching_tutorial.rst
index ebc831cdb90..12fcad4163f 100644
--- a/recipes_source/torch_compile_caching_tutorial.rst
+++ b/recipes_source/torch_compile_caching_tutorial.rst
@@ -62,16 +62,17 @@ Consider the following example. First, compile and save the cache artifacts.
 
     artifacts = torch.compiler.save_cache_artifacts()
 
-    # Now, potentially store these artifacts in a database
+    assert artifacts is not None
+    artifact_bytes, cache_info = artifacts
+
+    # Now, potentially store artifact_bytes in a database
+    # You can use cache_info for logging
 
 Later, you can jump-start the cache by the following:
 
 .. code-block:: python
 
     # Potentially download/fetch the artifacts from the database
-    assert artifacts is not None
-    artifact_bytes, cache_info = artifacts
-
     torch.compiler.load_cache_artifacts(artifact_bytes)
 
 This operation populates all the modular caches that will be discussed in the next section, including ``PGO``, ``AOTAutograd``, ``Inductor``, ``Triton``, and ``Autotuning``.
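
For reference, the corrected save/load flow reads end to end roughly as follows. This is a minimal sketch, not part of the patch: the compiled function, the warm-up input, and the ``compile_cache.bin`` path are illustrative assumptions; only ``torch.compiler.save_cache_artifacts`` and ``torch.compiler.load_cache_artifacts`` come from the tutorial itself, and they require a PyTorch version that provides these APIs.

.. code-block:: python

    import torch

    @torch.compile
    def fn(x):
        return x.sin() + x.cos()

    # Run once so the compiler actually populates its caches.
    fn(torch.randn(8))

    # save_cache_artifacts() returns None on failure, otherwise a
    # (artifact_bytes, cache_info) pair -- hence the assert and unpack,
    # which this patch moves to the save side where they belong.
    artifacts = torch.compiler.save_cache_artifacts()
    assert artifacts is not None
    artifact_bytes, cache_info = artifacts

    # Persist the opaque bytes anywhere; a local file is used here purely
    # for illustration (a database or object store works the same way).
    with open("compile_cache.bin", "wb") as f:
        f.write(artifact_bytes)

    # Later, in a fresh process: fetch the bytes and pre-populate the
    # caches before the first torch.compile call.
    with open("compile_cache.bin", "rb") as f:
        torch.compiler.load_cache_artifacts(f.read())

The design point the patch encodes: the ``assert``/unpack must run in the saving process, because the loading process only ever sees the serialized ``artifact_bytes``, not the ``artifacts`` tuple.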