
Commit 7e504c1
Update capture API
Parent: ef4e7e6


prototype_source/pt2e_quant_ptq_static.rst

Lines changed: 9 additions & 3 deletions
@@ -53,6 +53,7 @@ The PyTorch 2.0 export quantization API looks like this:
 .. code:: python

   import torch
+  from torch._export import capture_pre_autograd_graph

   class M(torch.nn.Module):
       def __init__(self):
           super().__init__()
@@ -66,7 +67,9 @@ The PyTorch 2.0 export quantization API looks like this:
   m = M().eval()

   # Step 1. program capture
-  m = torch._dynamo.export(m, *example_inputs, aten_graph=True)
+  # NOTE: this API will be updated to the torch.export API in the future, but the captured
+  # result should mostly stay the same
+  m = capture_pre_autograd_graph(m, *example_inputs)
   # we get a model with aten ops

@@ -352,10 +355,13 @@ Here is how you can use ``torch.export`` to export the model:

 .. code-block:: python

-  import torch._dynamo as torchdynamo
+  from torch._export import capture_pre_autograd_graph

   example_inputs = (torch.rand(2, 3, 224, 224),)
-  exported_model, _ = torchdynamo.export(model_to_quantize, *example_inputs, aten_graph=True, tracing_mode="symbolic")
+  exported_model = capture_pre_autograd_graph(model_to_quantize, *example_inputs)
+
+
+``capture_pre_autograd_graph`` is a short-term API; it will be updated to use the official ``torch.export`` API when that is ready.

 Import the Backend Specific Quantizer and Configure how to Quantize the Model
