@@ -53,6 +53,7 @@ The PyTorch 2.0 export quantization API looks like this:

.. code:: python

    import torch
+   from torch._export import capture_pre_autograd_graph

    class M(torch.nn.Module):
        def __init__(self):
            super().__init__()
@@ -66,7 +67,9 @@ The PyTorch 2.0 export quantization API looks like this:

    m = M().eval()

    # Step 1. program capture
-   m = torch._dynamo.export(m, *example_inputs, aten_graph=True)
+   # NOTE: this API will be updated to the torch.export API in the future, but the
+   # captured result should mostly stay the same
+   m = capture_pre_autograd_graph(m, *example_inputs)
    # we get a model with aten ops
@@ -352,10 +355,13 @@ Here is how you can use ``torch.export`` to export the model:

.. code-block:: python

-   import torch._dynamo as torchdynamo
+   from torch._export import capture_pre_autograd_graph

    example_inputs = (torch.rand(2, 3, 224, 224),)
-   exported_model, _ = torchdynamo.export(model_to_quantize, *example_inputs, aten_graph=True, tracing_mode="symbolic")
+   exported_model = capture_pre_autograd_graph(model_to_quantize, *example_inputs)
+
+
+ ``capture_pre_autograd_graph`` is a short-term API; it will be updated to use the official ``torch.export`` API when that is ready.

Import the Backend Specific Quantizer and Configure how to Quantize the Model