Static quantization tutorial missing a step #1235

Closed
@rfejgin

Description

per_channel_quantized_model.qconfig = torch.quantization.get_default_qconfig('fbgemm')

If I understand correctly, one also needs to set
torch.backends.quantized.engine = "fbgemm"

I tried to quantize a model without this step and got strange errors about certain operations not being supported by the FBGEMM backend. They go away once the engine is set.
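For reference, a minimal sketch of the full eager-mode static quantization flow with both steps in place (the toy Sequential model and input shapes here are illustrative, not from the tutorial):

```python
import torch

# The step the tutorial omits: select the quantized-kernel backend so it
# matches the qconfig below. Leaving the engine at a different default can
# produce "operation not supported on the FBGEMM backend" errors at runtime.
torch.backends.quantized.engine = "fbgemm"

# Toy model wrapped in Quant/DeQuant stubs for static quantization.
model = torch.nn.Sequential(
    torch.quantization.QuantStub(),
    torch.nn.Conv2d(3, 8, 3),
    torch.nn.ReLU(),
    torch.quantization.DeQuantStub(),
)
model.eval()

# The step the tutorial does include.
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")

# Standard prepare -> calibrate -> convert sequence.
prepared = torch.quantization.prepare(model)
prepared(torch.randn(1, 3, 32, 32))  # calibration pass with sample data
quantized = torch.quantization.convert(prepared)
```

FBGEMM is the x86 backend; on ARM the analogous pair would use "qnnpack" for both the engine and the qconfig.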

cc @jerryzh168 @jianyuh @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen
