Description
If I understand correctly, one also needs to set the quantized engine explicitly:
torch.backends.quantized.engine = "fbgemm"
I tried to quantize a model without this step and got confusing errors about certain operations not being supported on the FBGEMM backend. They go away once the engine is set.
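For context, here is a minimal sketch of eager-mode static quantization with the engine set alongside the qconfig. The tiny `M` module is hypothetical, purely for illustration; the point is that both the qconfig and `torch.backends.quantized.engine` mention "fbgemm":

```python
import torch
import torch.nn as nn

# Hypothetical toy model, just to illustrate the workflow.
class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.ao.quantization.QuantStub()
        self.fc = nn.Linear(4, 4)
        self.dequant = torch.ao.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.fc(self.quant(x)))

model = M().eval()

# The qconfig selects fbgemm-style observers, and the engine setting
# tells the runtime which backend kernels to dispatch to. Omitting the
# second line is what triggered the "not supported on FBGEMM" errors.
model.qconfig = torch.ao.quantization.get_default_qconfig("fbgemm")
torch.backends.quantized.engine = "fbgemm"

prepared = torch.ao.quantization.prepare(model)
prepared(torch.randn(2, 4))   # calibrate with sample data
quantized = torch.ao.quantization.convert(prepared)
quantized(torch.randn(2, 4))  # inference now runs on the fbgemm backend
```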
cc @jerryzh168 @jianyuh @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen