# Quantization
Quantization focuses on representing data with fewer bits while also trying to preserve the precision of the original data. This often means converting a data type to represent the same information with fewer bits. For example, if your model weights are stored as 32-bit floating points and they're quantized to 16-bit floating points, this halves the model size, which makes it easier to store and reduces memory usage. Lower precision can also speed up inference because it takes less time to perform calculations with fewer bits.
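
As a rough example, a transformer with about 12 billion parameters (approximately the size of the Flux transformer) needs around 12B × 4 bytes ≈ 48GB of memory for its weights in float32, about 24GB in bfloat16, roughly 12GB with 8-bit quantization, and roughly 6GB with 4-bit quantization.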
Diffusers supports multiple quantization backends to make large diffusion models like [Flux](../api/pipelines/flux) more accessible. This guide shows how to use the [`~quantizers.PipelineQuantizationConfig`] class to quantize a pipeline during its initialization from a pretrained or non-quantized checkpoint.
## Pipeline-level quantization
There are two ways you can use [`~quantizers.PipelineQuantizationConfig`] depending on the level of control you want over the quantization specifications of each model in the pipeline.
- for basic and simple use cases, you only need to define `quant_backend`, `quant_kwargs`, and `components_to_quantize`
- for more granular control, provide a `quant_mapping` with the quantization specifications for each individual model component
### Simple quantization
Initialize [`~quantizers.PipelineQuantizationConfig`] with the following parameters.
- `quant_backend` specifies which quantization backend to use. Currently supported backends include: `bitsandbytes_4bit`, `bitsandbytes_8bit`, `gguf`, `quanto`, and `torchao`.
- `quant_kwargs` contains the specific quantization arguments to use.
- `components_to_quantize` specifies which components of the pipeline to quantize. Typically, you should quantize the most compute-intensive components like the transformer. The text encoder is another component to consider quantizing if a pipeline has more than one, such as [`FluxPipeline`]. The example below quantizes the T5 text encoder in [`FluxPipeline`] while keeping the CLIP model intact.
```py
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig
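
# A sketch of the config; the values below are assumptions. quant_kwargs are
# forwarded to the chosen backend's config class, here bitsandbytes 4-bit (NF4)
# quantization for the transformer and the T5 text encoder (`text_encoder_2`).
pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={
        "load_in_4bit": True,
        "bnb_4bit_quant_type": "nf4",
        "bnb_4bit_compute_dtype": torch.bfloat16,
    },
    components_to_quantize=["transformer", "text_encoder_2"],
)
```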
Pass the `pipeline_quant_config` to [`~DiffusionPipeline.from_pretrained`] to quantize the pipeline.
```py
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe("photo of a cute dog").images[0]
```
### quant_mapping
The `quant_mapping` argument provides more flexible options for how to quantize each individual component in a pipeline, like combining different quantization backends.
Initialize [`~quantizers.PipelineQuantizationConfig`] and pass a `quant_mapping` to it. The `quant_mapping` allows you to specify the quantization options for each component in the pipeline, such as the transformer and text encoder.
The example below uses two quantization backends, [`~quantizers.QuantoConfig`] and [`transformers.BitsAndBytesConfig`], for the transformer and text encoder.
```py
import torch
from diffusers import DiffusionPipeline
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig
from diffusers.quantizers.quantization_config import QuantoConfig
from diffusers.quantizers import PipelineQuantizationConfig
from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig
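
# A sketch of a quant_mapping; the values below are assumptions. Quanto int8
# weights are used for the transformer and a Transformers bitsandbytes 4-bit
# config for the T5 text encoder. Pass the config to `from_pretrained` as above.
pipeline_quant_config = PipelineQuantizationConfig(
    quant_mapping={
        "transformer": QuantoConfig(weights_dtype="int8"),
        "text_encoder_2": TransformersBitsAndBytesConfig(
            load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16
        ),
    }
)
```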
There is a separate bitsandbytes backend in [Transformers](https://huggingface.co/docs/transformers/main_classes/quantization#transformers.BitsAndBytesConfig). You need to import and use [`transformers.BitsAndBytesConfig`] for components that come from Transformers. For example, `text_encoder_2` in [`FluxPipeline`] is a [`~transformers.T5EncoderModel`] from Transformers, so you need to use [`transformers.BitsAndBytesConfig`] instead of [`diffusers.BitsAndBytesConfig`].
> [!TIP]
> Use the [simple quantization](#simple-quantization) method above if you don't want to manage these distinct imports or aren't sure where each pipeline component comes from.
```py
import torch
from diffusers import DiffusionPipeline
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig
from diffusers.quantizers import PipelineQuantizationConfig
from transformers import BitsAndBytesConfig as TransformersBitsAndBytesConfig
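
# A sketch combining both bitsandbytes backends; the values below are assumptions.
# The Diffusers config quantizes the transformer and the Transformers config
# quantizes the T5 text encoder, matching the library each component comes from.
pipeline_quant_config = PipelineQuantizationConfig(
    quant_mapping={
        "transformer": DiffusersBitsAndBytesConfig(load_in_4bit=True),
        "text_encoder_2": TransformersBitsAndBytesConfig(load_in_4bit=True),
    }
)
```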
- If you are new to quantization, we recommend checking out the following beginner-friendly courses in collaboration with DeepLearning.AI.
  - [Quantization Fundamentals with Hugging Face](https://www.deeplearning.ai/short-courses/quantization-fundamentals-with-hugging-face/)
  - [Quantization in Depth](https://www.deeplearning.ai/short-courses/quantization-in-depth/)
- Refer to the [Contribute new quantization method guide](https://huggingface.co/docs/transformers/main/en/quantization/contribute) if you're interested in adding a new quantization method.
- The Transformers quantization [Overview](https://huggingface.co/docs/transformers/quantization/overview#when-to-use-what) summarizes the pros and cons of different quantization backends.
- Read the [Exploring Quantization Backends in Diffusers](https://huggingface.co/blog/diffusers-quantization) blog post for a brief introduction to each quantization backend, how to choose one, and how to combine quantization with other memory optimizations.