[docs] Model cards #11112
# limitations under the License.
-->

<div style="float: right;">
  <div class="flex flex-wrap space-x-1">
    <a href="https://huggingface.co/docs/diffusers/main/en/tutorials/using_peft_for_inference" target="_blank" rel="noopener">
      <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
    </a>
  </div>
</div>

# CogVideoX

[CogVideoX](https://huggingface.co/papers/2408.06072) is a large diffusion transformer model - available in 2B and 5B parameters - designed to generate longer and more consistent videos from text. This model uses a 3D causal variational autoencoder to more efficiently process video data by reducing sequence length (and associated training compute) and preventing flickering in generated videos. An "expert" transformer with adaptive LayerNorm improves alignment between text and video, and 3D full attention helps accurately capture motion and time in generated videos.
> **Review thread:**
> This is okay, but I would perhaps tackle the removal of the abstract section in a separate PR. Also, this adds the additional overhead of coming up with a description for the paper. I would like to avoid that for now.
>
> I think it'd be good to also tackle this now, since for the new pipeline cards we want to have a nice and complete example of what they should look like, no? Good point that adding a description of the paper adds overhead, but I think it's necessary, since we want to give users a version of the abstract that is more accessible (using common everyday language) rather than academic (inspired by @asomoza's comment here).
>
> I am a bit spread thin on this one. So, I will go with what the team prefers.

You can find all the original CogVideoX checkpoints under the [CogVideoX](https://huggingface.co/collections/THUDM/cogvideo-66c08e62f1685a3ade464cce) collection.

> [!TIP]
> Click on the CogVideoX models in the right sidebar for more examples of other video generation tasks.

The example below demonstrates how to generate a video optimized for memory or inference speed.

<hfoptions id="usage">
<hfoption id="memory">

Refer to the [Reduce memory usage](../../optimization/memory) guide for more details about the various memory saving techniques.

The quantized CogVideoX 5B model below requires ~16GB of VRAM.
```py
import torch
from diffusers import CogVideoXPipeline, AutoModel
from diffusers.quantizers import PipelineQuantizationConfig
from diffusers.utils import export_to_video

# quantize the transformer weights to int8 with torchao
pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="torchao",
    quant_kwargs={"quant_type": "int8wo"},
    components_to_quantize=["transformer"]
)

# fp8 layerwise weight-casting stores weights in float8 and upcasts to bfloat16 for compute
transformer = AutoModel.from_pretrained(
    "THUDM/CogVideoX-5b",
    subfolder="transformer",
    torch_dtype=torch.bfloat16
)
transformer.enable_layerwise_casting(
    storage_dtype=torch.float8_e4m3fn, compute_dtype=torch.bfloat16
)

pipeline = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b",
    transformer=transformer,
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16
)
pipeline.to("cuda")

# offload the model to the CPU when its components are not in use
pipeline.enable_model_cpu_offload()

prompt = """
A detailed wooden toy ship with intricately carved masts and sails is seen gliding smoothly over a plush, blue carpet that mimics the waves of the sea.
The ship's hull is painted a rich brown, with tiny windows. The carpet, soft and textured, provides a perfect backdrop, resembling an oceanic expanse.
Surrounding the ship are various other toys and children's items, hinting at a playful environment. The scene captures the innocence and imagination of childhood,
with the toy ship's journey symbolizing endless adventures in a whimsical, indoor setting.
"""

video = pipeline(
    prompt=prompt,
    guidance_scale=6,
    num_inference_steps=50
).frames[0]
export_to_video(video, "output.mp4", fps=8)
```
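
If model CPU offloading still doesn't fit your GPU, group offloading swaps smaller groups of layers between the CPU and GPU. The sketch below is a minimal, self-contained variant of the example above; it assumes the `apply_group_offloading` helper and the per-module `enable_group_offload` method available in recent diffusers releases, so check the memory guide linked above for the exact arguments.

```py
import torch
from diffusers import CogVideoXPipeline
from diffusers.hooks import apply_group_offloading
from diffusers.utils import export_to_video

pipeline = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b",
    torch_dtype=torch.bfloat16
)

onload_device = torch.device("cuda")
offload_device = torch.device("cpu")

# stream groups of transformer layers on and off the GPU instead of keeping the whole model resident
pipeline.transformer.enable_group_offload(
    onload_device=onload_device,
    offload_device=offload_device,
    offload_type="leaf_level",
    use_stream=True
)
# the helper works on any torch.nn.Module, e.g. the text encoder and VAE
apply_group_offloading(pipeline.text_encoder, onload_device=onload_device, offload_type="block_level", num_blocks_per_group=2)
apply_group_offloading(pipeline.vae, onload_device=onload_device, offload_type="leaf_level")

prompt = "A detailed wooden toy ship gliding over a plush, blue carpet that mimics ocean waves."
video = pipeline(prompt=prompt, guidance_scale=6, num_inference_steps=50).frames[0]
export_to_video(video, "output.mp4", fps=8)
```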

</hfoption>
<hfoption id="inference speed">

[Compilation](../../optimization/fp16#torchcompile) is slow the first time but subsequent calls to the pipeline are faster.

The average inference time with torch.compile on an 80GB A100 is 76.27 seconds, compared to 96.89 seconds for an uncompiled model.
```py
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipeline = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-2b",
    torch_dtype=torch.float16
).to("cuda")

# compile the transformer with torch.compile
pipeline.transformer.to(memory_format=torch.channels_last)
pipeline.transformer = torch.compile(
    pipeline.transformer, mode="max-autotune", fullgraph=True
)

prompt = """
A detailed wooden toy ship with intricately carved masts and sails is seen gliding smoothly over a plush, blue carpet that mimics the waves of the sea.
The ship's hull is painted a rich brown, with tiny windows. The carpet, soft and textured, provides a perfect backdrop, resembling an oceanic expanse.
Surrounding the ship are various other toys and children's items, hinting at a playful environment. The scene captures the innocence and imagination of childhood,
with the toy ship's journey symbolizing endless adventures in a whimsical, indoor setting.
"""

video = pipeline(
    prompt=prompt,
    guidance_scale=6,
    num_inference_steps=50
).frames[0]
export_to_video(video, "output.mp4", fps=8)
```
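
The quoted timings depend on hardware and settings. If you want a rough measurement on your own machine, a simple timing harness like the hypothetical sketch below (continuing from the `pipeline` and `prompt` defined above, and not the benchmark used for the figures quoted earlier) is one way to do it.

```py
import time
import torch

# the first call triggers compilation, so run it once before timing
_ = pipeline(prompt=prompt, guidance_scale=6, num_inference_steps=50)

torch.cuda.synchronize()
start = time.perf_counter()
video = pipeline(prompt=prompt, guidance_scale=6, num_inference_steps=50).frames[0]
torch.cuda.synchronize()
print(f"inference time: {time.perf_counter() - start:.2f} seconds")
```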

</hfoption>
</hfoptions>

## Notes

- CogVideoX supports LoRAs with [`~loaders.CogVideoXLoraLoaderMixin.load_lora_weights`].
> **Review thread:**
> Do we need this separate note besides having the LoRA marker button at the top of the page?
>
> I think it'd be nice to have an easy copy/paste example for users who want to use this specific model, so I will fold it under a collapsible section as suggested. I also added a link to the LoRA marker button at the top :)

<details>
<summary>Show example code</summary>

```py
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipeline = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b",
    torch_dtype=torch.bfloat16
)
pipeline.to("cuda")

# load LoRA weights
pipeline.load_lora_weights("finetrainers/CogVideoX-1.5-crush-smol-v0", adapter_name="crush-lora")
pipeline.set_adapters("crush-lora", 0.9)

# offload the model to the CPU when its components are not in use
pipeline.enable_model_cpu_offload()

prompt = """
PIKA_CRUSH A large metal cylinder is seen pressing down on a pile of Oreo cookies, flattening them as if they were under a hydraulic press.
"""
negative_prompt = "inconsistent motion, blurry motion, worse quality, degenerate outputs, deformed outputs"

video = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_frames=81,
    height=480,
    width=768,
    num_inference_steps=50
).frames[0]
export_to_video(video, "output.mp4", fps=16)
```

</details>

- The text-to-video (T2V) checkpoints work best with a resolution of 1360x768 because that is the resolution they were pretrained on.

- The image-to-video (I2V) checkpoints work with multiple resolutions. The width can vary from 768 to 1360, but the height must be 768. Both height and width must be divisible by 16. A collapsed example is included below.
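
A minimal image-to-video sketch under these constraints is collapsed below. The checkpoint name is taken from the original checkpoint table, but the input image URL, prompt, and parameter values are illustrative assumptions rather than settings from this page.

<details>
<summary>Show example code (image-to-video)</summary>

```py
import torch
from diffusers import CogVideoXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipeline = CogVideoXImageToVideoPipeline.from_pretrained(
    "THUDM/CogVideoX-1.5-5b-I2V",
    torch_dtype=torch.bfloat16
)
pipeline.enable_model_cpu_offload()

# any RGB image works as the conditioning frame; this URL is only a placeholder
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/astronaut.jpg"
)
prompt = "An astronaut slowly waves at the camera while drifting through space."

video = pipeline(
    image=image,
    prompt=prompt,
    height=768,    # height must be 768
    width=1360,    # width can range from 768 to 1360 and must be divisible by 16
    num_frames=81,
    guidance_scale=6,
    num_inference_steps=50
).frames[0]
export_to_video(video, "output.mp4", fps=16)
```

</details>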

- Both T2V and I2V checkpoints work best with 81 and 161 frames. It is recommended to export the generated video at 16fps. The collapsed example below shows these settings together with the recommended T2V resolution.
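
As a rough illustration of these recommendations, the collapsed sketch below calls a CogVideoX 1.5 text-to-video checkpoint at 1360x768 with 81 frames and exports at 16fps. The checkpoint name is taken from the original checkpoint table and the prompt is adapted from the earlier examples; treat both as illustrative assumptions.

<details>
<summary>Show example code (recommended T2V settings)</summary>

```py
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipeline = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX1.5-5b",
    torch_dtype=torch.bfloat16
)
pipeline.enable_model_cpu_offload()

prompt = "A panda, dressed in a small, red jacket and a tiny hat, sits on a wooden stool in a serene bamboo forest, strumming a miniature acoustic guitar."

video = pipeline(
    prompt=prompt,
    height=768,     # T2V works best at 1360x768
    width=1360,
    num_frames=81,  # 81 or 161 frames are recommended
    guidance_scale=6,
    num_inference_steps=50
).frames[0]
export_to_video(video, "output.mp4", fps=16)  # export at 16fps
```

</details>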

- Refer to the table below to view memory usage when various memory-saving techniques are enabled, and see the sketch after the table for how to enable them.

| method | memory usage (enabled) | memory usage (disabled) |
|---|---|---|
| enable_model_cpu_offload | 19GB | 33GB |
| enable_sequential_cpu_offload | <4GB | ~33GB (very slow inference speed) |
| enable_tiling | 11GB (with enable_model_cpu_offload) | --- |
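
Each technique in the table is enabled with a one-line call on the pipeline or its VAE. The sketch below is a minimal illustration of those calls (the memory numbers above come from a separate benchmark, not from this snippet).

```py
import torch
from diffusers import CogVideoXPipeline

pipeline = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-2b",
    torch_dtype=torch.float16
)

# lowest memory usage, but inference is significantly slower
pipeline.enable_sequential_cpu_offload()

# decode the latents in tiles and slices to reduce peak VAE memory
pipeline.vae.enable_tiling()
pipeline.vae.enable_slicing()

video = pipeline(
    prompt="A detailed wooden toy ship gliding over a plush, blue carpet that mimics ocean waves.",
    guidance_scale=6,
    num_inference_steps=50
).frames[0]
```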

## CogVideoXPipeline

[[autodoc]] CogVideoXPipeline
|
Uh oh!
There was an error while loading. Please reload this page.