Commit 8b29764

sync head (#5)
* Raise warning and round down if Wan num_frames is not 4k + 1 (huggingface#11167)
* [Docs] Fix environment variables in `installation.md` (huggingface#11179)
* Add `latents_mean` and `latents_std` to `SDXLLongPromptWeightingPipeline` (huggingface#11034)
* Bug fix in LTXImageToVideoPipeline.prepare_latents() when latents is already set: assume packed latents (huggingface#10918)
* [tests] no hard-coded cuda (huggingface#11186)
* [WIP] Add Wan Video2Video (huggingface#11053)
* map BACKEND_RESET_MAX_MEMORY_ALLOCATED to reset_peak_memory_stats on XPU (huggingface#11191)
* fix autocast (huggingface#11190)
* fix: checking mandatory and optional pipeline components on load (huggingface#11189)
* remove unnecessary call to `F.pad` and use built-in padding in Conv3D (huggingface#10620)
* allow models to run with a user-provided dtype map instead of a single dtype (huggingface#10301)
* [tests] HunyuanDiTControlNetPipeline inference precision issue on XPU (huggingface#11197)
* Revert `save_model` in ModelMixin save_pretrained and use safe_serialization=False in test (huggingface#11196)
* [docs] `torch_dtype` map (huggingface#11194)
* Fix enable_sequential_cpu_offload in CogView4Pipeline (huggingface#11195)
* SchedulerMixin from_pretrained and ConfigMixin Self type annotation (huggingface#11192)
* Update import_utils.py: added onnxruntime-vitisai for custom-built onnxruntime packages (huggingface#10329)
* Add CacheMixin to Wan and LTX Transformers (huggingface#11187)
* feat: [Community Pipeline] - FaithDiff Stable Diffusion XL Pipeline for image super-resolution (huggingface#11188)
* [Model Card] standardize advanced diffusion training sdxl lora (huggingface#7615)
* Change KolorsPipeline LoRA loader to StableDiffusion: replace the SDXL LoRA Loader Mixin inheritance with the StableDiffusion one (huggingface#11198)
* Update Style Bot workflow (huggingface#11202)

Signed-off-by: YAO Matrix <matrix.yao@intel.com>
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: Mark <remarkablemark@users.noreply.github.com>
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: kakukakujirori <63725741+kakukakujirori@users.noreply.github.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Fanli Lin <fanli.lin@intel.com>
Co-authored-by: Yao Matrix <matrix.yao@intel.com>
Co-authored-by: jiqing-feng <jiqing.feng@intel.com>
Co-authored-by: Eliseu Silva <elismasilva@gmail.com>
Co-authored-by: Bruno Magalhaes <bruno.magalhaes@synthesia.io>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: lakshay sharma <31830611+Lakshaysharma048@users.noreply.github.com>
Co-authored-by: Abhipsha Das <ad6489@nyu.edu>
Co-authored-by: Basile Lewandowski <basile.lewan@gmail.com>
Co-authored-by: célina <hanouticelina@gmail.com>
1 parent 6dd087f commit 8b29764

34 files changed: +3501, -116 lines

.github/workflows/pr_style_bot.yml

Lines changed: 0 additions & 34 deletions
@@ -13,39 +13,5 @@ jobs:
     uses: huggingface/huggingface_hub/.github/workflows/style-bot-action.yml@main
     with:
       python_quality_dependencies: "[quality]"
-      pre_commit_script_name: "Download and Compare files from the main branch"
-      pre_commit_script: |
-        echo "Downloading the files from the main branch"
-
-        curl -o main_Makefile https://raw.githubusercontent.com/huggingface/diffusers/main/Makefile
-        curl -o main_setup.py https://raw.githubusercontent.com/huggingface/diffusers/refs/heads/main/setup.py
-        curl -o main_check_doc_toc.py https://raw.githubusercontent.com/huggingface/diffusers/refs/heads/main/utils/check_doc_toc.py
-
-        echo "Compare the files and raise error if needed"
-
-        diff_failed=0
-        if ! diff -q main_Makefile Makefile; then
-          echo "Error: The Makefile has changed. Please ensure it matches the main branch."
-          diff_failed=1
-        fi
-
-        if ! diff -q main_setup.py setup.py; then
-          echo "Error: The setup.py has changed. Please ensure it matches the main branch."
-          diff_failed=1
-        fi
-
-        if ! diff -q main_check_doc_toc.py utils/check_doc_toc.py; then
-          echo "Error: The utils/check_doc_toc.py has changed. Please ensure it matches the main branch."
-          diff_failed=1
-        fi
-
-        if [ $diff_failed -eq 1 ]; then
-          echo "❌ Error happened as we detected changes in the files that should not be changed ❌"
-          exit 1
-        fi
-
-        echo "No changes in the files. Proceeding..."
-        rm -rf main_Makefile main_setup.py main_check_doc_toc.py
-      style_command: "make style && make quality"
     secrets:
       bot_token: ${{ secrets.GITHUB_TOKEN }}

docs/source/en/api/pipelines/wan.md

Lines changed: 44 additions & 4 deletions
@@ -133,6 +133,46 @@ output = pipe(
 export_to_video(output, "wan-i2v.mp4", fps=16)
 ```
 
+### Video to Video Generation
+
+```python
+import torch
+from diffusers.utils import load_video, export_to_video
+from diffusers import AutoencoderKLWan, WanVideoToVideoPipeline, UniPCMultistepScheduler
+
+# Available models: Wan-AI/Wan2.1-T2V-14B-Diffusers, Wan-AI/Wan2.1-T2V-1.3B-Diffusers
+model_id = "Wan-AI/Wan2.1-T2V-1.3B-Diffusers"
+vae = AutoencoderKLWan.from_pretrained(
+    model_id, subfolder="vae", torch_dtype=torch.float32
+)
+pipe = WanVideoToVideoPipeline.from_pretrained(
+    model_id, vae=vae, torch_dtype=torch.bfloat16
+)
+flow_shift = 3.0  # 5.0 for 720P, 3.0 for 480P
+pipe.scheduler = UniPCMultistepScheduler.from_config(
+    pipe.scheduler.config, flow_shift=flow_shift
+)
+# change to pipe.to("cuda") if you have sufficient VRAM
+pipe.enable_model_cpu_offload()
+
+prompt = "A robot standing on a mountain top. The sun is setting in the background"
+negative_prompt = "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards"
+video = load_video(
+    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/hiker.mp4"
+)
+output = pipe(
+    video=video,
+    prompt=prompt,
+    negative_prompt=negative_prompt,
+    height=480,
+    width=512,
+    guidance_scale=7.0,
+    strength=0.7,
+).frames[0]
+
+export_to_video(output, "wan-v2v.mp4", fps=16)
+```
+
 ## Memory Optimizations for Wan 2.1
 
 Base inference with the large 14B Wan 2.1 models can take up to 35GB of VRAM when generating videos at 720p resolution. We'll outline a few memory optimizations we can apply to reduce the VRAM required to run the model.

@@ -323,7 +363,7 @@ import numpy as np
 from diffusers import AutoencoderKLWan, WanTransformer3DModel, WanImageToVideoPipeline
 from diffusers.hooks.group_offloading import apply_group_offloading
 from diffusers.utils import export_to_video, load_image
-from transformers import UMT5EncoderModel, CLIPVisionMode
+from transformers import UMT5EncoderModel, CLIPVisionModel
 
 model_id = "Wan-AI/Wan2.1-I2V-14B-720P-Diffusers"
 image_encoder = CLIPVisionModel.from_pretrained(

@@ -356,7 +396,7 @@ prompt = (
     "An astronaut hatching from an egg, on the surface of the moon, the darkness and depth of space realised in "
     "the background. High quality, ultrarealistic detail and breath-taking movie-like camera shot."
 )
-negative_prompt = "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards
+negative_prompt = "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards"
 num_frames = 33
 
 output = pipe(

@@ -372,7 +412,7 @@ output = pipe(
 export_to_video(output, "wan-i2v.mp4", fps=16)
 ```
 
-### Using a Custom Scheduler
+## Using a Custom Scheduler
 
 Wan can be used with many different schedulers, each with their own benefits regarding speed and generation quality. By default, Wan uses the `UniPCMultistepScheduler(prediction_type="flow_prediction", use_flow_sigmas=True, flow_shift=3.0)` scheduler. You can use a different scheduler as follows:
 
@@ -403,7 +443,7 @@ transformer = WanTransformer3DModel.from_single_file(ckpt_path, torch_dtype=torc
 pipe = WanPipeline.from_pretrained("Wan-AI/Wan2.1-T2V-1.3B-Diffusers", transformer=transformer)
 ```
 
-## Recommendations for Inference:
+## Recommendations for Inference
 - Keep `AutencoderKLWan` in `torch.float32` for better decoding quality.
 - `num_frames` should satisfy the following constraint: `(num_frames - 1) % 4 == 0`
 - For smaller resolution videos, try lower values of `shift` (between `2.0` to `5.0`) in the [Scheduler](https://huggingface.co/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler.shift). For larger resolution videos, try higher values (between `7.0` and `12.0`). The default value is `3.0` for Wan.
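Editor's note: the `num_frames` recommendation above ties into huggingface#11167 in this sync, which makes the Wan pipelines warn and round down when the requested frame count is not of the form `4k + 1`. A minimal sketch of that rule, using a hypothetical standalone helper rather than the pipeline's internal code:

```python
# Hypothetical helper illustrating the (num_frames - 1) % 4 == 0 rule; the actual
# pipeline performs the warning and adjustment internally (huggingface#11167).
def snap_num_frames(num_frames: int, scale_factor: int = 4) -> int:
    """Round num_frames down to the nearest value satisfying (num_frames - 1) % scale_factor == 0."""
    if (num_frames - 1) % scale_factor != 0:
        adjusted = (num_frames - 1) // scale_factor * scale_factor + 1
        print(f"num_frames={num_frames} is not of the form {scale_factor}k + 1; using {adjusted} instead")
        return adjusted
    return num_frames

print(snap_num_frames(81))  # 81 is already valid (4 * 20 + 1)
print(snap_num_frames(80))  # rounded down to 77
```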

docs/source/en/installation.md

Lines changed: 7 additions & 5 deletions
@@ -161,10 +161,10 @@ Your Python environment will find the `main` version of 🤗 Diffusers on the ne
 
 Model weights and files are downloaded from the Hub to a cache which is usually your home directory. You can change the cache location by specifying the `HF_HOME` or `HUGGINFACE_HUB_CACHE` environment variables or configuring the `cache_dir` parameter in methods like [`~DiffusionPipeline.from_pretrained`].
 
-Cached files allow you to run 🤗 Diffusers offline. To prevent 🤗 Diffusers from connecting to the internet, set the `HF_HUB_OFFLINE` environment variable to `True` and 🤗 Diffusers will only load previously downloaded files in the cache.
+Cached files allow you to run 🤗 Diffusers offline. To prevent 🤗 Diffusers from connecting to the internet, set the `HF_HUB_OFFLINE` environment variable to `1` and 🤗 Diffusers will only load previously downloaded files in the cache.
 
 ```shell
-export HF_HUB_OFFLINE=True
+export HF_HUB_OFFLINE=1
 ```
 
 For more details about managing and cleaning the cache, take a look at the [caching](https://huggingface.co/docs/huggingface_hub/guides/manage-cache) guide.

@@ -179,14 +179,16 @@ Telemetry is only sent when loading models and pipelines from the Hub,
 and it is not collected if you're loading local files.
 
 We understand that not everyone wants to share additional information,and we respect your privacy.
-You can disable telemetry collection by setting the `DISABLE_TELEMETRY` environment variable from your terminal:
+You can disable telemetry collection by setting the `HF_HUB_DISABLE_TELEMETRY` environment variable from your terminal:
 
 On Linux/MacOS:
+
 ```bash
-export DISABLE_TELEMETRY=YES
+export HF_HUB_DISABLE_TELEMETRY=1
 ```
 
 On Windows:
+
 ```bash
-set DISABLE_TELEMETRY=YES
+set HF_HUB_DISABLE_TELEMETRY=1
 ```
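Editor's note: the two corrected variables above (`HF_HUB_OFFLINE` and `HF_HUB_DISABLE_TELEMETRY`) can also be set from Python before anything is loaded from the Hub. This is an illustrative sketch, not part of the documented change; the model id is just one that already appears in this commit:

```python
import os

# Equivalent to `export HF_HUB_OFFLINE=1` and `export HF_HUB_DISABLE_TELEMETRY=1`;
# set these before importing diffusers so the Hub client picks them up.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["HF_HUB_DISABLE_TELEMETRY"] = "1"

from diffusers import DiffusionPipeline

# With HF_HUB_OFFLINE=1 this only succeeds if the checkpoint is already in the local cache.
pipe = DiffusionPipeline.from_pretrained("Wan-AI/Wan2.1-T2V-1.3B-Diffusers")
```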

docs/source/en/using-diffusers/loading.md

Lines changed: 17 additions & 0 deletions
@@ -95,6 +95,23 @@ Use the Space below to gauge a pipeline's memory requirements before you downloa
 ></iframe>
 </div>
 
+### Specifying Component-Specific Data Types
+
+You can customize the data types for individual sub-models by passing a dictionary to the `torch_dtype` parameter. This allows you to load different components of a pipeline in different floating point precisions. For instance, if you want to load the transformer with `torch.bfloat16` and all other components with `torch.float16`, you can pass a dictionary mapping:
+
+```python
+from diffusers import HunyuanVideoPipeline
+import torch
+
+pipe = HunyuanVideoPipeline.from_pretrained(
+    "hunyuanvideo-community/HunyuanVideo",
+    torch_dtype={'transformer': torch.bfloat16, 'default': torch.float16},
+)
+print(pipe.transformer.dtype, pipe.vae.dtype)  # (torch.bfloat16, torch.float16)
+```
+
+If a component is not explicitly specified in the dictionary and no `default` is provided, it will be loaded with `torch.float32`.
+
 ### Local pipeline
 
 To load a pipeline locally, use [git-lfs](https://git-lfs.github.com/) to manually download a checkpoint to your local disk.
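Editor's note: the fallback described in the last added sentence above can be illustrated with a small variation of the documented example. This is an assumed sketch (not text from the diff), and the printed dtypes reflect the behavior the docs describe:

```python
import torch
from diffusers import HunyuanVideoPipeline

# Same checkpoint as in the diff, but with no 'default' entry in the mapping:
# per the added note, components not listed should fall back to torch.float32.
pipe = HunyuanVideoPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo",
    torch_dtype={"transformer": torch.bfloat16},
)
print(pipe.transformer.dtype)  # torch.bfloat16
print(pipe.vae.dtype)          # torch.float32 (no 'default' provided)
```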

examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py

Lines changed: 32 additions & 33 deletions
@@ -71,6 +71,7 @@
     convert_unet_state_dict_to_peft,
     is_wandb_available,
 )
+from diffusers.utils.hub_utils import load_or_create_model_card, populate_model_card
 from diffusers.utils.import_utils import is_xformers_available
 from diffusers.utils.torch_utils import is_compiled_module
 

@@ -101,7 +102,7 @@ def determine_scheduler_type(pretrained_model_name_or_path, revision):
 def save_model_card(
     repo_id: str,
     use_dora: bool,
-    images=None,
+    images: list = None,
     base_model: str = None,
     train_text_encoder=False,
     train_text_encoder_ti=False,

@@ -111,20 +112,17 @@ def save_model_card(
     repo_folder=None,
     vae_path=None,
 ):
-    img_str = "widget:\n"
     lora = "lora" if not use_dora else "dora"
-    for i, image in enumerate(images):
-        image.save(os.path.join(repo_folder, f"image_{i}.png"))
-        img_str += f"""
-        - text: '{validation_prompt if validation_prompt else ' ' }'
-          output:
-            url:
-                "image_{i}.png"
-        """
-    if not images:
-        img_str += f"""
-        - text: '{instance_prompt}'
-        """
+
+    widget_dict = []
+    if images is not None:
+        for i, image in enumerate(images):
+            image.save(os.path.join(repo_folder, f"image_{i}.png"))
+            widget_dict.append(
+                {"text": validation_prompt if validation_prompt else " ", "output": {"url": f"image_{i}.png"}}
+            )
+    else:
+        widget_dict.append({"text": instance_prompt})
     embeddings_filename = f"{repo_folder}_emb"
     instance_prompt_webui = re.sub(r"<s\d+>", "", re.sub(r"<s\d+>", embeddings_filename, instance_prompt, count=1))
     ti_keys = ", ".join(f'"{match}"' for match in re.findall(r"<s\d+>", instance_prompt))

@@ -169,23 +167,7 @@ def save_model_card(
 to trigger concept `{key}` → use `{tokens}` in your prompt \n
 """
 
-    yaml = f"""---
-tags:
-- stable-diffusion-xl
-- stable-diffusion-xl-diffusers
-- diffusers-training
-- text-to-image
-- diffusers
-- {lora}
-- template:sd-lora
-{img_str}
-base_model: {base_model}
-instance_prompt: {instance_prompt}
-license: openrail++
----
-"""
-
-    model_card = f"""
+    model_description = f"""
 # SDXL LoRA DreamBooth - {repo_id}
 
 <Gallery />

@@ -234,8 +216,25 @@ def save_model_card(
 
 {license}
 """
-    with open(os.path.join(repo_folder, "README.md"), "w") as f:
-        f.write(yaml + model_card)
+    model_card = load_or_create_model_card(
+        repo_id_or_path=repo_id,
+        from_training=True,
+        license="openrail++",
+        base_model=base_model,
+        prompt=instance_prompt,
+        model_description=model_description,
+        widget=widget_dict,
+    )
+    tags = [
+        "text-to-image",
+        "stable-diffusion-xl",
+        "stable-diffusion-xl-diffusers",
+        "text-to-image",
+        "diffusers",
+        lora,
+        "template:sd-lora",
+    ]
+    model_card = populate_model_card(model_card, tags=tags)
 
 
 def log_validation(
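Editor's note: for context on the refactor above, here is a rough standalone sketch of how the two helpers imported from `diffusers.utils.hub_utils` fit together outside the training script. The repo id, prompt, base model, and description are placeholders, and the arguments mirror the ones used in the diff:

```python
from diffusers.utils.hub_utils import load_or_create_model_card, populate_model_card

# Placeholder values standing in for the training script's arguments.
widget_dict = [{"text": "a photo of sks dog", "output": {"url": "image_0.png"}}]

model_card = load_or_create_model_card(
    repo_id_or_path="your-username/sdxl-lora-dreambooth",  # placeholder repo id
    from_training=True,
    license="openrail++",
    base_model="stabilityai/stable-diffusion-xl-base-1.0",
    prompt="a photo of sks dog",
    model_description="# SDXL LoRA DreamBooth - example",
    widget=widget_dict,
)
model_card = populate_model_card(
    model_card, tags=["text-to-image", "diffusers", "lora", "template:sd-lora"]
)
# The returned object is a huggingface_hub ModelCard, so it can be written to disk.
model_card.save("README.md")
```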

examples/community/README.md

Lines changed: 101 additions & 1 deletion
@@ -85,7 +85,7 @@ PIXART-α Controlnet pipeline | Implementation of the controlnet model for pixar
 | Stable Diffusion XL Attentive Eraser Pipeline |[[AAAI2025 Oral] Attentive Eraser](https://github.com/Anonym0u3/AttentiveEraser) is a novel tuning-free method that enhances object removal capabilities in pre-trained diffusion models.|[Stable Diffusion XL Attentive Eraser Pipeline](#stable-diffusion-xl-attentive-eraser-pipeline)|-|[Wenhao Sun](https://github.com/Anonym0u3) and [Benlei Cui](https://github.com/Benny079)|
 | Perturbed-Attention Guidance |StableDiffusionPAGPipeline is a modification of StableDiffusionPipeline to support Perturbed-Attention Guidance (PAG).|[Perturbed-Attention Guidance](#perturbed-attention-guidance)|[Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/perturbed_attention_guidance.ipynb)|[Hyoungwon Cho](https://github.com/HyoungwonCho)|
 | CogVideoX DDIM Inversion Pipeline | Implementation of DDIM inversion and guided attention-based editing denoising process on CogVideoX. | [CogVideoX DDIM Inversion Pipeline](#cogvideox-ddim-inversion-pipeline) | - | [LittleNyima](https://github.com/LittleNyima) |
-
+| FaithDiff Stable Diffusion XL Pipeline | Implementation of [(CVPR 2025) FaithDiff: Unleashing Diffusion Priors for Faithful Image Super-resolutionUnleashing Diffusion Priors for Faithful Image Super-resolution](https://arxiv.org/abs/2411.18824) - FaithDiff is a faithful image super-resolution method that leverages latent diffusion models by actively adapting the diffusion prior and jointly fine-tuning its components (encoder and diffusion model) with an alignment module to ensure high fidelity and structural consistency. | [FaithDiff Stable Diffusion XL Pipeline](#faithdiff-stable-diffusion-xl-pipeline) | [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/jychen9811/FaithDiff) | [Junyang Chen, Jinshan Pan, Jiangxin Dong, IMAG Lab, (Adapted by Eliseu Silva)](https://github.com/JyChen9811/FaithDiff) |
 To load a custom pipeline you just need to pass the `custom_pipeline` argument to `DiffusionPipeline`, as one of the files in `diffusers/examples/community`. Feel free to send a PR with your own pipelines, we will merge them quickly.
 
 ```py

@@ -5334,3 +5334,103 @@ output = pipeline_for_inversion(
 pipeline.export_latents_to_video(output.inverse_latents[-1], "path/to/inverse_video.mp4", fps=8)
 pipeline.export_latents_to_video(output.recon_latents[-1], "path/to/recon_video.mp4", fps=8)
 ```
+# FaithDiff Stable Diffusion XL Pipeline
+
+[Project](https://jychen9811.github.io/FaithDiff_page/) / [GitHub](https://github.com/JyChen9811/FaithDiff/)
+
+This the implementation of the FaithDiff pipeline for SDXL, adapted to use the HuggingFace Diffusers.
+
+For more details see the project links above.
+
+## Example Usage
+
+This example upscale and restores a low-quality image. The input image has a resolution of 512x512 and will be upscaled at a scale of 2x, to a final resolution of 1024x1024. It is possible to upscale to a larger scale, but it is recommended that the input image be at least 1024x1024 in these cases. To upscale this image by 4x, for example, it would be recommended to re-input the result into a new 2x processing, thus performing progressive scaling.
+
+````py
+import random
+import numpy as np
+import torch
+from diffusers import DiffusionPipeline, AutoencoderKL, UniPCMultistepScheduler
+from huggingface_hub import hf_hub_download
+from diffusers.utils import load_image
+from PIL import Image
+
+device = "cuda"
+dtype = torch.float16
+MAX_SEED = np.iinfo(np.int32).max
+
+# Download weights for additional unet layers
+model_file = hf_hub_download(
+    "jychen9811/FaithDiff",
+    filename="FaithDiff.bin", local_dir="./proc_data/faithdiff", local_dir_use_symlinks=False
+)
+
+# Initialize the models and pipeline
+vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=dtype)
+
+model_id = "SG161222/RealVisXL_V4.0"
+pipe = DiffusionPipeline.from_pretrained(
+    model_id,
+    torch_dtype=dtype,
+    vae=vae,
+    unet=None, #<- Do not load with original model.
+    custom_pipeline="pipeline_faithdiff_stable_diffusion_xl",
+    use_safetensors=True,
+    variant="fp16",
+).to(device)
+
+# Here we need use pipeline internal unet model
+pipe.unet = pipe.unet_model.from_pretrained(model_id, subfolder="unet", variant="fp16", use_safetensors=True)
+
+# Load aditional layers to the model
+pipe.unet.load_additional_layers(weight_path="proc_data/faithdiff/FaithDiff.bin", dtype=dtype)
+
+# Enable vae tiling
+pipe.set_encoder_tile_settings()
+pipe.enable_vae_tiling()
+
+# Optimization
+pipe.enable_model_cpu_offload()
+
+# Set selected scheduler
+pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
+
+#input params
+prompt = "The image features a woman in her 55s with blonde hair and a white shirt, smiling at the camera. She appears to be in a good mood and is wearing a white scarf around her neck. "
+upscale = 2 # scale here
+start_point = "lr" # or "noise"
+latent_tiled_overlap = 0.5
+latent_tiled_size = 1024
+
+# Load image
+lq_image = load_image("https://huggingface.co/datasets/DEVAIEXP/assets/resolve/main/woman.png")
+original_height = lq_image.height
+original_width = lq_image.width
+print(f"Current resolution: H:{original_height} x W:{original_width}")
+
+width = original_width * int(upscale)
+height = original_height * int(upscale)
+print(f"Final resolution: H:{height} x W:{width}")
+
+# Restoration
+image = lq_image.resize((width, height), Image.LANCZOS)
+input_image, width_init, height_init, width_now, height_now = pipe.check_image_size(image)
+
+generator = torch.Generator(device=device).manual_seed(random.randint(0, MAX_SEED))
+gen_image = pipe(lr_img=input_image,
+    prompt = prompt,
+    num_inference_steps=20,
+    guidance_scale=5,
+    generator=generator,
+    start_point=start_point,
+    height = height_now,
+    width=width_now,
+    overlap=latent_tiled_overlap,
+    target_size=(latent_tiled_size, latent_tiled_size)
+).images[0]
+
+cropped_image = gen_image.crop((0, 0, width_init, height_init))
+cropped_image.save("data/result.png")
+````
+### Result
+[<img src="https://huggingface.co/datasets/DEVAIEXP/assets/resolve/main/faithdiff_restored.PNG" width="512px" height="512px"/>](https://imgsli.com/MzY1NzE2)
