
Commit 982f9b3

Add Example of IPAdapterScaleCutoffCallback to Docs (#10934)
* Add example of Ip-Adapter-Callback.
* Add image links from HF Hub.
1 parent c9a219b commit 982f9b3

File tree: 1 file changed (+78 −0 lines)


docs/source/en/using-diffusers/callback.md

Lines changed: 78 additions & 0 deletions
@@ -157,6 +157,84 @@ pipeline(
)
```

## IP Adapter Cutoff

IP Adapter is an image prompt adapter that can be used with diffusion models without any changes to the underlying model. We can use the IP Adapter Cutoff Callback to disable the IP Adapter after a certain number of denoising steps. To set up the callback, specify the denoising step at which the cutoff takes effect using one of these two arguments (see the short sketch after this list):

- `cutoff_step_ratio`: Float with the cutoff step given as a ratio of the total number of steps.
- `cutoff_step_index`: Integer with the exact index of the cutoff step.

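For example, here is a minimal sketch of the two options (the `0.2` ratio is just an illustrative value; as in the full example below, the unused ratio is passed as `None` when cutting off at an exact index):

```py
from diffusers.callbacks import IPAdapterScaleCutoffCallback

# cut off the IP Adapter once 20% of the denoising steps have completed ...
callback = IPAdapterScaleCutoffCallback(cutoff_step_ratio=0.2)

# ... or cut it off at an exact step index, passing the unused ratio as None
callback = IPAdapterScaleCutoffCallback(cutoff_step_ratio=None, cutoff_step_index=5)
```
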
We need to download the diffusion model and load the IP Adapter for it as follows:

```py
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda")
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin")
pipeline.set_ip_adapter_scale(0.6)
```

The setup for the callback should look something like this:

```py
from diffusers import AutoPipelineForText2Image
from diffusers.callbacks import IPAdapterScaleCutoffCallback
from diffusers.utils import load_image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16
).to("cuda")

pipeline.load_ip_adapter(
    "h94/IP-Adapter",
    subfolder="sdxl_models",
    weight_name="ip-adapter_sdxl.bin"
)

pipeline.set_ip_adapter_scale(0.6)

# disable the IP Adapter (scale set to 0) after denoising step 5
callback = IPAdapterScaleCutoffCallback(
    cutoff_step_ratio=None,
    cutoff_step_index=5
)

# reference image used as the image prompt for the IP Adapter
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_diner.png"
)

generator = torch.Generator(device="cuda").manual_seed(2628670641)

images = pipeline(
    prompt="a tiger sitting in a chair drinking orange juice",
    ip_adapter_image=image,
    negative_prompt="deformed, ugly, wrong proportion, low res, bad anatomy, worst quality, low quality",
    generator=generator,
    num_inference_steps=50,
    callback_on_step_end=callback,
).images

images[0].save("custom_callback_img.png")
```

<div class="flex gap-4">
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/without_callback.png" alt="generated image of a tiger sitting in a chair drinking orange juice" />
    <figcaption class="mt-2 text-center text-sm text-gray-500">without IPAdapterScaleCutoffCallback</figcaption>
  </div>
  <div>
    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/with_callback2.png" alt="generated image of a tiger sitting in a chair drinking orange juice with ip adapter callback" />
    <figcaption class="mt-2 text-center text-sm text-gray-500">with IPAdapterScaleCutoffCallback</figcaption>
  </div>
</div>

## Display image after each generation step

> [!TIP]
