Commit 4b50ecc
Correct sdxl docs (#4058)
1 parent 99b540b
File tree: 1 file changed (+12 −12 lines)

docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_xl.mdx

Lines changed: 12 additions & 12 deletions
@@ -134,19 +134,19 @@ image = refiner(prompt=prompt, num_inference_steps=n_steps, denoising_start=high
 
 Let's have a look at the image
 
-![lion_ref](https://huggingface.co/datasets/huggingface/documentation-images/blob/main/diffusers/lion_refined.png)
+| Original Image | Ensemble of Denoisers Experts |
+|---|---|
+| ![lion_base](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lion_base.png) | ![lion_ref](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lion_refined.png)
 
 If we would have just run the base model on the same 40 steps, the image would have been arguably less detailed (e.g. the lion eyes and nose):
 
-![lion_base](https://huggingface.co/datasets/huggingface/documentation-images/blob/main/diffusers/lion_base.png)
-
 <Tip>
 
 The ensemble-of-experts method works well on all available schedulers!
 
 </Tip>
 
-#### Refining the image output from fully denoised base image
+#### 2.) Refining the image output from fully denoised base image
 
 In standard [`StableDiffusionImg2ImgPipeline`]-fashion, the fully-denoised image generated of the base model
 can be further improved using the [refiner checkpoint](huggingface.co/stabilityai/stable-diffusion-xl-refiner-0.9).
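The `denoising_start` split referenced in the hunk above (the base model denoises the high-noise steps, the refiner the rest) comes down to simple arithmetic over the step schedule. A minimal sketch, assuming the excerpt's 40 steps and an assumed fraction value of 0.8 (the variable name `high_noise_frac` and its value are not visible in this excerpt and are illustrative only, not the diffusers API):

```python
# Illustrative sketch (not diffusers code): how a denoising-fraction split
# divides a 40-step schedule between the base model and the refiner.
# `high_noise_frac = 0.8` is an assumed value for illustration.
n_steps = 40
high_noise_frac = 0.8  # base handles the first (high-noise) portion of steps

base_steps = round(n_steps * high_noise_frac)  # steps run by the base model
refiner_steps = n_steps - base_steps           # steps run by the refiner

print(base_steps, refiner_steps)  # → 32 8
```

With these assumed values, the base model runs 32 of the 40 steps and the refiner finishes the last 8, which is why running the base alone on the same 40 steps can look less detailed.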
@@ -179,6 +179,10 @@ image = pipe(prompt=prompt, output_type="latent" if use_refiner else "pil").imag
 image = refiner(prompt=prompt, image=image[None, :]).images[0]
 ```
 
+| Original Image | Refined Image |
+|---|---|
+| ![](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/sd_xl/init_image.png) | ![](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/sd_xl/refined_image.png) |
+
 ### Image-to-image
 
 ```py
@@ -197,10 +201,6 @@ prompt = "a photo of an astronaut riding a horse on mars"
 image = pipe(prompt, image=init_image).images[0]
 ```
 
-| Original Image | Refined Image |
-|---|---|
-| ![](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/sd_xl/init_image.png) | ![](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/sd_xl/refined_image.png) |
-
 ### Loading single file checkpoints / original file format
 
 By making use of [`~diffusers.loaders.FromSingleFileMixin.from_single_file`] you can also load the
@@ -210,13 +210,13 @@ original file format into `diffusers`:
 from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline
 import torch
 
-pipe = StableDiffusionXLPipeline.from_pretrained(
-    "stabilityai/stable-diffusion-xl-base-0.9", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
+pipe = StableDiffusionXLPipeline.from_single_file(
+    "./sd_xl_base_0.9.safetensors", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
 )
 pipe.to("cuda")
 
-refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
-    "stabilityai/stable-diffusion-xl-refiner-0.9", torch_dtype=torch.float16, use_safetensors=True, variant="fp16"
+refiner = StableDiffusionXLImg2ImgPipeline.from_single_file(
+    "./sd_xl_refiner_0.9.safetensors", torch_dtype=torch.float16, use_safetensors=True, variant="fp16"
 )
 refiner.to("cuda")
 ```
