Commit 54dac3a

Fix enable_sequential_cpu_offload in CogView4Pipeline (#11195)
* Fix enable_sequential_cpu_offload in CogView4Pipeline
* make fix-copies
1 parent e5c6027 commit 54dac3a

File tree

2 files changed: +2 -6 lines changed


src/diffusers/pipelines/cogview4/pipeline_cogview4.py

Lines changed: 1 addition & 3 deletions

@@ -213,9 +213,7 @@ def _get_glm_embeds(
             device=text_input_ids.device,
         )
         text_input_ids = torch.cat([pad_ids, text_input_ids], dim=1)
-        prompt_embeds = self.text_encoder(
-            text_input_ids.to(self.text_encoder.device), output_hidden_states=True
-        ).hidden_states[-2]
+        prompt_embeds = self.text_encoder(text_input_ids.to(device), output_hidden_states=True).hidden_states[-2]

         prompt_embeds = prompt_embeds.to(dtype=dtype, device=device)
         return prompt_embeds

src/diffusers/pipelines/cogview4/pipeline_cogview4_control.py

Lines changed: 1 addition & 3 deletions

@@ -216,9 +216,7 @@ def _get_glm_embeds(
             device=text_input_ids.device,
         )
         text_input_ids = torch.cat([pad_ids, text_input_ids], dim=1)
-        prompt_embeds = self.text_encoder(
-            text_input_ids.to(self.text_encoder.device), output_hidden_states=True
-        ).hidden_states[-2]
+        prompt_embeds = self.text_encoder(text_input_ids.to(device), output_hidden_states=True).hidden_states[-2]

         prompt_embeds = prompt_embeds.to(dtype=dtype, device=device)
         return prompt_embeds
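The shape of the bug in both hunks: under sequential CPU offload, the text encoder is parked on CPU between calls and only moved to the execution device by a pre-forward hook, so reading `self.text_encoder.device` *before* the forward call reports the offload device rather than where the computation will run. Moving inputs to the pipeline's `device` argument instead avoids the mismatch. Below is a minimal torch-free sketch of that hook behavior; the `OffloadedEncoder` class and its attribute names are illustrative inventions, not the diffusers/accelerate API.

```python
class OffloadedEncoder:
    """Toy stand-in for a module under sequential CPU offload."""

    def __init__(self, execution_device):
        self.device = "cpu"                      # parked on CPU between calls
        self.execution_device = execution_device

    def __call__(self, input_device):
        # Pre-forward hook: weights are moved just-in-time for this call.
        self.device = self.execution_device
        if input_device != self.device:
            raise RuntimeError(
                f"device mismatch: inputs on {input_device}, weights on {self.device}"
            )
        result = f"embeds on {self.device}"
        self.device = "cpu"                      # offloaded again after forward
        return result


encoder = OffloadedEncoder("cuda:0")

# Buggy pattern (old code): inputs follow the module's *current* device,
# which is still "cpu" before the hook has fired.
buggy_target = encoder.device                    # "cpu" -> mismatch at forward time

# Fixed pattern (new code): inputs follow the pipeline's execution device.
fixed = encoder("cuda:0")                        # runs cleanly
```

Calling `encoder(buggy_target)` here would raise the device-mismatch error, which mirrors the failure `enable_sequential_cpu_offload` hit before this commit.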
