sync head #5
Commits:

* update; raise warning and round to nearest multiple of scale factor
* …already set (#10918): bug fix in ltx; assume packed latents (Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>, YiYi Xu <yixu310@gmail.com>)
* no cuda only
* update (×16 squashed "update" commits)
* …XPU (#11191) (Signed-off-by: YAO Matrix <matrix.yao@intel.com>)
* Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* fix: optional components verification on load
* rewrite memory count without implicitly using dimensions (by @ic-synth); replace F.pad by built-in padding in Conv3D; in-place sums to reduce memory allocations; fixed trailing whitespace; file reformatted; simpler in-place expressions; removed in-place sums (may affect backward-propagation logic); reverted change
* …e dtype (#10301): allow models to run with a user-provided dtype map instead of a single dtype; make style; add warning, change `_` to `default`; make style; add test; handle shared tensors; remove warning (Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>)
* …#11197: add xpu part; fix more cases; remove some cases; no canny; format fix
* …ization=False in test (#11196)
* Fix enable_sequential_cpu_offload in CogView4Pipeline; make fix-copies
* added onnxruntime-vitisai for custom-built onnxruntime pkg
* update (×3 squashed "update" commits)
* …11188: feat: [Community Pipeline] FaithDiff Stable Diffusion XL Pipeline for Image SR; added pipeline
* model card gen code; push modelcard creation; remove optional from params; add import; add use_dora check; correct lora var use in tags; make style && make quality (Co-authored-by: Aryan <aryan@huggingface.co>, Sayak Paul <spsayakpaul@gmail.com>)
* Change LoRA Loader to StableDiffusion: replace the SDXL LoRA Loader Mixin inheritance with the StableDiffusion one
* update style bot workflow
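One commit above mentions raising a warning and rounding to the nearest multiple of a scale factor. As a rough illustration of that pattern (a hypothetical helper, not the actual diffusers implementation), the idea looks like this:

```python
import warnings

def round_to_multiple(value: int, scale_factor: int) -> int:
    """Round value to the nearest multiple of scale_factor, warning if adjusted.

    Hypothetical sketch of the behavior described in the commit message;
    the real function name and signature in diffusers may differ.
    """
    rounded = round(value / scale_factor) * scale_factor
    if rounded != value:
        warnings.warn(
            f"{value} is not a multiple of {scale_factor}; "
            f"rounding to {rounded}."
        )
    return rounded

print(round_to_multiple(130, 8))  # → 128, with a warning
print(round_to_multiple(128, 8))  # → 128, no warning
```

Note that Python's built-in `round` uses banker's rounding, so exact halfway cases round toward the even multiple.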
Caution: Review failed. The pull request is closed.

Walkthrough

This update removes an outdated pre-commit script from the GitHub Actions workflow and improves documentation by adding a new "Video to Video Generation" section with detailed examples and corrections. Environment-variable instructions in the installation guide have been updated. Several example scripts have been refactored to improve image and model-card handling. The core library has received type-annotation improvements, caching features, and more flexible padding and latent processing. A new pipeline, WanVideoToVideoPipeline, with supporting tests, has been added alongside related enhancements in pipelines, utilities, and test modules.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Pipeline as WanVideoToVideoPipeline
    participant Tokenizer
    participant TextEncoder
    participant VAE
    participant Scheduler
    User->>Pipeline: Provide video input & text prompt
    Pipeline->>Tokenizer: Tokenize text prompt
    Tokenizer->>TextEncoder: Send tokens
    TextEncoder-->>Pipeline: Return prompt embeddings
    Pipeline->>VAE: Prepare latents from video input
    Pipeline->>Scheduler: Retrieve timesteps
    loop Denoising Loop
        Scheduler->>Pipeline: Next timestep information
        Pipeline->>VAE: Update latents based on model output
    end
    Pipeline-->>User: Return processed video output
```
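The control flow in the sequence diagram above can be sketched as self-contained Python with stand-in components. None of these function names (`encode_prompt`, `prepare_latents`, `step`, `run`) are the actual diffusers WanVideoToVideoPipeline API; this only mirrors the order of operations shown in the diagram:

```python
def encode_prompt(prompt: str) -> list[float]:
    # Stand-in for tokenizer + text encoder: map characters to floats.
    return [ord(c) / 128.0 for c in prompt]

def prepare_latents(video: list[float], strength: float) -> list[float]:
    # Stand-in for VAE encoding of the input video into latents.
    return [v * strength for v in video]

def step(latents: list[float], embeddings: list[float], t: int) -> list[float]:
    # Stand-in scheduler/denoiser step: nudge latents toward the
    # mean of the prompt embeddings.
    target = sum(embeddings) / len(embeddings)
    return [l + (target - l) * 0.1 for l in latents]

def run(video: list[float], prompt: str,
        num_steps: int = 4, strength: float = 0.6) -> list[float]:
    embeddings = encode_prompt(prompt)          # tokenize + encode prompt
    latents = prepare_latents(video, strength)  # VAE prepares latents
    for t in range(num_steps):                  # denoising loop over timesteps
        latents = step(latents, embeddings, t)
    return latents                              # stand-in for decoded output

out = run([0.1, 0.2, 0.3], "a cat surfing")
print(len(out))  # output has the same length as the input video
```

The real pipeline additionally handles batching, guidance, and device placement, which this sketch omits.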