From 75757af33e1ee18b3c5efa62f8530ab0f29eeba3 Mon Sep 17 00:00:00 2001
From: regisss <15324346+regisss@users.noreply.github.com>
Date: Thu, 1 May 2025 16:53:08 +0200
Subject: [PATCH] Update Gaudi doc

---
 docs/source/en/_toctree.yml           |  2 +-
 docs/source/en/optimization/habana.md | 67 +++------------------------
 docs/source/ko/_toctree.yml           |  2 +-
 docs/source/ko/optimization/habana.md |  2 +-
 4 files changed, 10 insertions(+), 63 deletions(-)

diff --git a/docs/source/en/_toctree.yml b/docs/source/en/_toctree.yml
index 4e62f3ef6182..b364aa2ec45d 100644
--- a/docs/source/en/_toctree.yml
+++ b/docs/source/en/_toctree.yml
@@ -211,7 +211,7 @@
     - local: optimization/mps
       title: Metal Performance Shaders (MPS)
     - local: optimization/habana
-      title: Habana Gaudi
+      title: Intel Gaudi
     - local: optimization/neuron
       title: AWS Neuron
     title: Optimized hardware
diff --git a/docs/source/en/optimization/habana.md b/docs/source/en/optimization/habana.md
index 86a0cf0ba019..10c955cbc7b0 100644
--- a/docs/source/en/optimization/habana.md
+++ b/docs/source/en/optimization/habana.md
@@ -10,67 +10,14 @@ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express o
 specific language governing permissions and limitations under the License.
 -->
 
-# Habana Gaudi
+# Intel Gaudi
 
-🤗 Diffusers is compatible with Habana Gaudi through 🤗 [Optimum](https://huggingface.co/docs/optimum/habana/usage_guides/stable_diffusion). Follow the [installation](https://docs.habana.ai/en/latest/Installation_Guide/index.html) guide to install the SynapseAI and Gaudi drivers, and then install Optimum Habana:
+The Intel Gaudi AI accelerator family includes [Intel Gaudi 1](https://habana.ai/products/gaudi/), [Intel Gaudi 2](https://habana.ai/products/gaudi2/), and [Intel Gaudi 3](https://habana.ai/products/gaudi3/). Each server is equipped with 8 devices, known as Habana Processing Units (HPUs), providing 128GB of memory on Gaudi 3, 96GB on Gaudi 2, and 32GB on the first-gen Gaudi. For more details on the underlying hardware architecture, check out the [Gaudi Architecture](https://docs.habana.ai/en/latest/Gaudi_Overview/Gaudi_Architecture.html) overview.
 
-```bash
-python -m pip install --upgrade-strategy eager optimum[habana]
+Diffusers pipelines can easily be run on Intel Gaudi. Given a pipeline `my_pipeline`, you simply need to do the following:
+```py
+my_pipeline.to("hpu")
 ```
 
-To generate images with Stable Diffusion 1 and 2 on Gaudi, you need to instantiate two instances:
-
-- [`~optimum.habana.diffusers.GaudiStableDiffusionPipeline`], a pipeline for text-to-image generation.
-- [`~optimum.habana.diffusers.GaudiDDIMScheduler`], a Gaudi-optimized scheduler.
-
-When you initialize the pipeline, you have to specify `use_habana=True` to deploy it on HPUs and to get the fastest possible generation, you should enable **HPU graphs** with `use_hpu_graphs=True`.
-
-Finally, specify a [`~optimum.habana.GaudiConfig`] which can be downloaded from the [Habana](https://huggingface.co/Habana) organization on the Hub.
-
-```python
-from optimum.habana import GaudiConfig
-from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline
-
-model_name = "stabilityai/stable-diffusion-2-base"
-scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler")
-pipeline = GaudiStableDiffusionPipeline.from_pretrained(
-    model_name,
-    scheduler=scheduler,
-    use_habana=True,
-    use_hpu_graphs=True,
-    gaudi_config="Habana/stable-diffusion-2",
-)
-```
-
-Now you can call the pipeline to generate images by batches from one or several prompts:
-
-```python
-outputs = pipeline(
-    prompt=[
-        "High quality photo of an astronaut riding a horse in space",
-        "Face of a yellow cat, high resolution, sitting on a park bench",
-    ],
-    num_images_per_prompt=10,
-    batch_size=4,
-)
-```
-
-For more information, check out 🤗 Optimum Habana's [documentation](https://huggingface.co/docs/optimum/habana/usage_guides/stable_diffusion) and the [example](https://github.com/huggingface/optimum-habana/tree/main/examples/stable-diffusion) provided in the official GitHub repository.
-
-## Benchmark
-
-We benchmarked Habana's first-generation Gaudi and Gaudi2 with the [Habana/stable-diffusion](https://huggingface.co/Habana/stable-diffusion) and [Habana/stable-diffusion-2](https://huggingface.co/Habana/stable-diffusion-2) Gaudi configurations (mixed precision bf16/fp32) to demonstrate their performance.
-
-For [Stable Diffusion v1.5](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) on 512x512 images:
-
-| | Latency (batch size = 1) | Throughput |
-| ---------------------- |:------------------------:|:---------------------------:|
-| first-generation Gaudi | 3.80s | 0.308 images/s (batch size = 8) |
-| Gaudi2 | 1.33s | 1.081 images/s (batch size = 8) |
-
-For [Stable Diffusion v2.1](https://huggingface.co/stabilityai/stable-diffusion-2-1) on 768x768 images:
-
-| | Latency (batch size = 1) | Throughput |
-| ---------------------- |:------------------------:|:-------------------------------:|
-| first-generation Gaudi | 10.2s | 0.108 images/s (batch size = 4) |
-| Gaudi2 | 3.17s | 0.379 images/s (batch size = 8) |
+> [!TIP]
+> For Gaudi-optimized diffusion pipeline implementations, we recommend using [Optimum for Intel Gaudi](https://huggingface.co/docs/optimum/main/en/habana/index).
diff --git a/docs/source/ko/_toctree.yml b/docs/source/ko/_toctree.yml
index 05504cbadfd0..9bd5e8e9e240 100644
--- a/docs/source/ko/_toctree.yml
+++ b/docs/source/ko/_toctree.yml
@@ -175,7 +175,7 @@
     - local: optimization/mps
       title: Metal Performance Shaders (MPS)
     - local: optimization/habana
-      title: Habana Gaudi
+      title: Intel Gaudi
     title: 최적화된 하드웨어
   title: 추론 가속화와 메모리 줄이기
 - sections:
diff --git a/docs/source/ko/optimization/habana.md b/docs/source/ko/optimization/habana.md
index 917d24d78539..b44049569cf7 100644
--- a/docs/source/ko/optimization/habana.md
+++ b/docs/source/ko/optimization/habana.md
@@ -10,7 +10,7 @@ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express o
 specific language governing permissions and limitations under the License.
 -->
 
-# Habana Gaudi에서 Stable Diffusion을 사용하는 방법
+# Intel Gaudi에서 Stable Diffusion을 사용하는 방법
 
 🤗 Diffusers는 🤗 [Optimum Habana](https://huggingface.co/docs/optimum/habana/usage_guides/stable_diffusion)를 통해서 Habana Gaudi와 호환됩니다.
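To make the new `my_pipeline.to("hpu")` snippet concrete, here is a minimal sketch of the workflow the updated page describes, applied to a full Diffusers pipeline. It is not part of the patch: it assumes a Gaudi machine with the SynapseAI stack and the `habana_frameworks` PyTorch bridge installed (importing `habana_frameworks.torch.core` is assumed to register the `hpu` device), and the model id and `bfloat16` dtype are illustrative choices only.

```py
# Minimal sketch (not part of the patch): running a Diffusers pipeline on an
# Intel Gaudi device via `.to("hpu")`, as the updated doc describes.
import torch
from diffusers import StableDiffusionPipeline

# Assumption: importing the Habana PyTorch bridge registers the "hpu" device.
import habana_frameworks.torch.core  # noqa: F401

pipeline = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-base",  # same model id as the removed example
    torch_dtype=torch.bfloat16,             # bf16 mirrors the mixed-precision setup of the old benchmark
)
pipeline.to("hpu")  # the call recommended by the updated doc

image = pipeline("High quality photo of an astronaut riding a horse in space").images[0]
image.save("astronaut.png")
```

For Gaudi-optimized pipelines (e.g. the HPU-graph support the removed text enabled via `use_hpu_graphs=True`), the TIP added by the patch points to Optimum for Intel Gaudi instead.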