source | url | file_type | chunk | chunk_id
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md | https://huggingface.co/docs/diffusers/en/optimization/memory/#cpu-offloading | .md | [`~StableDiffusionPipeline.enable_sequential_cpu_offload`] is a stateful operation that installs hooks on the models.
</Tip> | 10_4_4 |
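For context, a minimal sketch of enabling sequential CPU offloading on a pipeline (the model id and prompt are illustrative, mirroring the other examples in this guide):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load in half precision; the pipeline can stay on the CPU because sequential
# offloading moves each submodule to the GPU only when it is needed.
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
)

# Stateful call: installs hooks on the submodules so they are shuttled
# between the CPU and GPU during inference.
pipe.enable_sequential_cpu_offload()

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```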
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md | https://huggingface.co/docs/diffusers/en/optimization/memory/#model-offloading | .md | <Tip>
Model offloading requires 🤗 Accelerate version 0.17.0 or higher.
</Tip>
[Sequential CPU offloading](#cpu-offloading) preserves a lot of memory but it makes inference slower because submodules are moved to GPU as needed, and they're immediately returned to the CPU when a new module runs. | 10_5_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md | https://huggingface.co/docs/diffusers/en/optimization/memory/#model-offloading | .md | Full-model offloading is an alternative that moves whole models to the GPU, instead of handling each model's constituent *submodules*. There is a negligible impact on inference time (compared with moving the pipeline to `cuda`), and it still provides some memory savings.
During model offloading, only one of the main components of the pipeline (typically the text encoder, UNet and VAE) | 10_5_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md | https://huggingface.co/docs/diffusers/en/optimization/memory/#model-offloading | .md | During model offloading, only one of the main components of the pipeline (typically the text encoder, UNet and VAE)
is placed on the GPU while the others wait on the CPU. Components like the UNet that run for multiple iterations stay on the GPU until they're no longer needed.
Enable model offloading by calling [`~StableDiffusionPipeline.enable_model_cpu_offload`] on the pipeline:
```Python
import torch
from diffusers import StableDiffusionPipeline | 10_5_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md | https://huggingface.co/docs/diffusers/en/optimization/memory/#model-offloading | .md | pipe = StableDiffusionPipeline.from_pretrained(
"stable-diffusion-v1-5/stable-diffusion-v1-5",
torch_dtype=torch.float16,
use_safetensors=True,
) | 10_5_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md | https://huggingface.co/docs/diffusers/en/optimization/memory/#model-offloading | .md | prompt = "a photo of an astronaut riding a horse on mars"
pipe.enable_model_cpu_offload()
image = pipe(prompt).images[0]
```
<Tip warning={true}> | 10_5_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md | https://huggingface.co/docs/diffusers/en/optimization/memory/#model-offloading | .md | image = pipe(prompt).images[0]
```
<Tip warning={true}>
To properly offload models after they're called, the entire pipeline must be run and the models must be called in the pipeline's expected order. Exercise caution if models are reused outside the pipeline context after hooks have been installed. See [Removing Hooks](https://huggingface.co/docs/accelerate/en/package_reference/big_modeling#accelerate.hooks.remove_hook_from_module) for more information. | 10_5_5 |
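If a model does need to be reused outside the pipeline, a hedged sketch of clearing the hooks with the Accelerate utility linked above (continuing from the `pipe` object created earlier; applying it to the UNet and moving it to the GPU afterwards are illustrative choices):
```python
from accelerate.hooks import remove_hook_from_module

# enable_model_cpu_offload() attaches an Accelerate hook to each main component.
# Remove the hooks recursively before using the model as a standalone module.
remove_hook_from_module(pipe.unet, recurse=True)

# The model is no longer managed by offloading, so place it explicitly.
pipe.unet.to("cuda")
```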
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md | https://huggingface.co/docs/diffusers/en/optimization/memory/#model-offloading | .md | [`~StableDiffusionPipeline.enable_model_cpu_offload`] is a stateful operation that installs hooks on the models and state on the pipeline.
</Tip> | 10_5_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md | https://huggingface.co/docs/diffusers/en/optimization/memory/#channels-last-memory-format | .md | The channels-last memory format is an alternative way of ordering NCHW tensors in memory to preserve dimension ordering. Channels-last tensors are ordered in such a way that the channels become the densest dimension (storing images pixel-per-pixel). Since not all operators currently support the channels-last format, it may result in worse performance, but you should still try it to see if it works for your model.
For example, to set the pipeline's UNet to use the channels-last format:
```python | 10_6_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md | https://huggingface.co/docs/diffusers/en/optimization/memory/#channels-last-memory-format | .md | For example, to set the pipeline's UNet to use the channels-last format:
```python
print(pipe.unet.conv_out.state_dict()["weight"].stride()) # (2880, 9, 3, 1)
pipe.unet.to(memory_format=torch.channels_last) # in-place operation
print(
pipe.unet.conv_out.state_dict()["weight"].stride()
) # (2880, 1, 960, 320) having a stride of 1 for the 2nd dimension proves that it works
``` | 10_6_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md | https://huggingface.co/docs/diffusers/en/optimization/memory/#tracing | .md | Tracing runs an example input tensor through the model and captures the operations that are performed on it as that input makes its way through the model's layers. The executable or `ScriptFunction` that is returned is optimized with just-in-time compilation.
To trace a UNet:
```python
import time
import torch
from diffusers import StableDiffusionPipeline
import functools
# torch disable grad
torch.set_grad_enabled(False)
# set variables
n_experiments = 2
unet_runs_per_experiment = 50 | 10_7_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md | https://huggingface.co/docs/diffusers/en/optimization/memory/#tracing | .md | # torch disable grad
torch.set_grad_enabled(False)
# set variables
n_experiments = 2
unet_runs_per_experiment = 50
# load inputs
def generate_inputs():
sample = torch.randn((2, 4, 64, 64), device="cuda", dtype=torch.float16)
timestep = torch.rand(1, device="cuda", dtype=torch.float16) * 999
encoder_hidden_states = torch.randn((2, 77, 768), device="cuda", dtype=torch.float16)
return sample, timestep, encoder_hidden_states | 10_7_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md | https://huggingface.co/docs/diffusers/en/optimization/memory/#tracing | .md | pipe = StableDiffusionPipeline.from_pretrained(
"stable-diffusion-v1-5/stable-diffusion-v1-5",
torch_dtype=torch.float16,
use_safetensors=True,
).to("cuda")
unet = pipe.unet
unet.eval()
unet.to(memory_format=torch.channels_last) # use channels_last memory format
unet.forward = functools.partial(unet.forward, return_dict=False) # set return_dict=False as default
# warmup
for _ in range(3):
with torch.inference_mode():
inputs = generate_inputs()
orig_output = unet(*inputs) | 10_7_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md | https://huggingface.co/docs/diffusers/en/optimization/memory/#tracing | .md | # warmup
for _ in range(3):
with torch.inference_mode():
inputs = generate_inputs()
orig_output = unet(*inputs)
# trace
print("tracing..")
unet_traced = torch.jit.trace(unet, inputs)
unet_traced.eval()
print("done tracing")
# warmup and optimize graph
for _ in range(5):
with torch.inference_mode():
inputs = generate_inputs()
orig_output = unet_traced(*inputs) | 10_7_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md | https://huggingface.co/docs/diffusers/en/optimization/memory/#tracing | .md | # benchmarking
with torch.inference_mode():
for _ in range(n_experiments):
torch.cuda.synchronize()
start_time = time.time()
for _ in range(unet_runs_per_experiment):
orig_output = unet_traced(*inputs)
torch.cuda.synchronize()
print(f"unet traced inference took {time.time() - start_time:.2f} seconds")
for _ in range(n_experiments):
torch.cuda.synchronize()
start_time = time.time()
for _ in range(unet_runs_per_experiment):
orig_output = unet(*inputs)
torch.cuda.synchronize() | 10_7_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md | https://huggingface.co/docs/diffusers/en/optimization/memory/#tracing | .md | start_time = time.time()
for _ in range(unet_runs_per_experiment):
orig_output = unet(*inputs)
torch.cuda.synchronize()
print(f"unet inference took {time.time() - start_time:.2f} seconds") | 10_7_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md | https://huggingface.co/docs/diffusers/en/optimization/memory/#tracing | .md | # save the model
unet_traced.save("unet_traced.pt")
```
Replace the `unet` attribute of the pipeline with the traced model:
```python
from diffusers import StableDiffusionPipeline
import torch
from dataclasses import dataclass
@dataclass
class UNet2DConditionOutput:
sample: torch.Tensor
pipe = StableDiffusionPipeline.from_pretrained(
"stable-diffusion-v1-5/stable-diffusion-v1-5",
torch_dtype=torch.float16,
use_safetensors=True,
).to("cuda") | 10_7_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md | https://huggingface.co/docs/diffusers/en/optimization/memory/#tracing | .md | # use jitted unet
unet_traced = torch.jit.load("unet_traced.pt")
# del pipe.unet
class TracedUNet(torch.nn.Module):
def __init__(self):
super().__init__()
self.in_channels = pipe.unet.config.in_channels
self.device = pipe.unet.device
def forward(self, latent_model_input, t, encoder_hidden_states):
sample = unet_traced(latent_model_input, t, encoder_hidden_states)[0]
return UNet2DConditionOutput(sample=sample)
pipe.unet = TracedUNet() | 10_7_7 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md | https://huggingface.co/docs/diffusers/en/optimization/memory/#tracing | .md | pipe.unet = TracedUNet()
with torch.inference_mode():
image = pipe([prompt] * 1, num_inference_steps=50).images[0]
``` | 10_7_8 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md | https://huggingface.co/docs/diffusers/en/optimization/memory/#memory-efficient-attention | .md | Recent work on optimizing bandwidth in the attention block has generated huge speed-ups and reductions in GPU memory usage. The most recent type of memory-efficient attention is [Flash Attention](https://arxiv.org/abs/2205.14135) (you can check out the original code at [HazyResearch/flash-attention](https://github.com/HazyResearch/flash-attention)).
<Tip>
If you have PyTorch >= 2.0 installed, you should not expect a speed-up for inference when enabling `xformers`.
</Tip> | 10_8_0 |
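Given the tip above, a hedged sketch of relying on PyTorch 2.0's scaled dot product attention instead of xFormers; `AttnProcessor2_0` is the Diffusers processor that wraps it, and setting it explicitly is shown for illustration since it is already the default on PyTorch 2.0+:
```python
import torch
from diffusers import DiffusionPipeline
from diffusers.models.attention_processor import AttnProcessor2_0

pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda")

# Use PyTorch's native memory-efficient attention kernel.
pipe.unet.set_attn_processor(AttnProcessor2_0())

with torch.inference_mode():
    image = pipe("a small cat").images[0]
```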
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md | https://huggingface.co/docs/diffusers/en/optimization/memory/#memory-efficient-attention | .md | If you have PyTorch >= 2.0 installed, you should not expect a speed-up for inference when enabling `xformers`.
</Tip>
To use Flash Attention, install the following:
- PyTorch > 1.12
- CUDA available
- [xFormers](xformers)
Then call [`~ModelMixin.enable_xformers_memory_efficient_attention`] on the pipeline:
```python
from diffusers import DiffusionPipeline
import torch | 10_8_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/memory.md | https://huggingface.co/docs/diffusers/en/optimization/memory/#memory-efficient-attention | .md | pipe = DiffusionPipeline.from_pretrained(
"stable-diffusion-v1-5/stable-diffusion-v1-5",
torch_dtype=torch.float16,
use_safetensors=True,
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()
with torch.inference_mode():
sample = pipe("a small cat")
# optional: You can disable it via
# pipe.disable_xformers_memory_efficient_attention()
```
The iteration speed when using `xformers` should match the iteration speed of PyTorch 2.0 as described [here](torch2.0). | 10_8_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/neuron.md | https://huggingface.co/docs/diffusers/en/optimization/neuron/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 11_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/neuron.md | https://huggingface.co/docs/diffusers/en/optimization/neuron/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 11_0_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/neuron.md | https://huggingface.co/docs/diffusers/en/optimization/neuron/#aws-neuron | .md | Diffusers functionalities are available on [AWS Inf2 instances](https://aws.amazon.com/ec2/instance-types/inf2/), which are EC2 instances powered by [Neuron machine learning accelerators](https://aws.amazon.com/machine-learning/inferentia/). These instances aim to provide better compute performance (higher throughput, lower latency) with good cost-efficiency, making them good candidates for AWS users to deploy diffusion models to production. | 11_1_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/neuron.md | https://huggingface.co/docs/diffusers/en/optimization/neuron/#aws-neuron | .md | [Optimum Neuron](https://huggingface.co/docs/optimum-neuron/en/index) is the interface between Hugging Face libraries and AWS Accelerators, including AWS [Trainium](https://aws.amazon.com/machine-learning/trainium/) and AWS [Inferentia](https://aws.amazon.com/machine-learning/inferentia/). It supports many of the features in Diffusers with similar APIs, so it is easier to learn if you're already familiar with Diffusers. Once you have created an AWS Inf2 instance, install Optimum Neuron.
```bash | 11_1_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/neuron.md | https://huggingface.co/docs/diffusers/en/optimization/neuron/#aws-neuron | .md | ```bash
python -m pip install --upgrade-strategy eager optimum[neuronx]
```
<Tip>
We provide pre-built [Hugging Face Neuron Deep Learning AMI](https://aws.amazon.com/marketplace/pp/prodview-gr3e6yiscria2) (DLAMI) and Optimum Neuron containers for Amazon SageMaker. We recommend using them to correctly set up your environment.
</Tip> | 11_1_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/neuron.md | https://huggingface.co/docs/diffusers/en/optimization/neuron/#aws-neuron | .md | </Tip>
The example below demonstrates how to generate images with the Stable Diffusion XL model on an inf2.8xlarge instance (you can switch to cheaper inf2.xlarge instances once the model is compiled). To generate some images, use the [`~optimum.neuron.NeuronStableDiffusionXLPipeline`] class, which is similar to the [`StableDiffusionXLPipeline`] class in Diffusers. | 11_1_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/neuron.md | https://huggingface.co/docs/diffusers/en/optimization/neuron/#aws-neuron | .md | Unlike Diffusers, you need to compile models in the pipeline to the Neuron format, `.neuron`. Launch the following command to export the model to the `.neuron` format.
```bash
optimum-cli export neuron --model stabilityai/stable-diffusion-xl-base-1.0 \
--batch_size 1 \
--height 1024 `# height in pixels of generated image, eg. 768, 1024` \
--width 1024 `# width in pixels of generated image, eg. 768, 1024` \
--num_images_per_prompt 1 `# number of images to generate per prompt, defaults to 1` \ | 11_1_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/neuron.md | https://huggingface.co/docs/diffusers/en/optimization/neuron/#aws-neuron | .md | --num_images_per_prompt 1 `# number of images to generate per prompt, defaults to 1` \
--auto_cast matmul `# cast only matrix multiplication operations` \
--auto_cast_type bf16 `# cast operations from FP32 to BF16` \
sd_neuron_xl/
```
Now generate some images with the pre-compiled SDXL model.
```python
>>> from optimum.neuron import NeuronStableDiffusionXLPipeline | 11_1_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/neuron.md | https://huggingface.co/docs/diffusers/en/optimization/neuron/#aws-neuron | .md | >>> stable_diffusion_xl = NeuronStableDiffusionXLPipeline.from_pretrained("sd_neuron_xl/")
>>> prompt = "a pig with wings flying in floating US dollar banknotes in the air, skyscrapers behind, warm color palette, muted colors, detailed, 8k"
>>> image = stable_diffusion_xl(prompt).images[0]
```
<img
src="https://huggingface.co/datasets/Jingya/document_images/resolve/main/optimum/neuron/sdxl_pig.png"
width="256"
height="256"
alt="peggy generated by sdxl on inf2"
/> | 11_1_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/neuron.md | https://huggingface.co/docs/diffusers/en/optimization/neuron/#aws-neuron | .md | width="256"
height="256"
alt="peggy generated by sdxl on inf2"
/>
Feel free to check out more guides and examples on different use cases from the Optimum Neuron [documentation](https://huggingface.co/docs/optimum-neuron/en/inference_tutorials/stable_diffusion#generate-images-with-stable-diffusion-models-on-aws-inferentia)! | 11_1_7 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/habana.md | https://huggingface.co/docs/diffusers/en/optimization/habana/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 12_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/habana.md | https://huggingface.co/docs/diffusers/en/optimization/habana/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 12_0_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/habana.md | https://huggingface.co/docs/diffusers/en/optimization/habana/#habana-gaudi | .md | 🤗 Diffusers is compatible with Habana Gaudi through 🤗 [Optimum](https://huggingface.co/docs/optimum/habana/usage_guides/stable_diffusion). Follow the [installation](https://docs.habana.ai/en/latest/Installation_Guide/index.html) guide to install the SynapseAI and Gaudi drivers, and then install Optimum Habana:
```bash
python -m pip install --upgrade-strategy eager optimum[habana]
```
To generate images with Stable Diffusion 1 and 2 on Gaudi, you need to instantiate two instances: | 12_1_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/habana.md | https://huggingface.co/docs/diffusers/en/optimization/habana/#habana-gaudi | .md | ```
To generate images with Stable Diffusion 1 and 2 on Gaudi, you need to instantiate two instances:
- [`~optimum.habana.diffusers.GaudiStableDiffusionPipeline`], a pipeline for text-to-image generation.
- [`~optimum.habana.diffusers.GaudiDDIMScheduler`], a Gaudi-optimized scheduler.
When you initialize the pipeline, you have to specify `use_habana=True` to deploy it on HPUs and to get the fastest possible generation, you should enable **HPU graphs** with `use_hpu_graphs=True`. | 12_1_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/habana.md | https://huggingface.co/docs/diffusers/en/optimization/habana/#habana-gaudi | .md | Finally, specify a [`~optimum.habana.GaudiConfig`] which can be downloaded from the [Habana](https://huggingface.co/Habana) organization on the Hub.
```python
from optimum.habana import GaudiConfig
from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline | 12_1_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/habana.md | https://huggingface.co/docs/diffusers/en/optimization/habana/#habana-gaudi | .md | model_name = "stabilityai/stable-diffusion-2-base"
scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler")
pipeline = GaudiStableDiffusionPipeline.from_pretrained(
model_name,
scheduler=scheduler,
use_habana=True,
use_hpu_graphs=True,
gaudi_config="Habana/stable-diffusion-2",
)
```
Now you can call the pipeline to generate images by batches from one or several prompts:
```python
outputs = pipeline(
prompt=[
"High quality photo of an astronaut riding a horse in space", | 12_1_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/habana.md | https://huggingface.co/docs/diffusers/en/optimization/habana/#habana-gaudi | .md | ```python
outputs = pipeline(
prompt=[
"High quality photo of an astronaut riding a horse in space",
"Face of a yellow cat, high resolution, sitting on a park bench",
],
num_images_per_prompt=10,
batch_size=4,
)
```
For more information, check out 🤗 Optimum Habana's [documentation](https://huggingface.co/docs/optimum/habana/usage_guides/stable_diffusion) and the [example](https://github.com/huggingface/optimum-habana/tree/main/examples/stable-diffusion) provided in the official GitHub repository. | 12_1_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/habana.md | https://huggingface.co/docs/diffusers/en/optimization/habana/#benchmark | .md | We benchmarked Habana's first-generation Gaudi and Gaudi2 with the [Habana/stable-diffusion](https://huggingface.co/Habana/stable-diffusion) and [Habana/stable-diffusion-2](https://huggingface.co/Habana/stable-diffusion-2) Gaudi configurations (mixed precision bf16/fp32) to demonstrate their performance.
For [Stable Diffusion v1.5](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5) on 512x512 images:
| | Latency (batch size = 1) | Throughput | | 12_2_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/habana.md | https://huggingface.co/docs/diffusers/en/optimization/habana/#benchmark | .md | | | Latency (batch size = 1) | Throughput |
| ---------------------- |:------------------------:|:---------------------------:|
| first-generation Gaudi | 3.80s | 0.308 images/s (batch size = 8) |
| Gaudi2 | 1.33s | 1.081 images/s (batch size = 8) |
For [Stable Diffusion v2.1](https://huggingface.co/stabilityai/stable-diffusion-2-1) on 768x768 images: | 12_2_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/habana.md | https://huggingface.co/docs/diffusers/en/optimization/habana/#benchmark | .md | For [Stable Diffusion v2.1](https://huggingface.co/stabilityai/stable-diffusion-2-1) on 768x768 images:
| | Latency (batch size = 1) | Throughput |
| ---------------------- |:------------------------:|:-------------------------------:|
| first-generation Gaudi | 10.2s | 0.108 images/s (batch size = 4) |
| Gaudi2 | 3.17s | 0.379 images/s (batch size = 8) | | 12_2_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/deepcache.md | https://huggingface.co/docs/diffusers/en/optimization/deepcache/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 13_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/deepcache.md | https://huggingface.co/docs/diffusers/en/optimization/deepcache/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 13_0_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/deepcache.md | https://huggingface.co/docs/diffusers/en/optimization/deepcache/#deepcache | .md | [DeepCache](https://huggingface.co/papers/2312.00858) accelerates [`StableDiffusionPipeline`] and [`StableDiffusionXLPipeline`] by strategically caching and reusing high-level features while efficiently updating low-level features by taking advantage of the U-Net architecture.
Start by installing [DeepCache](https://github.com/horseee/DeepCache):
```bash
pip install DeepCache
```
Then load and enable the [`DeepCacheSDHelper`](https://github.com/horseee/DeepCache#usage):
```diff
import torch | 13_1_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/deepcache.md | https://huggingface.co/docs/diffusers/en/optimization/deepcache/#deepcache | .md | ```
Then load and enable the [`DeepCacheSDHelper`](https://github.com/horseee/DeepCache#usage):
```diff
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained('stable-diffusion-v1-5/stable-diffusion-v1-5', torch_dtype=torch.float16).to("cuda") | 13_1_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/deepcache.md | https://huggingface.co/docs/diffusers/en/optimization/deepcache/#deepcache | .md | + from DeepCache import DeepCacheSDHelper
+ helper = DeepCacheSDHelper(pipe=pipe)
+ helper.set_params(
+ cache_interval=3,
+ cache_branch_id=0,
+ )
+ helper.enable() | 13_1_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/deepcache.md | https://huggingface.co/docs/diffusers/en/optimization/deepcache/#deepcache | .md | image = pipe("a photo of an astronaut on a moon").images[0]
```
The `set_params` method accepts two arguments: `cache_interval` and `cache_branch_id`. `cache_interval` means the frequency of feature caching, specified as the number of steps between each cache operation. `cache_branch_id` identifies which branch of the network (ordered from the shallowest to the deepest layer) is responsible for executing the caching processes. | 13_1_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/deepcache.md | https://huggingface.co/docs/diffusers/en/optimization/deepcache/#deepcache | .md | Opting for a lower `cache_branch_id` or a larger `cache_interval` can lead to faster inference speed at the expense of reduced image quality (ablation experiments of these two hyperparameters can be found in the [paper](https://arxiv.org/abs/2312.00858)). Once those arguments are set, use the `enable` or `disable` methods to activate or deactivate the `DeepCacheSDHelper`.
<div class="flex justify-center">
<img src="https://github.com/horseee/Diffusion_DeepCache/raw/master/static/images/example.png"> | 13_1_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/deepcache.md | https://huggingface.co/docs/diffusers/en/optimization/deepcache/#deepcache | .md | <img src="https://github.com/horseee/Diffusion_DeepCache/raw/master/static/images/example.png">
</div>
You can find more generated samples (original pipeline vs DeepCache) and the corresponding inference latency in the [WandB report](https://wandb.ai/horseee/DeepCache/runs/jwlsqqgt?workspace=user-horseee). The prompts are randomly selected from the [MS-COCO 2017](https://cocodataset.org/#home) dataset. | 13_1_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/deepcache.md | https://huggingface.co/docs/diffusers/en/optimization/deepcache/#benchmark | .md | We tested how much faster DeepCache accelerates [Stable Diffusion v2.1](https://huggingface.co/stabilityai/stable-diffusion-2-1) with 50 inference steps on an NVIDIA RTX A5000, using different configurations for resolution, batch size, cache interval (I), and cache branch (B).
| **Resolution** | **Batch size** | **Original** | **DeepCache(I=3, B=0)** | **DeepCache(I=5, B=0)** | **DeepCache(I=5, B=1)** | | 13_2_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/deepcache.md | https://huggingface.co/docs/diffusers/en/optimization/deepcache/#benchmark | .md | |----------------|----------------|--------------|-------------------------|-------------------------|-------------------------|
| 512| 8| 15.96| 6.88(2.32x)| 5.03(3.18x)| 7.27(2.20x)|
| | 4| 8.39| 3.60(2.33x)| 2.62(3.21x)| 3.75(2.24x)| | 13_2_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/deepcache.md | https://huggingface.co/docs/diffusers/en/optimization/deepcache/#benchmark | .md | | | 1| 2.61| 1.12(2.33x)| 0.81(3.24x)| 1.11(2.35x)|
| 768| 8| 43.58| 18.99(2.29x)| 13.96(3.12x)| 21.27(2.05x)|
| | 4| 22.24| 9.67(2.30x)| 7.10(3.13x)| 10.74(2.07x)| | 13_2_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/deepcache.md | https://huggingface.co/docs/diffusers/en/optimization/deepcache/#benchmark | .md | | | 1| 6.33| 2.72(2.33x)| 1.97(3.21x)| 2.98(2.12x)|
| 1024| 8| 101.95| 45.57(2.24x)| 33.72(3.02x)| 53.00(1.92x)|
| | 4| 49.25| 21.86(2.25x)| 16.19(3.04x)| 25.78(1.91x)| | 13_2_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/deepcache.md | https://huggingface.co/docs/diffusers/en/optimization/deepcache/#benchmark | .md | | | 1| 13.83| 6.07(2.28x)| 4.43(3.12x)| 7.15(1.93x)| | 13_2_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/open_vino.md | https://huggingface.co/docs/diffusers/en/optimization/open_vino/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 14_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/open_vino.md | https://huggingface.co/docs/diffusers/en/optimization/open_vino/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 14_0_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/open_vino.md | https://huggingface.co/docs/diffusers/en/optimization/open_vino/#openvino | .md | 🤗 [Optimum](https://github.com/huggingface/optimum-intel) provides Stable Diffusion pipelines compatible with OpenVINO to perform inference on a variety of Intel processors (see the [full list](https://docs.openvino.ai/latest/openvino_docs_OV_UG_supported_plugins_Supported_Devices.html) of supported devices).
You'll need to install 🤗 Optimum Intel with the `--upgrade-strategy eager` option to ensure [`optimum-intel`](https://github.com/huggingface/optimum-intel) is using the latest version:
```bash | 14_1_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/open_vino.md | https://huggingface.co/docs/diffusers/en/optimization/open_vino/#openvino | .md | ```bash
pip install --upgrade-strategy eager optimum["openvino"]
```
This guide will show you how to use the Stable Diffusion and Stable Diffusion XL (SDXL) pipelines with OpenVINO. | 14_1_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/open_vino.md | https://huggingface.co/docs/diffusers/en/optimization/open_vino/#stable-diffusion | .md | To load and run inference, use the [`~optimum.intel.OVStableDiffusionPipeline`]. If you want to load a PyTorch model and convert it to the OpenVINO format on-the-fly, set `export=True`:
```python
from optimum.intel import OVStableDiffusionPipeline
model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipeline = OVStableDiffusionPipeline.from_pretrained(model_id, export=True)
prompt = "sailing ship in storm by Rembrandt"
image = pipeline(prompt).images[0] | 14_2_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/open_vino.md | https://huggingface.co/docs/diffusers/en/optimization/open_vino/#stable-diffusion | .md | # Don't forget to save the exported model
pipeline.save_pretrained("openvino-sd-v1-5")
```
To further speed up inference, statically reshape the model. If you change any parameters, such as the output height or width, you’ll need to statically reshape your model again.
```python
# Define the shapes related to the inputs and desired outputs
batch_size, num_images, height, width = 1, 1, 512, 512 | 14_2_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/open_vino.md | https://huggingface.co/docs/diffusers/en/optimization/open_vino/#stable-diffusion | .md | # Statically reshape the model
pipeline.reshape(batch_size, height, width, num_images)
# Compile the model before inference
pipeline.compile() | 14_2_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/open_vino.md | https://huggingface.co/docs/diffusers/en/optimization/open_vino/#stable-diffusion | .md | image = pipeline(
prompt,
height=height,
width=width,
num_images_per_prompt=num_images,
).images[0]
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/intel/openvino/stable_diffusion_v1_5_sail_boat_rembrandt.png">
</div>
You can find more examples in the 🤗 Optimum [documentation](https://huggingface.co/docs/optimum/intel/inference#stable-diffusion), and Stable Diffusion is supported for text-to-image, image-to-image, and inpainting. | 14_2_3 |
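As a companion to the text-to-image example, a hedged sketch of the image-to-image variant; the `OVStableDiffusionImg2ImgPipeline` class comes from Optimum Intel, and the local image path is a placeholder:
```python
from optimum.intel import OVStableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipeline = OVStableDiffusionImg2ImgPipeline.from_pretrained(model_id, export=True)

# Placeholder input image; replace with your own file or URL.
init_image = load_image("path/to/init_image.png")
prompt = "sailing ship in storm by Rembrandt"
image = pipeline(prompt=prompt, image=init_image, strength=0.75).images[0]
```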
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/open_vino.md | https://huggingface.co/docs/diffusers/en/optimization/open_vino/#stable-diffusion-xl | .md | To load and run inference with SDXL, use the [`~optimum.intel.OVStableDiffusionXLPipeline`]:
```python
from optimum.intel import OVStableDiffusionXLPipeline | 14_3_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/open_vino.md | https://huggingface.co/docs/diffusers/en/optimization/open_vino/#stable-diffusion-xl | .md | model_id = "stabilityai/stable-diffusion-xl-base-1.0"
pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id)
prompt = "sailing ship in storm by Rembrandt"
image = pipeline(prompt).images[0]
```
To further speed up inference, [statically reshape](#stable-diffusion) the model as shown in the Stable Diffusion section. | 14_3_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/open_vino.md | https://huggingface.co/docs/diffusers/en/optimization/open_vino/#stable-diffusion-xl | .md | To further speed up inference, [statically reshape](#stable-diffusion) the model as shown in the Stable Diffusion section.
You can find more examples in the 🤗 Optimum [documentation](https://huggingface.co/docs/optimum/intel/inference#stable-diffusion-xl), and running SDXL in OpenVINO is supported for text-to-image and image-to-image. | 14_3_2 |
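For reference, a hedged sketch of statically reshaping the SDXL pipeline, mirroring the Stable Diffusion section; the 1024x1024 shape is illustrative and the `reshape`/`compile` calls are assumed to follow the same signature shown earlier:
```python
# Continuing from the SDXL pipeline and prompt defined above.
batch_size, num_images, height, width = 1, 1, 1024, 1024

# Fix the input shapes, then compile before running inference.
pipeline.reshape(batch_size, height, width, num_images)
pipeline.compile()

image = pipeline(
    prompt,
    height=height,
    width=width,
    num_images_per_prompt=num_images,
).images[0]
```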
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/fp16.md | https://huggingface.co/docs/diffusers/en/optimization/fp16/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 15_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/fp16.md | https://huggingface.co/docs/diffusers/en/optimization/fp16/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 15_0_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/fp16.md | https://huggingface.co/docs/diffusers/en/optimization/fp16/#speed-up-inference | .md | There are several ways to optimize Diffusers for inference speed, such as reducing the computational burden by lowering the data precision or using a lightweight distilled model. There are also memory-efficient attention implementations, [xFormers](xformers) and [scaled dot product attention](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) in PyTorch 2.0, that reduce memory usage which also indirectly speeds up inference. Different speed optimizations can be | 15_1_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/fp16.md | https://huggingface.co/docs/diffusers/en/optimization/fp16/#speed-up-inference | .md | in PyTorch 2.0, that reduce memory usage which also indirectly speeds up inference. Different speed optimizations can be stacked together to get the fastest inference times. | 15_1_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/fp16.md | https://huggingface.co/docs/diffusers/en/optimization/fp16/#speed-up-inference | .md | > [!TIP]
> Optimizing for inference speed or reduced memory usage can lead to improved performance in the other category, so you should try to optimize for both whenever you can. This guide focuses on inference speed, but you can learn more about lowering memory usage in the [Reduce memory usage](memory) guide.
The inference times below are obtained from generating a single 512x512 image from the prompt "a photo of an astronaut riding a horse on mars" with 50 DDIM steps on an NVIDIA A100. | 15_1_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/fp16.md | https://huggingface.co/docs/diffusers/en/optimization/fp16/#speed-up-inference | .md | | setup | latency | speed-up |
|----------|---------|----------|
| baseline | 5.27s | x1 |
| tf32 | 4.14s | x1.27 |
| fp16 | 3.51s | x1.50 |
| combined | 3.41s | x1.54 | | 15_1_3 |
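The "combined" row stacks the optimizations covered below; a minimal sketch of enabling TF32 and half precision together (model id as used elsewhere in this guide):
```python
import torch
from diffusers import DiffusionPipeline

# Allow TF32 for matrix multiplications on Ampere and later GPUs.
torch.backends.cuda.matmul.allow_tf32 = True

# Load the weights directly in half precision.
pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda")

image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=50).images[0]
```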
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/fp16.md | https://huggingface.co/docs/diffusers/en/optimization/fp16/#tensorfloat-32 | .md | On Ampere and later CUDA devices, matrix multiplications and convolutions can use the [TensorFloat-32 (tf32)](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/) mode for faster, but slightly less accurate computations. By default, PyTorch enables tf32 mode for convolutions but not matrix multiplications. Unless your network requires full float32 precision, we recommend enabling tf32 for matrix multiplications. It can significantly speed up computations with typically negligible loss | 15_2_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/fp16.md | https://huggingface.co/docs/diffusers/en/optimization/fp16/#tensorfloat-32 | .md | recommend enabling tf32 for matrix multiplications. It can significantly speed up computations with typically negligible loss in numerical accuracy. | 15_2_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/fp16.md | https://huggingface.co/docs/diffusers/en/optimization/fp16/#tensorfloat-32 | .md | ```python
import torch | 15_2_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/fp16.md | https://huggingface.co/docs/diffusers/en/optimization/fp16/#tensorfloat-32 | .md | torch.backends.cuda.matmul.allow_tf32 = True
```
Learn more about tf32 in the [Mixed precision training](https://huggingface.co/docs/transformers/en/perf_train_gpu_one#tf32) guide. | 15_2_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/fp16.md | https://huggingface.co/docs/diffusers/en/optimization/fp16/#half-precision-weights | .md | To save GPU memory and get more speed, set `torch_dtype=torch.float16` to load and run the model weights directly with half-precision weights.
```Python
import torch
from diffusers import DiffusionPipeline | 15_3_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/fp16.md | https://huggingface.co/docs/diffusers/en/optimization/fp16/#half-precision-weights | .md | pipe = DiffusionPipeline.from_pretrained(
"stable-diffusion-v1-5/stable-diffusion-v1-5",
torch_dtype=torch.float16,
use_safetensors=True,
)
pipe = pipe.to("cuda")
```
> [!WARNING]
> Don't use [torch.autocast](https://pytorch.org/docs/stable/amp.html#torch.autocast) in any of the pipelines as it can lead to black images and is always slower than pure float16 precision. | 15_3_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/fp16.md | https://huggingface.co/docs/diffusers/en/optimization/fp16/#distilled-model | .md | You could also use a distilled Stable Diffusion model and autoencoder to speed up inference. During distillation, many of the UNet's residual and attention blocks are shed to reduce the model size by 51% and improve latency on CPU/GPU by 43%. The distilled model is faster and uses less memory while generating images of comparable quality to the full Stable Diffusion model.
> [!TIP] | 15_4_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/fp16.md | https://huggingface.co/docs/diffusers/en/optimization/fp16/#distilled-model | .md | > [!TIP]
> Read the [Open-sourcing Knowledge Distillation Code and Weights of SD-Small and SD-Tiny](https://huggingface.co/blog/sd_distillation) blog post to learn more about how knowledge distillation training works to produce a faster, smaller, and cheaper generative model. | 15_4_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/fp16.md | https://huggingface.co/docs/diffusers/en/optimization/fp16/#distilled-model | .md | The inference times below are obtained from generating 4 images from the prompt "a photo of an astronaut riding a horse on mars" with 25 PNDM steps on an NVIDIA A100. Each generation is repeated 3 times with the distilled Stable Diffusion v1.4 model by [Nota AI](https://hf.co/nota-ai).
| setup | latency | speed-up |
|------------------------------|---------|----------|
| baseline | 6.37s | x1 |
| distilled | 4.18s | x1.52 | | 15_4_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/fp16.md | https://huggingface.co/docs/diffusers/en/optimization/fp16/#distilled-model | .md | | baseline | 6.37s | x1 |
| distilled | 4.18s | x1.52 |
| distilled + tiny autoencoder | 3.83s | x1.66 |
Let's load the distilled Stable Diffusion model and compare it against the original Stable Diffusion model.
```py
from diffusers import StableDiffusionPipeline
import torch | 15_4_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/fp16.md | https://huggingface.co/docs/diffusers/en/optimization/fp16/#distilled-model | .md | distilled = StableDiffusionPipeline.from_pretrained(
"nota-ai/bk-sdm-small", torch_dtype=torch.float16, use_safetensors=True,
).to("cuda")
prompt = "a golden vase with different flowers"
generator = torch.manual_seed(2023)
image = distilled("a golden vase with different flowers", num_inference_steps=25, generator=generator).images[0]
image
```
<div class="flex gap-4">
<div> | 15_4_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/fp16.md | https://huggingface.co/docs/diffusers/en/optimization/fp16/#distilled-model | .md | image
```
<div class="flex gap-4">
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/original_sd.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">original Stable Diffusion</figcaption>
</div>
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/distilled_sd.png"/> | 15_4_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/fp16.md | https://huggingface.co/docs/diffusers/en/optimization/fp16/#distilled-model | .md | <figcaption class="mt-2 text-center text-sm text-gray-500">distilled Stable Diffusion</figcaption>
</div>
</div> | 15_4_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/fp16.md | https://huggingface.co/docs/diffusers/en/optimization/fp16/#tiny-autoencoder | .md | To speed inference up even more, replace the autoencoder with a [distilled version](https://huggingface.co/sayakpaul/taesdxl-diffusers) of it.
```py
import torch
from diffusers import AutoencoderTiny, StableDiffusionPipeline
distilled = StableDiffusionPipeline.from_pretrained(
"nota-ai/bk-sdm-small", torch_dtype=torch.float16, use_safetensors=True,
).to("cuda")
distilled.vae = AutoencoderTiny.from_pretrained(
"sayakpaul/taesd-diffusers", torch_dtype=torch.float16, use_safetensors=True,
).to("cuda") | 15_5_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/fp16.md | https://huggingface.co/docs/diffusers/en/optimization/fp16/#tiny-autoencoder | .md | prompt = "a golden vase with different flowers"
generator = torch.manual_seed(2023)
image = distilled("a golden vase with different flowers", num_inference_steps=25, generator=generator).images[0]
image
```
<div class="flex justify-center">
<div>
<img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/distilled_sd_vae.png" />
<figcaption class="mt-2 text-center text-sm text-gray-500">distilled Stable Diffusion + Tiny AutoEncoder</figcaption> | 15_5_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/fp16.md | https://huggingface.co/docs/diffusers/en/optimization/fp16/#tiny-autoencoder | .md | <figcaption class="mt-2 text-center text-sm text-gray-500">distilled Stable Diffusion + Tiny AutoEncoder</figcaption>
</div>
</div>
More tiny autoencoder models for other Stable Diffusion models, like Stable Diffusion 3, are available from [madebyollin](https://huggingface.co/madebyollin). | 15_5_2 |
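As an illustration of that pointer, a hedged sketch of pairing a tiny autoencoder with Stable Diffusion 3; the `madebyollin/taesd3` repository id and the SD3 checkpoint name are assumptions, not taken from this guide:
```py
import torch
from diffusers import AutoencoderTiny, StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16,
).to("cuda")

# Swap in the tiny autoencoder (assumed repo id) to speed up latent decoding.
pipe.vae = AutoencoderTiny.from_pretrained(
    "madebyollin/taesd3", torch_dtype=torch.float16,
).to("cuda")

image = pipe("a golden vase with different flowers", num_inference_steps=25).images[0]
```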
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/xdit.md | https://huggingface.co/docs/diffusers/en/optimization/xdit/#xdit | .md | [xDiT](https://github.com/xdit-project/xDiT) is an inference engine designed for the large scale parallel deployment of Diffusion Transformers (DiTs). xDiT provides a suite of efficient parallel approaches for Diffusion Models, as well as GPU kernel accelerations. | 16_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/xdit.md | https://huggingface.co/docs/diffusers/en/optimization/xdit/#xdit | .md | There are four parallel methods supported in xDiT, including [Unified Sequence Parallelism](https://arxiv.org/abs/2405.07719), [PipeFusion](https://arxiv.org/abs/2405.14430), CFG parallelism and data parallelism. The four parallel methods in xDiT can be configured in a hybrid manner, optimizing communication patterns to best suit the underlying network hardware. | 16_0_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/xdit.md | https://huggingface.co/docs/diffusers/en/optimization/xdit/#xdit | .md | Optimization orthogonal to parallelization focuses on accelerating single GPU performance. In addition to utilizing well-known Attention optimization libraries, we leverage compilation acceleration technologies such as torch.compile and onediff.
The overview of xDiT is shown as follows.
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/xDiT/documentation-images/resolve/main/methods/xdit_overview.png">
</div>
You can install xDiT using the following command:
```bash | 16_0_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/xdit.md | https://huggingface.co/docs/diffusers/en/optimization/xdit/#xdit | .md | </div>
You can install xDiT using the following command:
```bash
pip install xfuser
```
Here's an example of using xDiT to accelerate inference of a Diffusers model.
```diff
import torch
from diffusers import StableDiffusion3Pipeline | 16_0_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/xdit.md | https://huggingface.co/docs/diffusers/en/optimization/xdit/#xdit | .md | from xfuser import xFuserArgs, xDiTParallel
from xfuser.config import FlexibleArgumentParser
from xfuser.core.distributed import get_world_group
def main():
+ parser = FlexibleArgumentParser(description="xFuser Arguments")
+ args = xFuserArgs.add_cli_args(parser).parse_args()
+ engine_args = xFuserArgs.from_cli_args(args)
+ engine_config, input_config = engine_args.create_config() | 16_0_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/xdit.md | https://huggingface.co/docs/diffusers/en/optimization/xdit/#xdit | .md | local_rank = get_world_group().local_rank
pipe = StableDiffusion3Pipeline.from_pretrained(
pretrained_model_name_or_path=engine_config.model_config.model,
torch_dtype=torch.float16,
).to(f"cuda:{local_rank}")
# do anything you want with pipeline here
+ pipe = xDiTParallel(pipe, engine_config, input_config) | 16_0_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/xdit.md | https://huggingface.co/docs/diffusers/en/optimization/xdit/#xdit | .md | # do anything you want with pipeline here
+ pipe = xDiTParallel(pipe, engine_config, input_config)
pipe(
height=input_config.height,
width=input_config.width,
prompt=input_config.prompt,
num_inference_steps=input_config.num_inference_steps,
output_type=input_config.output_type,
generator=torch.Generator(device="cuda").manual_seed(input_config.seed),
)
+ if input_config.output_type == "pil":
+ pipe.save("results", "stable_diffusion_3")
if __name__ == "__main__":
main() | 16_0_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/xdit.md | https://huggingface.co/docs/diffusers/en/optimization/xdit/#xdit | .md | ```
As you can see, we only need to use xFuserArgs from xDiT to get configuration parameters, and pass these parameters along with the pipeline object from the Diffusers library into xDiTParallel to complete the parallelization of a specific pipeline in Diffusers.
xDiT runtime parameters can be viewed in the command line using `-h`, and you can refer to this [usage](https://github.com/xdit-project/xDiT?tab=readme-ov-file#2-usage) example for more details. | 16_0_7 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/xdit.md | https://huggingface.co/docs/diffusers/en/optimization/xdit/#xdit | .md | xDiT needs to be launched using torchrun to support its multi-node, multi-GPU parallel capabilities. For example, the following command can be used for 8-GPU parallel inference:
```bash
torchrun --nproc_per_node=8 ./inference.py --model models/FLUX.1-dev --data_parallel_degree 2 --ulysses_degree 2 --ring_degree 2 --prompt "A snowy mountain" "A small dog" --num_inference_steps 50
``` | 16_0_8 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/xdit.md | https://huggingface.co/docs/diffusers/en/optimization/xdit/#supported-models | .md | A subset of Diffusers models are supported in xDiT, such as Flux.1, Stable Diffusion 3, etc. The latest supported models can be found [here](https://github.com/xdit-project/xDiT?tab=readme-ov-file#-supported-dits). | 16_1_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/xdit.md | https://huggingface.co/docs/diffusers/en/optimization/xdit/#benchmark | .md | We tested different models on various machines, and here is some of the benchmark data. | 16_2_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/optimization/xdit.md | https://huggingface.co/docs/diffusers/en/optimization/xdit/#flux1-schnell | .md | <div class="flex justify-center">
<img src="https://huggingface.co/datasets/xDiT/documentation-images/resolve/main/performance/flux/Flux-2k-L40.png">
</div>
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/xDiT/documentation-images/resolve/main/performance/flux/Flux-2K-A100.png">
</div> | 16_3_0 |