source | url | file_type | chunk | chunk_id
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/tcd.md | https://huggingface.co/docs/diffusers/en/api/schedulers/tcd/#tcdscheduler | .md | (Algorithm 1) in the [Consistency Models](https://huggingface.co/papers/2303.01469), Strategic Stochastic Sampling specifically tailored for the trajectory consistency function. | 258_1_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/tcd.md | https://huggingface.co/docs/diffusers/en/api/schedulers/tcd/#tcdscheduler | .md | The abstract from the paper is: | 258_1_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/tcd.md | https://huggingface.co/docs/diffusers/en/api/schedulers/tcd/#tcdscheduler | .md | *Latent Consistency Model (LCM) extends the Consistency Model to the latent space and leverages the guided consistency distillation technique to achieve impressive performance in accelerating text-to-image synthesis. However, we observed that LCM struggles to generate images with both clarity and detailed intricacy. To address this limitation, we initially delve into and elucidate the underlying causes. Our investigation identifies that the primary issue stems from errors in three distinct areas. | 258_1_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/tcd.md | https://huggingface.co/docs/diffusers/en/api/schedulers/tcd/#tcdscheduler | .md | elucidate the underlying causes. Our investigation identifies that the primary issue stems from errors in three distinct areas. Consequently, we introduce Trajectory Consistency Distillation (TCD), which encompasses trajectory consistency function and strategic stochastic sampling. The trajectory consistency function diminishes the distillation errors by broadening the scope of the self-consistency boundary condition and endowing the TCD with the ability to accurately trace the entire trajectory of the | 258_1_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/tcd.md | https://huggingface.co/docs/diffusers/en/api/schedulers/tcd/#tcdscheduler | .md | the self-consistency boundary condition and endowing the TCD with the ability to accurately trace the entire trajectory of the Probability Flow ODE. Additionally, strategic stochastic sampling is specifically designed to circumvent the accumulated errors inherent in multi-step consistency sampling, which is meticulously tailored to complement the TCD model. Experiments demonstrate that TCD not only significantly enhances image quality at low NFEs but also yields more detailed results compared to the | 258_1_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/tcd.md | https://huggingface.co/docs/diffusers/en/api/schedulers/tcd/#tcdscheduler | .md | that TCD not only significantly enhances image quality at low NFEs but also yields more detailed results compared to the teacher model at high NFEs.* | 258_1_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/tcd.md | https://huggingface.co/docs/diffusers/en/api/schedulers/tcd/#tcdscheduler | .md | The original codebase can be found at [jabir-zheng/TCD](https://github.com/jabir-zheng/TCD). | 258_1_7 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/tcd.md | https://huggingface.co/docs/diffusers/en/api/schedulers/tcd/#tcdscheduler | .md | TCDScheduler
`TCDScheduler` incorporates the `Strategic Stochastic Sampling` introduced by the paper `Trajectory Consistency
Distillation`, extending the original Multistep Consistency Sampling to enable unrestricted trajectory traversal.
This code is based on the official [TCD repository](https://github.com/jabir-zheng/TCD).
This model inherits from [`SchedulerMixin`] and [`ConfigMixin`]. [`~ConfigMixin`] takes care of storing all config | 258_2_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/tcd.md | https://huggingface.co/docs/diffusers/en/api/schedulers/tcd/#tcdscheduler | .md | This model inherits from [`SchedulerMixin`] and [`ConfigMixin`]. [`~ConfigMixin`] takes care of storing all config
attributes that are passed in the scheduler's `__init__` function, such as `num_train_timesteps`. They can be
accessed via `scheduler.config.num_train_timesteps`. [`SchedulerMixin`] provides general loading and saving
functionality via the [`SchedulerMixin.save_pretrained`] and [`~SchedulerMixin.from_pretrained`] functions.
Args:
num_train_timesteps (`int`, defaults to 1000): | 258_2_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/tcd.md | https://huggingface.co/docs/diffusers/en/api/schedulers/tcd/#tcdscheduler | .md | Args:
num_train_timesteps (`int`, defaults to 1000):
The number of diffusion steps to train the model.
beta_start (`float`, defaults to 0.0001):
The starting `beta` value of inference.
beta_end (`float`, defaults to 0.02):
The final `beta` value.
beta_schedule (`str`, defaults to `"linear"`):
The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
`linear`, `scaled_linear`, or `squaredcos_cap_v2`.
trained_betas (`np.ndarray`, *optional*): | 258_2_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/tcd.md | https://huggingface.co/docs/diffusers/en/api/schedulers/tcd/#tcdscheduler | .md | `linear`, `scaled_linear`, or `squaredcos_cap_v2`.
trained_betas (`np.ndarray`, *optional*):
Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
original_inference_steps (`int`, *optional*, defaults to 50):
The default number of inference steps used to generate a linearly-spaced timestep schedule, from which we
will ultimately take `num_inference_steps` evenly spaced timesteps to form the final timestep schedule.
clip_sample (`bool`, defaults to `True`): | 258_2_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/tcd.md | https://huggingface.co/docs/diffusers/en/api/schedulers/tcd/#tcdscheduler | .md | clip_sample (`bool`, defaults to `True`):
Clip the predicted sample for numerical stability.
clip_sample_range (`float`, defaults to 1.0):
The maximum magnitude for sample clipping. Valid only when `clip_sample=True`.
set_alpha_to_one (`bool`, defaults to `True`):
Each diffusion step uses the alphas product value at that step and at the previous one. For the final step
there is no previous alpha. When this option is `True` the previous alpha product is fixed to `1`, | 258_2_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/tcd.md | https://huggingface.co/docs/diffusers/en/api/schedulers/tcd/#tcdscheduler | .md | there is no previous alpha. When this option is `True` the previous alpha product is fixed to `1`,
otherwise it uses the alpha value at step 0.
steps_offset (`int`, defaults to 0):
An offset added to the inference steps, as required by some model families.
prediction_type (`str`, defaults to `epsilon`, *optional*):
Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process), | 258_2_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/tcd.md | https://huggingface.co/docs/diffusers/en/api/schedulers/tcd/#tcdscheduler | .md | Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
`sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of [Imagen
Video](https://imagen.research.google/video/paper.pdf) paper).
thresholding (`bool`, defaults to `False`):
Whether to use the "dynamic thresholding" method. This is unsuitable for latent-space diffusion models such
as Stable Diffusion.
dynamic_thresholding_ratio (`float`, defaults to 0.995): | 258_2_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/tcd.md | https://huggingface.co/docs/diffusers/en/api/schedulers/tcd/#tcdscheduler | .md | as Stable Diffusion.
dynamic_thresholding_ratio (`float`, defaults to 0.995):
The ratio for the dynamic thresholding method. Valid only when `thresholding=True`.
sample_max_value (`float`, defaults to 1.0):
The threshold value for dynamic thresholding. Valid only when `thresholding=True`.
timestep_spacing (`str`, defaults to `"leading"`):
The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and | 258_2_7 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/tcd.md | https://huggingface.co/docs/diffusers/en/api/schedulers/tcd/#tcdscheduler | .md | The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information.
timestep_scaling (`float`, defaults to 10.0):
The factor the timesteps will be multiplied by when calculating the consistency model boundary conditions
`c_skip` and `c_out`. Increasing this will decrease the approximation error (although the approximation
error at the default of `10.0` is already pretty small). | 258_2_8 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/tcd.md | https://huggingface.co/docs/diffusers/en/api/schedulers/tcd/#tcdscheduler | .md | error at the default of `10.0` is already pretty small).
rescale_betas_zero_snr (`bool`, defaults to `False`):
Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and
dark samples instead of limiting it to samples with medium brightness. Loosely related to
[`--offset_noise`](https://github.com/huggingface/diffusers/blob/74fd735eb073eb1d774b1ab4154a0876eb82f055/examples/dreambooth/train_dreambooth.py#L506). | 258_2_9 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/tcd.md | https://huggingface.co/docs/diffusers/en/api/schedulers/tcd/#tcdscheduleroutput | .md | TCDSchedulerOutput
Output class for the scheduler's `step` function output.
Args:
prev_sample (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images):
Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
denoising loop.
pred_noised_sample (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images):
The predicted noised sample `(x_{s})` based on the model output from the current timestep. | 258_3_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/dpm_sde.md | https://huggingface.co/docs/diffusers/en/api/schedulers/dpm_sde/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 259_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/dpm_sde.md | https://huggingface.co/docs/diffusers/en/api/schedulers/dpm_sde/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 259_0_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/dpm_sde.md | https://huggingface.co/docs/diffusers/en/api/schedulers/dpm_sde/#dpmsolversdescheduler | .md | The `DPMSolverSDEScheduler` is inspired by the stochastic sampler from the [Elucidating the Design Space of Diffusion-Based Generative Models](https://huggingface.co/papers/2206.00364) paper, and the scheduler is ported from and created by [Katherine Crowson](https://github.com/crowsonkb/). | 259_1_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/dpm_sde.md | https://huggingface.co/docs/diffusers/en/api/schedulers/dpm_sde/#dpmsolversdescheduler | .md | DPMSolverSDEScheduler | 259_2_0 |
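As a hedged sketch (not part of the original page), the scheduler is typically swapped into an existing pipeline via `from_config`; the checkpoint id below is an assumption, and the SDE solver relies on the `torchsde` package.

```py
import torch
from diffusers import DiffusionPipeline, DPMSolverSDEScheduler

# Assumed checkpoint id; any Stable Diffusion-style pipeline works the same way.
pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DPMSolverSDEScheduler.from_config(pipe.scheduler.config)

image = pipe("an astronaut riding a horse on mars", num_inference_steps=25).images[0]
```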
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/dpm_sde.md | https://huggingface.co/docs/diffusers/en/api/schedulers/dpm_sde/#scheduleroutput | .md | SchedulerOutput
Base class for the output of a scheduler's `step` function.
Args:
prev_sample (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images):
Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
denoising loop. | 259_3_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 260_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 260_0_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim/#ddimscheduler | .md | [Denoising Diffusion Implicit Models](https://huggingface.co/papers/2010.02502) (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon.
The abstract from the paper is:
*Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. | 260_1_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim/#ddimscheduler | .md | To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models
with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process.
We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. | 260_1_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim/#ddimscheduler | .md | We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.*
The original codebase of this paper can be found at [ermongroup/ddim](https://github.com/ermongroup/ddim), and you can contact the author on [tsong.me](https://tsong.me/). | 260_1_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim/#tips | .md | The paper [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) claims that a mismatch between the training and inference settings leads to suboptimal inference generation results for Stable Diffusion. To fix this, the authors propose:
<Tip warning={true}>
🧪 This is an experimental feature!
</Tip>
1. rescale the noise schedule to enforce zero terminal signal-to-noise ratio (SNR)
```py | 260_2_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim/#tips | .md | </Tip>
1. rescale the noise schedule to enforce zero terminal signal-to-noise ratio (SNR)
```py
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, rescale_betas_zero_snr=True)
``` | 260_2_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim/#tips | .md | ```py
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, rescale_betas_zero_snr=True)
```
2. train a model with `v_prediction` (add the following argument to the [train_text_to_image.py](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) or [train_text_to_image_lora.py](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py) scripts)
```bash
--prediction_type="v_prediction"
``` | 260_2_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim/#tips | .md | ```bash
--prediction_type="v_prediction"
```
3. change the sampler to always start from the last timestep
```py
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")
```
4. rescale classifier-free guidance to prevent over-exposure
```py
image = pipe(prompt, guidance_rescale=0.7).images[0]
```
For example:
```py
from diffusers import DiffusionPipeline, DDIMScheduler
import torch | 260_2_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim/#tips | .md | pipe = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", torch_dtype=torch.float16)
pipe.scheduler = DDIMScheduler.from_config(
pipe.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing"
)
pipe.to("cuda")
prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k"
image = pipe(prompt, guidance_rescale=0.7).images[0]
image
``` | 260_2_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim/#ddimscheduler | .md | DDIMScheduler
`DDIMScheduler` extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with
non-Markovian guidance.
This model inherits from [`SchedulerMixin`] and [`ConfigMixin`]. Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.
Args:
num_train_timesteps (`int`, defaults to 1000):
The number of diffusion steps to train the model.
beta_start (`float`, defaults to 0.0001): | 260_3_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim/#ddimscheduler | .md | The number of diffusion steps to train the model.
beta_start (`float`, defaults to 0.0001):
The starting `beta` value of inference.
beta_end (`float`, defaults to 0.02):
The final `beta` value.
beta_schedule (`str`, defaults to `"linear"`):
The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
`linear`, `scaled_linear`, or `squaredcos_cap_v2`.
trained_betas (`np.ndarray`, *optional*): | 260_3_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim/#ddimscheduler | .md | `linear`, `scaled_linear`, or `squaredcos_cap_v2`.
trained_betas (`np.ndarray`, *optional*):
Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
clip_sample (`bool`, defaults to `True`):
Clip the predicted sample for numerical stability.
clip_sample_range (`float`, defaults to 1.0):
The maximum magnitude for sample clipping. Valid only when `clip_sample=True`.
set_alpha_to_one (`bool`, defaults to `True`): | 260_3_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim/#ddimscheduler | .md | The maximum magnitude for sample clipping. Valid only when `clip_sample=True`.
set_alpha_to_one (`bool`, defaults to `True`):
Each diffusion step uses the alphas product value at that step and at the previous one. For the final step
there is no previous alpha. When this option is `True` the previous alpha product is fixed to `1`,
otherwise it uses the alpha value at step 0.
steps_offset (`int`, defaults to 0):
An offset added to the inference steps, as required by some model families. | 260_3_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim/#ddimscheduler | .md | steps_offset (`int`, defaults to 0):
An offset added to the inference steps, as required by some model families.
prediction_type (`str`, defaults to `epsilon`, *optional*):
Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
`sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of [Imagen
Video](https://imagen.research.google/video/paper.pdf) paper).
thresholding (`bool`, defaults to `False`): | 260_3_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim/#ddimscheduler | .md | Video](https://imagen.research.google/video/paper.pdf) paper).
thresholding (`bool`, defaults to `False`):
Whether to use the "dynamic thresholding" method. This is unsuitable for latent-space diffusion models such
as Stable Diffusion.
dynamic_thresholding_ratio (`float`, defaults to 0.995):
The ratio for the dynamic thresholding method. Valid only when `thresholding=True`.
sample_max_value (`float`, defaults to 1.0):
The threshold value for dynamic thresholding. Valid only when `thresholding=True`. | 260_3_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim/#ddimscheduler | .md | sample_max_value (`float`, defaults to 1.0):
The threshold value for dynamic thresholding. Valid only when `thresholding=True`.
timestep_spacing (`str`, defaults to `"leading"`):
The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information.
rescale_betas_zero_snr (`bool`, defaults to `False`): | 260_3_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim/#ddimscheduler | .md | rescale_betas_zero_snr (`bool`, defaults to `False`):
Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and
dark samples instead of limiting it to samples with medium brightness. Loosely related to
[`--offset_noise`](https://github.com/huggingface/diffusers/blob/74fd735eb073eb1d774b1ab4154a0876eb82f055/examples/dreambooth/train_dreambooth.py#L506). | 260_3_7 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim/#ddimscheduleroutput | .md | DDIMSchedulerOutput
Output class for the scheduler's `step` function output.
Args:
prev_sample (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images):
Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
denoising loop.
pred_original_sample (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images):
The predicted denoised sample `(x_{0})` based on the model output from the current timestep. | 260_4_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim/#ddimscheduleroutput | .md | The predicted denoised sample `(x_{0})` based on the model output from the current timestep.
`pred_original_sample` can be used to preview progress or for guidance. | 260_4_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/lms_discrete.md | https://huggingface.co/docs/diffusers/en/api/schedulers/lms_discrete/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 261_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/lms_discrete.md | https://huggingface.co/docs/diffusers/en/api/schedulers/lms_discrete/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 261_0_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/lms_discrete.md | https://huggingface.co/docs/diffusers/en/api/schedulers/lms_discrete/#lmsdiscretescheduler | .md | `LMSDiscreteScheduler` is a linear multistep scheduler for discrete beta schedules. The scheduler is ported from and created by [Katherine Crowson](https://github.com/crowsonkb/), and the original implementation can be found at [crowsonkb/k-diffusion](https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L181). | 261_1_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/lms_discrete.md | https://huggingface.co/docs/diffusers/en/api/schedulers/lms_discrete/#lmsdiscretescheduler | .md | LMSDiscreteScheduler
A linear multistep scheduler for discrete beta schedules.
This model inherits from [`SchedulerMixin`] and [`ConfigMixin`]. Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.
Args:
num_train_timesteps (`int`, defaults to 1000):
The number of diffusion steps to train the model.
beta_start (`float`, defaults to 0.0001):
The starting `beta` value of inference.
beta_end (`float`, defaults to 0.02): | 261_2_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/lms_discrete.md | https://huggingface.co/docs/diffusers/en/api/schedulers/lms_discrete/#lmsdiscretescheduler | .md | beta_start (`float`, defaults to 0.0001):
The starting `beta` value of inference.
beta_end (`float`, defaults to 0.02):
The final `beta` value.
beta_schedule (`str`, defaults to `"linear"`):
The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
`linear` or `scaled_linear`.
trained_betas (`np.ndarray`, *optional*):
Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`. | 261_2_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/lms_discrete.md | https://huggingface.co/docs/diffusers/en/api/schedulers/lms_discrete/#lmsdiscretescheduler | .md | Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
use_karras_sigmas (`bool`, *optional*, defaults to `False`):
Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If `True`,
the sigmas are determined according to a sequence of noise levels {σi}.
use_exponential_sigmas (`bool`, *optional*, defaults to `False`):
Whether to use exponential sigmas for step sizes in the noise schedule during the sampling process. | 261_2_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/lms_discrete.md | https://huggingface.co/docs/diffusers/en/api/schedulers/lms_discrete/#lmsdiscretescheduler | .md | Whether to use exponential sigmas for step sizes in the noise schedule during the sampling process.
use_beta_sigmas (`bool`, *optional*, defaults to `False`):
Whether to use beta sigmas for step sizes in the noise schedule during the sampling process. Refer to [Beta
Sampling is All You Need](https://huggingface.co/papers/2407.12173) for more information.
prediction_type (`str`, defaults to `epsilon`, *optional*): | 261_2_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/lms_discrete.md | https://huggingface.co/docs/diffusers/en/api/schedulers/lms_discrete/#lmsdiscretescheduler | .md | prediction_type (`str`, defaults to `epsilon`, *optional*):
Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
`sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of [Imagen
Video](https://imagen.research.google/video/paper.pdf) paper).
timestep_spacing (`str`, defaults to `"linspace"`):
The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and | 261_2_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/lms_discrete.md | https://huggingface.co/docs/diffusers/en/api/schedulers/lms_discrete/#lmsdiscretescheduler | .md | The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information.
steps_offset (`int`, defaults to 0):
An offset added to the inference steps, as required by some model families. | 261_2_5 |
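A hedged usage sketch (checkpoint id assumed) showing the scheduler swapped into a pipeline with Karras sigmas enabled:

```py
import torch
from diffusers import DiffusionPipeline, LMSDiscreteScheduler

pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16  # assumed checkpoint
).to("cuda")
pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)

image = pipe("a watercolor painting of a fox", num_inference_steps=30).images[0]
```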
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/lms_discrete.md | https://huggingface.co/docs/diffusers/en/api/schedulers/lms_discrete/#lmsdiscretescheduleroutput | .md | LMSDiscreteSchedulerOutput
Output class for the scheduler's `step` function output.
Args:
prev_sample (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images):
Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
denoising loop.
pred_original_sample (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images):
The predicted denoised sample `(x_{0})` based on the model output from the current timestep. | 261_3_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/lms_discrete.md | https://huggingface.co/docs/diffusers/en/api/schedulers/lms_discrete/#lmsdiscretescheduleroutput | .md | The predicted denoised sample `(x_{0})` based on the model output from the current timestep.
`pred_original_sample` can be used to preview progress or for guidance. | 261_3_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/cm_stochastic_iterative.md | https://huggingface.co/docs/diffusers/en/api/schedulers/cm_stochastic_iterative/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 262_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/cm_stochastic_iterative.md | https://huggingface.co/docs/diffusers/en/api/schedulers/cm_stochastic_iterative/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 262_0_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/cm_stochastic_iterative.md | https://huggingface.co/docs/diffusers/en/api/schedulers/cm_stochastic_iterative/#cmstochasticiterativescheduler | .md | [Consistency Models](https://huggingface.co/papers/2303.01469) by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever introduced a multistep and onestep scheduler (Algorithm 1) that is capable of generating good samples in one or a small number of steps.
The abstract from the paper is: | 262_1_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/cm_stochastic_iterative.md | https://huggingface.co/docs/diffusers/en/api/schedulers/cm_stochastic_iterative/#cmstochasticiterativescheduler | .md | *Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image | 262_1_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/cm_stochastic_iterative.md | https://huggingface.co/docs/diffusers/en/api/schedulers/cm_stochastic_iterative/#cmstochasticiterativescheduler | .md | still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, | 262_1_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/cm_stochastic_iterative.md | https://huggingface.co/docs/diffusers/en/api/schedulers/cm_stochastic_iterative/#cmstochasticiterativescheduler | .md | we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64 and LSUN 256x256.* | 262_1_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/cm_stochastic_iterative.md | https://huggingface.co/docs/diffusers/en/api/schedulers/cm_stochastic_iterative/#cmstochasticiterativescheduler | .md | The original codebase can be found at [openai/consistency_models](https://github.com/openai/consistency_models). | 262_1_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/cm_stochastic_iterative.md | https://huggingface.co/docs/diffusers/en/api/schedulers/cm_stochastic_iterative/#cmstochasticiterativescheduler | .md | CMStochasticIterativeScheduler
Multistep and onestep sampling for consistency models.
This model inherits from [`SchedulerMixin`] and [`ConfigMixin`]. Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.
Args:
num_train_timesteps (`int`, defaults to 40):
The number of diffusion steps to train the model.
sigma_min (`float`, defaults to 0.002): | 262_2_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/cm_stochastic_iterative.md | https://huggingface.co/docs/diffusers/en/api/schedulers/cm_stochastic_iterative/#cmstochasticiterativescheduler | .md | The number of diffusion steps to train the model.
sigma_min (`float`, defaults to 0.002):
Minimum noise magnitude in the sigma schedule. Defaults to 0.002 from the original implementation.
sigma_max (`float`, defaults to 80.0):
Maximum noise magnitude in the sigma schedule. Defaults to 80.0 from the original implementation.
sigma_data (`float`, defaults to 0.5):
The standard deviation of the data distribution from the EDM | 262_2_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/cm_stochastic_iterative.md | https://huggingface.co/docs/diffusers/en/api/schedulers/cm_stochastic_iterative/#cmstochasticiterativescheduler | .md | sigma_data (`float`, defaults to 0.5):
The standard deviation of the data distribution from the EDM
[paper](https://huggingface.co/papers/2206.00364). Defaults to 0.5 from the original implementation.
s_noise (`float`, defaults to 1.0):
The amount of additional noise to counteract loss of detail during sampling. A reasonable range is [1.000,
1.011]. Defaults to 1.0 from the original implementation.
rho (`float`, defaults to 7.0):
The parameter for calculating the Karras sigma schedule from the EDM | 262_2_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/cm_stochastic_iterative.md | https://huggingface.co/docs/diffusers/en/api/schedulers/cm_stochastic_iterative/#cmstochasticiterativescheduler | .md | rho (`float`, defaults to 7.0):
The parameter for calculating the Karras sigma schedule from the EDM
[paper](https://huggingface.co/papers/2206.00364). Defaults to 7.0 from the original implementation.
clip_denoised (`bool`, defaults to `True`):
Whether to clip the denoised outputs to `(-1, 1)`.
timesteps (`List` or `np.ndarray` or `torch.Tensor`, *optional*):
An explicit timestep schedule that can be optionally specified. The timesteps are expected to be in
increasing order. | 262_2_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/cm_stochastic_iterative.md | https://huggingface.co/docs/diffusers/en/api/schedulers/cm_stochastic_iterative/#cmstochasticiterativescheduleroutput | .md | CMStochasticIterativeSchedulerOutput
Output class for the scheduler's `step` function.
Args:
prev_sample (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images):
Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
denoising loop. | 262_3_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/consistency_decoder.md | https://huggingface.co/docs/diffusers/en/api/schedulers/consistency_decoder/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 263_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/consistency_decoder.md | https://huggingface.co/docs/diffusers/en/api/schedulers/consistency_decoder/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 263_0_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/consistency_decoder.md | https://huggingface.co/docs/diffusers/en/api/schedulers/consistency_decoder/#consistencydecoderscheduler | .md | This scheduler is a part of the [`ConsistencyDecoderPipeline`] and was introduced in [DALL-E 3](https://openai.com/dall-e-3).
The original codebase can be found at [openai/consistency_models](https://github.com/openai/consistency_models). | 263_1_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/consistency_decoder.md | https://huggingface.co/docs/diffusers/en/api/schedulers/consistency_decoder/#consistencydecoderscheduler | .md | ConsistencyDecoderScheduler | 263_2_0 |
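In practice this scheduler is driven by [`ConsistencyDecoderVAE`], which can replace the default VAE of a Stable Diffusion pipeline. A hedged sketch, with checkpoint ids as assumptions:

```py
import torch
from diffusers import ConsistencyDecoderVAE, StableDiffusionPipeline

# Checkpoint ids are assumptions for illustration.
vae = ConsistencyDecoderVAE.from_pretrained("openai/consistency-decoder", torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of an astronaut riding a horse", generator=torch.manual_seed(0)).images[0]
```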
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim_inverse.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim_inverse/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 264_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim_inverse.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim_inverse/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 264_0_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim_inverse.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim_inverse/#ddiminversescheduler | .md | `DDIMInverseScheduler` is the inverted scheduler from [Denoising Diffusion Implicit Models](https://huggingface.co/papers/2010.02502) (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon.
The implementation is mostly based on the DDIM inversion definition from [Null-text Inversion for Editing Real Images using Guided Diffusion Models](https://huggingface.co/papers/2211.09794). | 264_1_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim_inverse.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim_inverse/#ddiminversescheduler | .md | DDIMInverseScheduler
`DDIMInverseScheduler` is the reverse scheduler of [`DDIMScheduler`].
This model inherits from [`SchedulerMixin`] and [`ConfigMixin`]. Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.
Args:
num_train_timesteps (`int`, defaults to 1000):
The number of diffusion steps to train the model.
beta_start (`float`, defaults to 0.0001):
The starting `beta` value of inference. | 264_2_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim_inverse.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim_inverse/#ddiminversescheduler | .md | beta_start (`float`, defaults to 0.0001):
The starting `beta` value of inference.
beta_end (`float`, defaults to 0.02):
The final `beta` value.
beta_schedule (`str`, defaults to `"linear"`):
The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
`linear`, `scaled_linear`, or `squaredcos_cap_v2`.
trained_betas (`np.ndarray`, *optional*):
Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`. | 264_2_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim_inverse.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim_inverse/#ddiminversescheduler | .md | Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
clip_sample (`bool`, defaults to `True`):
Clip the predicted sample for numerical stability.
clip_sample_range (`float`, defaults to 1.0):
The maximum magnitude for sample clipping. Valid only when `clip_sample=True`.
set_alpha_to_one (`bool`, defaults to `True`):
Each diffusion step uses the alphas product value at that step and at the previous one. For the final step | 264_2_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim_inverse.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim_inverse/#ddiminversescheduler | .md | Each diffusion step uses the alphas product value at that step and at the previous one. For the final step
there is no previous alpha. When this option is `True` the previous alpha product is fixed to 0, otherwise
it uses the alpha value at step `num_train_timesteps - 1`.
steps_offset (`int`, defaults to 0):
An offset added to the inference steps, as required by some model families.
prediction_type (`str`, defaults to `epsilon`, *optional*): | 264_2_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim_inverse.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim_inverse/#ddiminversescheduler | .md | prediction_type (`str`, defaults to `epsilon`, *optional*):
Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
`sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of [Imagen
Video](https://imagen.research.google/video/paper.pdf) paper).
timestep_spacing (`str`, defaults to `"leading"`):
The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and | 264_2_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim_inverse.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim_inverse/#ddiminversescheduler | .md | The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information.
rescale_betas_zero_snr (`bool`, defaults to `False`):
Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and
dark samples instead of limiting it to samples with medium brightness. Loosely related to | 264_2_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/ddim_inverse.md | https://huggingface.co/docs/diffusers/en/api/schedulers/ddim_inverse/#ddiminversescheduler | .md | dark samples instead of limiting it to samples with medium brightness. Loosely related to
[`--offset_noise`](https://github.com/huggingface/diffusers/blob/74fd735eb073eb1d774b1ab4154a0876eb82f055/examples/dreambooth/train_dreambooth.py#L506). | 264_2_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/euler_ancestral.md | https://huggingface.co/docs/diffusers/en/api/schedulers/euler_ancestral/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 265_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/euler_ancestral.md | https://huggingface.co/docs/diffusers/en/api/schedulers/euler_ancestral/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 265_0_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/euler_ancestral.md | https://huggingface.co/docs/diffusers/en/api/schedulers/euler_ancestral/#eulerancestraldiscretescheduler | .md | A scheduler that uses ancestral sampling with Euler method steps. This is a fast scheduler which can often generate good outputs in 20-30 steps. The scheduler is based on the original [k-diffusion](https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L72) implementation by [Katherine Crowson](https://github.com/crowsonkb/). | 265_1_0 |
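A hedged sketch of swapping this scheduler into an existing pipeline (checkpoint id assumed). Because the sampler is ancestral and therefore stochastic, fixing a generator is needed for reproducible outputs.

```py
import torch
from diffusers import DiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16  # assumed checkpoint
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

generator = torch.Generator("cuda").manual_seed(0)
image = pipe("a cozy cabin in a snowy forest", num_inference_steps=25, generator=generator).images[0]
```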
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/euler_ancestral.md | https://huggingface.co/docs/diffusers/en/api/schedulers/euler_ancestral/#eulerancestraldiscretescheduler | .md | EulerAncestralDiscreteScheduler
Ancestral sampling with Euler method steps.
This model inherits from [`SchedulerMixin`] and [`ConfigMixin`]. Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.
Args:
num_train_timesteps (`int`, defaults to 1000):
The number of diffusion steps to train the model.
beta_start (`float`, defaults to 0.0001):
The starting `beta` value of inference.
beta_end (`float`, defaults to 0.02): | 265_2_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/euler_ancestral.md | https://huggingface.co/docs/diffusers/en/api/schedulers/euler_ancestral/#eulerancestraldiscretescheduler | .md | beta_start (`float`, defaults to 0.0001):
The starting `beta` value of inference.
beta_end (`float`, defaults to 0.02):
The final `beta` value.
beta_schedule (`str`, defaults to `"linear"`):
The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
`linear` or `scaled_linear`.
trained_betas (`np.ndarray`, *optional*):
Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`. | 265_2_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/euler_ancestral.md | https://huggingface.co/docs/diffusers/en/api/schedulers/euler_ancestral/#eulerancestraldiscretescheduler | .md | Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
prediction_type (`str`, defaults to `epsilon`, *optional*):
Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
`sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of [Imagen
Video](https://imagen.research.google/video/paper.pdf) paper).
timestep_spacing (`str`, defaults to `"linspace"`): | 265_2_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/euler_ancestral.md | https://huggingface.co/docs/diffusers/en/api/schedulers/euler_ancestral/#eulerancestraldiscretescheduler | .md | Video](https://imagen.research.google/video/paper.pdf) paper).
timestep_spacing (`str`, defaults to `"linspace"`):
The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) for more information.
steps_offset (`int`, defaults to 0):
An offset added to the inference steps, as required by some model families.
rescale_betas_zero_snr (`bool`, defaults to `False`): | 265_2_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/euler_ancestral.md | https://huggingface.co/docs/diffusers/en/api/schedulers/euler_ancestral/#eulerancestraldiscretescheduler | .md | rescale_betas_zero_snr (`bool`, defaults to `False`):
Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and
dark samples instead of limiting it to samples with medium brightness. Loosely related to
[`--offset_noise`](https://github.com/huggingface/diffusers/blob/74fd735eb073eb1d774b1ab4154a0876eb82f055/examples/dreambooth/train_dreambooth.py#L506). | 265_2_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/euler_ancestral.md | https://huggingface.co/docs/diffusers/en/api/schedulers/euler_ancestral/#eulerancestraldiscretescheduleroutput | .md | EulerAncestralDiscreteSchedulerOutput
Output class for the scheduler's `step` function output.
Args:
prev_sample (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images):
Computed sample `(x_{t-1})` of previous timestep. `prev_sample` should be used as next model input in the
denoising loop.
pred_original_sample (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images): | 265_3_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/euler_ancestral.md | https://huggingface.co/docs/diffusers/en/api/schedulers/euler_ancestral/#eulerancestraldiscretescheduleroutput | .md | denoising loop.
pred_original_sample (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images):
The predicted denoised sample `(x_{0})` based on the model output from the current timestep.
`pred_original_sample` can be used to preview progress or for guidance. | 265_3_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/score_sde_ve.md | https://huggingface.co/docs/diffusers/en/api/schedulers/score_sde_ve/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 266_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/score_sde_ve.md | https://huggingface.co/docs/diffusers/en/api/schedulers/score_sde_ve/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 266_0_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/score_sde_ve.md | https://huggingface.co/docs/diffusers/en/api/schedulers/score_sde_ve/#scoresdevescheduler | .md | `ScoreSdeVeScheduler` is a variance exploding stochastic differential equation (SDE) scheduler. It was introduced in the [Score-Based Generative Modeling through Stochastic Differential Equations](https://huggingface.co/papers/2011.13456) paper by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole.
The abstract from the paper is: | 266_1_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/score_sde_ve.md | https://huggingface.co/docs/diffusers/en/api/schedulers/score_sde_ve/#scoresdevescheduler | .md | *Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (a.k.a. score) of the perturbed data | 266_1_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/score_sde_ve.md | https://huggingface.co/docs/diffusers/en/api/schedulers/score_sde_ve/#scoresdevescheduler | .md | noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (a.k.a. score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling | 266_1_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/score_sde_ve.md | https://huggingface.co/docs/diffusers/en/api/schedulers/score_sde_ve/#scoresdevescheduler | .md | in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse | 266_1_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/score_sde_ve.md | https://huggingface.co/docs/diffusers/en/api/schedulers/score_sde_ve/#scoresdevescheduler | .md | enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high | 266_1_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/score_sde_ve.md | https://huggingface.co/docs/diffusers/en/api/schedulers/score_sde_ve/#scoresdevescheduler | .md | on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.* | 266_1_5 |
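A minimal unconditional-generation sketch with [`ScoreSdeVePipeline`], which runs this scheduler's predictor-corrector steps internally; the checkpoint id is an assumption, and VE-SDE sampling typically uses a large number of steps.

```py
from diffusers import ScoreSdeVePipeline

pipe = ScoreSdeVePipeline.from_pretrained("google/ncsnpp-church-256").to("cuda")  # assumed checkpoint

# the VE SDE predictor-corrector sampler defaults to many steps
image = pipe(num_inference_steps=2000).images[0]
```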
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/score_sde_ve.md | https://huggingface.co/docs/diffusers/en/api/schedulers/score_sde_ve/#scoresdevescheduler | .md | ScoreSdeVeScheduler
`ScoreSdeVeScheduler` is a variance exploding stochastic differential equation (SDE) scheduler.
This model inherits from [`SchedulerMixin`] and [`ConfigMixin`]. Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.
Args:
num_train_timesteps (`int`, defaults to 1000):
The number of diffusion steps to train the model.
snr (`float`, defaults to 0.15): | 266_2_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/api/schedulers/score_sde_ve.md | https://huggingface.co/docs/diffusers/en/api/schedulers/score_sde_ve/#scoresdevescheduler | .md | The number of diffusion steps to train the model.
snr (`float`, defaults to 0.15):
A coefficient weighting the step from the `model_output` sample (from the network) to the random noise.
sigma_min (`float`, defaults to 0.01):
The initial noise scale for the sigma sequence in the sampling procedure. The minimum sigma should mirror
the distribution of the data.
sigma_max (`float`, defaults to 1348.0):
The maximum value used for the range of continuous timesteps passed into the model. | 266_2_1 |