Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.
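For instance, here is a minimal sketch of sharing components between two pipelines with DiffusionPipeline.from_pipe (the img2img variant named here is an assumption; use whichever QwenImage pipeline your diffusers version provides):

>>> import torch
>>> from diffusers import QwenImagePipeline, QwenImageImg2ImgPipeline

>>> # Load the text-to-image pipeline once.
>>> pipe = QwenImagePipeline.from_pretrained("Qwen/QwenImage-20B", torch_dtype=torch.bfloat16)
>>> # Reuse the already-loaded components (VAE, text encoder, transformer, scheduler) in a
>>> # second pipeline instead of loading them from disk a second time.
>>> img2img = QwenImageImg2ImgPipeline.from_pipe(pipe)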
( scheduler: FlowMatchEulerDiscreteScheduler vae: AutoencoderKLQwenImage text_encoder: Qwen2_5_VLForConditionalGeneration tokenizer: Qwen2Tokenizer transformer: QwenImageTransformer2DModel )
Parameters

scheduler (FlowMatchEulerDiscreteScheduler) —
A scheduler to be used in combination with transformer to denoise the encoded image latents.

vae (AutoencoderKLQwenImage) —
Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.

text_encoder (Qwen2_5_VLForConditionalGeneration) —
The Qwen2.5-VL text encoder, specifically the Qwen2.5-VL-7B-Instruct variant.

tokenizer (Qwen2Tokenizer) —
Tokenizer of class Qwen2Tokenizer.

transformer (QwenImageTransformer2DModel) —
The transformer model used to denoise the encoded image latents.

The QwenImage pipeline for text-to-image generation.
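As a sketch of how these components map to the constructor when each piece is loaded separately (the subfolder names are assumptions based on the usual repository layout):

>>> import torch
>>> from diffusers import (
...     AutoencoderKLQwenImage,
...     FlowMatchEulerDiscreteScheduler,
...     QwenImagePipeline,
...     QwenImageTransformer2DModel,
... )
>>> from transformers import Qwen2_5_VLForConditionalGeneration, Qwen2Tokenizer

>>> repo = "Qwen/QwenImage-20B"
>>> # Load each component on its own (subfolder names are assumptions).
>>> scheduler = FlowMatchEulerDiscreteScheduler.from_pretrained(repo, subfolder="scheduler")
>>> vae = AutoencoderKLQwenImage.from_pretrained(repo, subfolder="vae", torch_dtype=torch.bfloat16)
>>> text_encoder = Qwen2_5_VLForConditionalGeneration.from_pretrained(
...     repo, subfolder="text_encoder", torch_dtype=torch.bfloat16
... )
>>> tokenizer = Qwen2Tokenizer.from_pretrained(repo, subfolder="tokenizer")
>>> transformer = QwenImageTransformer2DModel.from_pretrained(
...     repo, subfolder="transformer", torch_dtype=torch.bfloat16
... )

>>> pipe = QwenImagePipeline(
...     scheduler=scheduler,
...     vae=vae,
...     text_encoder=text_encoder,
...     tokenizer=tokenizer,
...     transformer=transformer,
... )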
( prompt: typing.Union[str, typing.List[str]] = None negative_prompt: typing.Union[str, typing.List[str]] = None true_cfg_scale: float = 4.0 height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 sigmas: typing.Optional[typing.List[float]] = None guidance_scale: float = 1.0 num_images_per_prompt: int = 1 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.Tensor] = None prompt_embeds: typing.Optional[torch.Tensor] = None prompt_embeds_mask: typing.Optional[torch.Tensor] = None negative_prompt_embeds: typing.Optional[torch.Tensor] = None negative_prompt_embeds_mask: typing.Optional[torch.Tensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True attention_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None callback_on_step_end: typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None callback_on_step_end_tensor_inputs: typing.List[str] = ['latents'] max_sequence_length: int = 512 ) → ~pipelines.qwenimage.QwenImagePipelineOutput or tuple
Parameters
prompt (str or List[str], optional) —
The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds instead.

negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if true_cfg_scale is not greater than 1).

true_cfg_scale (float, optional, defaults to 4.0) —
When greater than 1.0 and a negative_prompt is provided, enables true classifier-free guidance.

height (int, optional) —
The height in pixels of the generated image. This is set to 1024 by default for the best results.

width (int, optional) —
The width in pixels of the generated image. This is set to 1024 by default for the best results.

num_inference_steps (int, optional, defaults to 50) —
The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.

sigmas (List[float], optional) —
Custom sigmas to use for the denoising process with schedulers which support a sigmas argument in their set_timesteps method. If not defined, the default behavior when num_inference_steps is passed will be used.

guidance_scale (float, optional, defaults to 1.0) —
Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality.

num_images_per_prompt (int, optional, defaults to 1) —
The number of images to generate per prompt.

generator (torch.Generator or List[torch.Generator], optional) —
One or a list of torch generator(s) to make generation deterministic.

latents (torch.Tensor, optional) —
Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.

prompt_embeds (torch.Tensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.

negative_prompt_embeds (torch.Tensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.

output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between PIL: PIL.Image.Image or np.array.

return_dict (bool, optional, defaults to True) —
Whether or not to return a ~pipelines.qwenimage.QwenImagePipelineOutput instead of a plain tuple.

attention_kwargs (dict, optional) —
A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.

callback_on_step_end (Callable, optional) —
A function that is called at the end of each denoising step during inference. The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs. See the sketch below the parameter list for an example.

callback_on_step_end_tensor_inputs (List, optional) —
The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as the callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class.

max_sequence_length (int, defaults to 512) —
Maximum sequence length to use with the prompt.

Returns
~pipelines.qwenimage.QwenImagePipelineOutput or tuple
~pipelines.qwenimage.QwenImagePipelineOutput if return_dict is True, otherwise a tuple. When
returning a tuple, the first element is a list with the generated images.
Function invoked when calling the pipeline for generation.
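For instance, a minimal sketch combining negative_prompt, true_cfg_scale, and a callback_on_step_end hook (the values are illustrative, not tuned recommendations):

>>> import torch
>>> from diffusers import QwenImagePipeline

>>> pipe = QwenImagePipeline.from_pretrained("Qwen/QwenImage-20B", torch_dtype=torch.bfloat16).to("cuda")

>>> def log_step(pipeline, step, timestep, callback_kwargs):
...     # Only tensors listed in callback_on_step_end_tensor_inputs are available here.
...     print(f"step {step}: latents {tuple(callback_kwargs['latents'].shape)}")
...     return callback_kwargs

>>> image = pipe(
...     prompt="A cat holding a sign that says hello world",
...     negative_prompt="blurry, low quality",  # only used when true_cfg_scale > 1
...     true_cfg_scale=4.0,
...     callback_on_step_end=log_step,
...     callback_on_step_end_tensor_inputs=["latents"],
... ).images[0]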
Examples:
>>> import torch
>>> from diffusers import QwenImagePipeline
>>> pipe = QwenImagePipeline.from_pretrained("Qwen/QwenImage-20B", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")
>>> prompt = "A cat holding a sign that says hello world"
>>> # Depending on the variant being used, the pipeline call will slightly vary.
>>> # Refer to the pipeline documentation for more details.
>>> image = pipe(prompt, num_inference_steps=4, guidance_scale=0.0).images[0]
>>> image.save("qwenimage.png")

Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to
computing decoding in one step.
Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to
computing decoding in one step.
Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow processing larger images.
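A minimal sketch of how these memory-saving switches are typically toggled (whether they are needed depends on available GPU memory):

>>> import torch
>>> from diffusers import QwenImagePipeline

>>> pipe = QwenImagePipeline.from_pretrained("Qwen/QwenImage-20B", torch_dtype=torch.bfloat16).to("cuda")
>>> pipe.enable_vae_slicing()  # decode batched latents one slice at a time
>>> pipe.enable_vae_tiling()   # decode large latents tile by tile
>>> image = pipe("A cat holding a sign that says hello world", height=2048, width=2048).images[0]
>>> # Switch back to single-step decoding once memory pressure is gone.
>>> pipe.disable_vae_slicing()
>>> pipe.disable_vae_tiling()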
( prompt: typing.Union[str, typing.List[str]] device: typing.Optional[torch.device] = None num_images_per_prompt: int = 1 prompt_embeds: typing.Optional[torch.Tensor] = None prompt_embeds_mask: typing.Optional[torch.Tensor] = None max_sequence_length: int = 1024 )
Parameters
prompt (str or List[str], optional) —
prompt to be encoded

device (torch.device, optional) —
torch device

num_images_per_prompt (int, defaults to 1) —
number of images that should be generated per prompt

prompt_embeds (torch.Tensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.

( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] )
Output class for QwenImage pipelines.
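To connect encode_prompt with the prompt_embeds arguments of the pipeline call, here is a minimal sketch; the assumption that encode_prompt returns a (prompt_embeds, prompt_embeds_mask) pair is inferred from the signature above:

>>> import torch
>>> from diffusers import QwenImagePipeline

>>> pipe = QwenImagePipeline.from_pretrained("Qwen/QwenImage-20B", torch_dtype=torch.bfloat16).to("cuda")
>>> # Pre-compute the text embeddings once (assumed return: embeddings and their attention mask).
>>> prompt_embeds, prompt_embeds_mask = pipe.encode_prompt(
...     prompt="A cat holding a sign that says hello world",
...     device=pipe.device,
...     max_sequence_length=512,
... )
>>> # Reuse the cached embeddings across calls instead of re-encoding the prompt each time.
>>> image = pipe(
...     prompt_embeds=prompt_embeds,
...     prompt_embeds_mask=prompt_embeds_mask,
...     num_inference_steps=50,
... ).images[0]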