The model can be loaded with the following code snippet.
import torch
from diffusers import QwenImageTransformer2DModel

transformer = QwenImageTransformer2DModel.from_pretrained(
    "Qwen/QwenImage-20B", subfolder="transformer", torch_dtype=torch.bfloat16
)

QwenImageTransformer2DModel( patch_size: int = 2, in_channels: int = 64, out_channels: typing.Optional[int] = 16, num_layers: int = 60, attention_head_dim: int = 128, num_attention_heads: int = 24, joint_attention_dim: int = 3584, guidance_embeds: bool = False, axes_dims_rope: typing.Tuple[int, int, int] = (16, 56, 56) )
Parameters
patch_size (int, defaults to 2) — Patch size to turn the input data into small patches.
in_channels (int, defaults to 64) — The number of channels in the input.
out_channels (int, optional, defaults to None) — The number of channels in the output. If not specified, it defaults to in_channels.
num_layers (int, defaults to 60) — The number of layers of dual stream DiT blocks to use.
attention_head_dim (int, defaults to 128) — The number of dimensions to use for each attention head.
num_attention_heads (int, defaults to 24) — The number of attention heads to use.
joint_attention_dim (int, defaults to 3584) — The number of dimensions to use for the joint attention (embedding/channel dimension of encoder_hidden_states).
guidance_embeds (bool, defaults to False) — Whether to use guidance embeddings for the guidance-distilled variant of the model.
axes_dims_rope (Tuple[int, int, int], defaults to (16, 56, 56)) — The dimensions to use for the rotary positional embeddings.

The Transformer model introduced in Qwen.
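To see how these arguments fit together, a randomly initialized, deliberately shrunken instance can be built directly from the constructor. This is a minimal sketch for inspection only: num_layers and num_attention_heads are reduced from the documented defaults so the module is cheap to create, and the comment about packed 2x2 latent patches is an assumption rather than something stated above.

import torch
from diffusers import QwenImageTransformer2DModel

# Down-sized, randomly initialized variant for inspection only;
# the released checkpoint uses the documented defaults (60 layers, 24 heads).
tiny = QwenImageTransformer2DModel(
    patch_size=2,
    in_channels=64,                 # assumed: 16 latent channels packed into 2x2 patches
    out_channels=16,
    num_layers=2,                   # default: 60
    attention_head_dim=128,
    num_attention_heads=4,          # default: 24
    joint_attention_dim=3584,       # channel dimension of encoder_hidden_states
    guidance_embeds=False,
    axes_dims_rope=(16, 56, 56),    # sums to attention_head_dim (128)
)
print(f"{sum(p.numel() for p in tiny.parameters()) / 1e6:.1f}M parameters")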
forward( hidden_states: Tensor, encoder_hidden_states: Tensor = None, encoder_hidden_states_mask: Tensor = None, timestep: LongTensor = None, img_shapes: typing.Optional[typing.List[typing.Tuple[int, int, int]]] = None, txt_seq_lens: typing.Optional[typing.List[int]] = None, guidance: Tensor = None, attention_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None, return_dict: bool = True )
Parameters
hidden_states (torch.Tensor of shape (batch_size, image_sequence_length, in_channels)) — Input hidden_states.
encoder_hidden_states (torch.Tensor of shape (batch_size, text_sequence_length, joint_attention_dim)) — Conditional embeddings (embeddings computed from the input conditions such as prompts) to use.
encoder_hidden_states_mask (torch.Tensor of shape (batch_size, text_sequence_length)) — Mask of the input conditions.
timestep (torch.LongTensor) — Used to indicate the denoising step.
attention_kwargs (dict, optional) — A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
return_dict (bool, optional, defaults to True) — Whether or not to return a ~models.transformer_2d.Transformer2DModelOutput instead of a plain tuple.

The QwenImageTransformer2DModel forward method.
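For illustration, the forward call can be wired up with random tensors. The sketch below is an assumption-laden smoke test: it reuses a shrunken configuration, assumes that image_sequence_length must equal the product of the (frame, height, width) entries passed in img_shapes, and passes a plain integer timestep; the real pipeline handles this preprocessing itself.

import torch
from diffusers import QwenImageTransformer2DModel

# Shrunken config; unspecified arguments keep the documented defaults.
model = QwenImageTransformer2DModel(num_layers=2, num_attention_heads=4, joint_attention_dim=256)

batch, txt_len = 1, 8
frames, height, width = 1, 16, 16          # assumed patch-grid shape per image
img_seq_len = frames * height * width      # must match hidden_states' sequence length

out = model(
    hidden_states=torch.randn(batch, img_seq_len, 64),          # packed image latents (in_channels)
    encoder_hidden_states=torch.randn(batch, txt_len, 256),     # text embeddings (joint_attention_dim)
    encoder_hidden_states_mask=torch.ones(batch, txt_len, dtype=torch.long),
    timestep=torch.tensor([500]),                                # denoising step
    img_shapes=[(frames, height, width)],
    txt_seq_lens=[txt_len],
    return_dict=True,
)
print(out.sample.shape)  # prediction in the packed latent space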
Transformer2DModelOutput( sample: torch.Tensor )
Parameters
sample (torch.Tensor of shape (batch_size, num_channels, height, width) or (batch_size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.

The output of Transformer2DModel.
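The wrapper only affects how the prediction is unpacked. A small sketch, assuming the class is importable from diffusers.models.modeling_outputs (the exact import path may differ across diffusers versions):

import torch
from diffusers.models.modeling_outputs import Transformer2DModelOutput

# Wrap an arbitrary tensor the same way the model does internally.
out = Transformer2DModelOutput(sample=torch.zeros(1, 16, 128, 128))
print(out.sample.shape)  # attribute access (what return_dict=True gives you)
print(out[0].shape)      # tuple-style access to the same tensor

With return_dict=False the model instead returns a plain tuple, so the prediction is read with [0] rather than .sample.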