Bria 3.2 is the next-generation commercial-ready text-to-image model. With just 4 billion parameters, it provides exceptional aesthetics and text rendering, and has been evaluated to deliver results on par with leading open-source models while outperforming other licensed models. In addition to being built entirely on licensed data, Bria 3.2 offers several advantages for enterprise and commercial use.
Original model checkpoints for Bria 3.2 can be found here. The GitHub repo for Bria 3.2 can be found here.
If you want to learn more about the Bria platform and get free trial access, please visit bria.ai.
As the model is gated, before using it with diffusers you first need to go to the Bria 3.2 Hugging Face page, fill in the form, and accept the gate. Once you are in, you need to log in so that your system knows you've accepted the gate.
Use the command below to log in:
```bash
hf auth login
```
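If you prefer to authenticate from Python (for example, in a notebook), the `huggingface_hub` login helper is an alternative. A minimal sketch using the standard `huggingface_hub` API:

```python
from huggingface_hub import login

# Prompts for (or accepts) a User Access Token that has access to the gated repo.
login()
```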
class diffusers.BriaPipeline

( transformer: BriaTransformer2DModel, scheduler: Union[FlowMatchEulerDiscreteScheduler, KarrasDiffusionSchedulers], vae: AutoencoderKL, text_encoder: T5EncoderModel, tokenizer: T5TokenizerFast, image_encoder: CLIPVisionModelWithProjection = None, feature_extractor: CLIPImageProcessor = None )

Parameters

transformer (BriaTransformer2DModel) — The transformer model to denoise the encoded image latents.
scheduler (FlowMatchEulerDiscreteScheduler or KarrasDiffusionSchedulers) — A scheduler to be used in combination with transformer to denoise the encoded image latents.
vae (AutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
text_encoder (T5EncoderModel) — Frozen text encoder. Bria uses T5, specifically the t5-v1_1-xxl variant.
tokenizer (T5TokenizerFast) — Tokenizer of class T5Tokenizer.

The pipeline is based on FluxPipeline, with several changes.
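Because the pipeline follows the standard diffusers component layout, individual components can be loaded separately and passed to from_pretrained, e.g. to share one T5 encoder across pipelines. A minimal sketch (the "text_encoder" subfolder name follows the usual diffusers convention and is an assumption here):

```python
import torch
from diffusers import BriaPipeline
from transformers import T5EncoderModel

repo = "briaai/BRIA-3.2"

# Load the frozen T5 text encoder once; it can then be shared across pipelines.
text_encoder = T5EncoderModel.from_pretrained(
    repo, subfolder="text_encoder", torch_dtype=torch.bfloat16
)

# Pass the preloaded component in; the remaining components load from the repo.
pipe = BriaPipeline.from_pretrained(
    repo, text_encoder=text_encoder, torch_dtype=torch.bfloat16
)
```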
__call__

( prompt: Union[str, List[str]] = None, height: Optional[int] = None, width: Optional[int] = None, num_inference_steps: int = 30, timesteps: List[int] = None, guidance_scale: float = 5, negative_prompt: Union[str, List[str], None] = None, num_images_per_prompt: Optional[int] = 1, generator: Union[torch.Generator, List[torch.Generator], None] = None, latents: Optional[torch.FloatTensor] = None, prompt_embeds: Optional[torch.FloatTensor] = None, negative_prompt_embeds: Optional[torch.FloatTensor] = None, output_type: Optional[str] = 'pil', return_dict: bool = True, attention_kwargs: Optional[Dict[str, Any]] = None, callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None, callback_on_step_end_tensor_inputs: List[str] = ['latents'], max_sequence_length: int = 128, clip_value: Optional[float] = None, normalize: bool = False ) → ~pipelines.bria.BriaPipelineOutput or tuple
Parameters

prompt (str or List[str], optional) — The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds instead.
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The height in pixels of the generated image. This is set to 1024 by default for the best results.
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The width in pixels of the generated image. This is set to 1024 by default for the best results.
num_inference_steps (int, optional, defaults to 30) — The number of denoising steps. More denoising steps usually lead to a higher-quality image at the expense of slower inference.
timesteps (List[int], optional) — Custom timesteps to use for the denoising process with schedulers that support a timesteps argument in their set_timesteps method. If not defined, the default behavior when num_inference_steps is passed will be used. Must be in descending order.
guidance_scale (float, optional, defaults to 5.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality.
negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic.
latents (torch.FloatTensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.
prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL (PIL.Image.Image) or np.array.
return_dict (bool, optional, defaults to True) — Whether or not to return a ~pipelines.bria.BriaPipelineOutput instead of a plain tuple.
attention_kwargs (dict, optional) — A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
callback_on_step_end (Callable, optional) — A function that is called at the end of each denoising step during inference. The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs. A hedged usage sketch appears before the Examples below.
callback_on_step_end_tensor_inputs (List, optional) — The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as the callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class.
max_sequence_length (int, defaults to 128) — Maximum sequence length to use with the prompt.

Returns
~pipelines.bria.BriaPipelineOutput or tuple
~pipelines.bria.BriaPipelineOutput if return_dict
is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated
images.
Function invoked when calling the pipeline for generation.
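As a hedged illustration of the callback mechanism described above, the sketch below logs the current step and reads the latents tensor exposed through callback_on_step_end_tensor_inputs, and also passes a generator to make the run reproducible; the exact tensors available depend on the pipeline's _callback_tensor_inputs.

```python
import torch
from diffusers import BriaPipeline

pipe = BriaPipeline.from_pretrained("briaai/BRIA-3.2", torch_dtype=torch.bfloat16)
pipe.to("cuda")

def log_step(pipeline, step, timestep, callback_kwargs):
    # "latents" is available here because it is requested via
    # callback_on_step_end_tensor_inputs below.
    latents = callback_kwargs["latents"]
    print(f"step {step}, timestep {timestep}, latents norm {latents.float().norm().item():.2f}")
    # The callback must return the (possibly modified) tensor dict it received.
    return callback_kwargs

image = pipe(
    "a lighthouse at dusk",  # hypothetical prompt, for illustration only
    generator=torch.Generator("cuda").manual_seed(0),
    callback_on_step_end=log_step,
    callback_on_step_end_tensor_inputs=["latents"],
).images[0]
```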
Examples:
```python
>>> import torch
>>> from diffusers import BriaPipeline

>>> pipe = BriaPipeline.from_pretrained("briaai/BRIA-3.2", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")

>>> # BRIA's T5 text encoder is sensitive to precision. We need to cast it to bfloat16 and keep the final layer in float32.
>>> pipe.text_encoder = pipe.text_encoder.to(dtype=torch.bfloat16)
>>> for block in pipe.text_encoder.encoder.block:
...     block.layer[-1].DenseReluDense.wo.to(dtype=torch.float32)

>>> # BRIA's VAE is not supported in mixed precision, so we use float32.
>>> if pipe.vae.config.shift_factor == 0:
...     pipe.vae.to(dtype=torch.float32)

>>> prompt = "Photorealistic food photography of a stack of fluffy pancakes on a white plate, with maple syrup being poured over them. On top of the pancakes are the words 'BRIA 3.2' in bold, yellow, 3D letters. The background is dark and out of focus."
>>> image = pipe(prompt).images[0]
>>> image.save("bria.png")
```

encode_prompt

( prompt: Union[str, List[str]], device: Optional[torch.device] = None, num_images_per_prompt: int = 1, do_classifier_free_guidance: bool = True, negative_prompt: Union[str, List[str], None] = None, prompt_embeds: Optional[torch.FloatTensor] = None, negative_prompt_embeds: Optional[torch.FloatTensor] = None, max_sequence_length: int = 128, lora_scale: Optional[float] = None )
Parameters

prompt (str or List[str], optional) — The prompt to be encoded.
device (torch.device) — The torch device on which to place the resulting embeddings.
num_images_per_prompt (int) — The number of images that should be generated per prompt.
do_classifier_free_guidance (bool) — Whether to use classifier-free guidance or not.
negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
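A hedged sketch of precomputing embeddings with encode_prompt and reusing them across calls. The assumption that it returns a (prompt_embeds, negative_prompt_embeds) pair follows the convention of similar diffusers pipelines and should be checked against the installed version:

```python
import torch
from diffusers import BriaPipeline

pipe = BriaPipeline.from_pretrained("briaai/BRIA-3.2", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Assumption: encode_prompt returns (prompt_embeds, negative_prompt_embeds),
# as in comparable pipelines; verify against your diffusers version.
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="a watercolor fox in a snowy forest",  # hypothetical prompt
    device=pipe.device,
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="blurry, low quality",
)

# Reuse the precomputed embeddings; the text encoder is skipped on this call.
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
).images[0]
```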