---
license: apache-2.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: tgt_img
    dtype: image
  - name: cond_img_0
    dtype: image
  - name: cond_img_1
    dtype: image
  - name: prompt
    dtype: string
  - name: cond_prompt_0
    dtype: string
  - name: cond_prompt_1
    dtype: string
  splits:
  - name: train
    num_bytes: 27532187682.875
    num_examples: 29859
  download_size: 27509349685
  dataset_size: 27532187682.875
task_categories:
- text-to-image
tags:
- text-to-image
- customization
---

⭐️ Although MUSAR is trained solely on diptych data constructed from concatenated single-subject samples, we recognize that a high-quality multi-subject paired dataset is highly beneficial to the field of image customization. To accelerate progress in this field, we are releasing MUSAR-Gen, the high-quality multi-subject dataset generated by MUSAR. It delivers FLUX-comparable image quality without exhibiting attribute entanglement issues. We hope it will be helpful to researchers working on related topics.
## Dataset Info
Construction details: The condition images are two subjects randomly selected from the subjects200k dataset (excluding the 111,761 subjects used during model training). The prompt format is: "An undivided, seamless, and harmonious picture with two objects. in the xxx scene, Subject A and Subject B are placed together." By collecting the outputs of the MUSAR model, we obtained approximately 30,000 samples.
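For illustration, the template above could be filled in as in this minimal sketch; the `format_prompt` helper is hypothetical and the placeholder strings simply mirror the template wording, they are not part of the dataset:

```python
# Hypothetical helper reproducing the prompt template described above.
# The scene and subject strings passed below are placeholders for illustration only.
def format_prompt(scene: str, subject_a: str, subject_b: str) -> str:
    return (
        "An undivided, seamless, and harmonious picture with two objects. "
        f"in the {scene} scene, {subject_a} and {subject_b} are placed together."
    )

print(format_prompt("xxx", "Subject A", "Subject B"))
```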
## Quick Start
- Load dataset
```python
from datasets import load_dataset

# Load dataset
dataset = load_dataset('guozinan/MUSAR-Gen')
```
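Once loaded, individual samples can be inspected like any 🤗 Datasets example. A minimal sketch, assuming the default decoding of image columns to PIL images:

```python
# Inspect the first training sample (field names are listed under Data Format below)
sample = dataset['train'][0]
print(sample['prompt'])           # target prompt
print(sample['cond_prompt_0'])    # description of the first subject
print(sample['cond_prompt_1'])    # description of the second subject
print(sample['tgt_img'].size)     # PIL image of the generated result
```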
## Data Format
| Key name | Type | Description |
|---|---|---|
| cond_img_0 | image | Reference image (first image). |
| cond_img_1 | image | Reference image (second image). |
| tgt_img | image | Multi-subject customized result generated by the MUSAR model. |
| cond_prompt_0 | str | Textual description of the subject in cond_img_0. |
| cond_prompt_1 | str | Textual description of the subject in cond_img_1. |
| prompt | str | Textual description of the tgt_img content. |
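As a small usage example built on the fields above (a sketch, assuming the dataset is loaded as in Quick Start and the image columns decode to PIL images), the condition images and the generated target of one sample can be written to disk:

```python
import os

# Save the two condition images and the MUSAR-generated target of one sample
os.makedirs('preview', exist_ok=True)
example = dataset['train'][0]
example['cond_img_0'].save('preview/cond_img_0.png')
example['cond_img_1'].save('preview/cond_img_1.png')
example['tgt_img'].save('preview/tgt_img.png')
```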
## Citation
If you use the MUSAR-Gen dataset, please cite our paper:
```bibtex
@article{guo2025musar,
  title={MUSAR: Exploring Multi-Subject Customization from Single-Subject Dataset via Attention Routing},
  author={Guo, Zinan and Zhang, Pengze and Wu, Yanze and Mou, Chong and Zhao, Songtao and He, Qian},
  journal={arXiv preprint arXiv:2505.02823},
  year={2025}
}
```