---
license: other
extra_gated_prompt: >-
  This dataset is available **exclusively for non-commercial academic research**.
  You must confirm that you already have access to the original DressCode dataset, and use an **institutional email** to request access.
  By requesting access, you agree NOT to use this dataset or its content for any commercial purposes, and NOT to redistribute any part of it publicly or privately.
extra_gated_fields:
  Name: text
  Country: country
  Affiliation: text
  Email (Education Email Only): text
  Have you been granted access to the original DressCode dataset?: checkbox
  I agree to use this dataset for non-commercial research purposes ONLY and not to share it publicly or privately with others: checkbox
viewer: false
---
# DressCode-MR: A Large-Scale Multi-Reference Virtual Try-On Dataset

Supported by [LavieAI](https://lavieai.com/) and [LoomlyAI](https://www.loomlyai.com/en)

[arXiv](https://arxiv.org/abs/2508.20586) | [Hugging Face](https://huggingface.co/zhengchong/FastFit-MR-1024) | [GitHub](https://github.com/Zheng-Chong/FastFit) | [Demo](https://fastfit.lavieai.com) | [License](https://github.com/Zheng-Chong/FastFit/tree/main)
**DressCode-MR** is a large-scale, multi-reference virtual try-on dataset built upon the original [DressCode](https://github.com/aimagelab/dress-code) dataset. It contains over **28,000 multi-reference virtual try-on samples** designed to support the training and evaluation of virtual try-on models that handle multiple fashion items (such as tops, bottoms, dresses, shoes, and bags) simultaneously.
<div align="center">
  <img src="dataset.png" alt="DressCode-MR Dataset" width="800">
</div>

## Dataset Details
* **Multi-Reference Samples**: Each sample consists of a person's image paired with a set of compatible clothing and accessory items, enabling models to learn how to coordinate multiple fashion pieces within a single scene (a minimal sketch of one such sample follows this list).
* **Large Scale**: The dataset includes a total of **28,179** high-quality multi-reference samples, with **25,779** designated for training and **2,400** for testing.
* **Source**: This dataset is built upon the [DressCode](https://github.com/aimagelab/dress-code) dataset.
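
To make the multi-reference structure concrete, here is a minimal Python sketch of one such sample. The field names, category set, and file paths below are illustrative assumptions, not the dataset's documented schema; consult the extracted files for the actual layout.

```python
# Illustrative sketch only: the field names, paths, and categories below
# are assumptions about the multi-reference format, not the actual schema.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class MultiReferenceSample:
    person_image: str  # path to the person (target) image
    items: Dict[str, str] = field(default_factory=dict)  # category -> reference image path

# Hypothetical sample pairing a person with four compatible items.
sample = MultiReferenceSample(
    person_image="person/000001.jpg",
    items={
        "top": "tops/000001.jpg",
        "bottom": "bottoms/000001.jpg",
        "shoes": "shoes/000001.jpg",
        "bag": "bags/000001.jpg",
    },
)
print(f"{len(sample.items)} reference items for {sample.person_image}")
```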
## Access and License

This dataset is released under the exact same license as the original [DressCode](https://github.com/aimagelab/dress-code) dataset. Therefore, before you can request access to the **DressCode-MR** dataset, you must first complete the following steps:

1. Apply for and be granted a license to use the [DressCode](https://github.com/aimagelab/dress-code) dataset.
2. Use your educational/academic email address (e.g., one ending in `.edu`, `.ac`, etc.) to request access to the **DressCode-MR** dataset on Hugging Face. **Any requests from non-academic email addresses will be rejected.**

## Usage
After downloading all parts of the dataset, reassemble and extract the archive with the following commands:
```bash
# Reassemble the split archive into a single tarball
cat DressCode-MR.tar.gz.part_* > DressCode-MR.tar.gz
# Extract it
tar -zxvf DressCode-MR.tar.gz
```
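
After extraction, a quick sanity check can confirm the files unpacked correctly. The directory name and the idea of counting image files below are assumptions; adjust the path to wherever you extracted the archive.

```python
# Optional post-extraction sanity check. The root directory name is a
# hypothetical placeholder; point it at your actual extraction location.
from pathlib import Path

root = Path("DressCode-MR")
if not root.is_dir():
    raise SystemExit(f"Extracted directory not found: {root}")

image_exts = {".jpg", ".jpeg", ".png"}
n_images = sum(1 for p in root.rglob("*") if p.suffix.lower() in image_exts)
print(f"Found {n_images} image files under {root}")
```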
## Citation

If you use the **DressCode-MR** dataset in your research, please cite our [FastFit](https://arxiv.org/abs/2508.20586) paper.

```bibtex
@misc{chong2025fastfitacceleratingmultireferencevirtual,
      title={FastFit: Accelerating Multi-Reference Virtual Try-On via Cacheable Diffusion Models},
      author={Zheng Chong and Yanwei Lei and Shiyue Zhang and Zhuandi He and Zhen Wang and Xujie Zhang and Xiao Dong and Yiling Wu and Dongmei Jiang and Xiaodan Liang},
      year={2025},
      eprint={2508.20586},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2508.20586},
}
```

## Acknowledgement

We thank the contributors to the [DressCode](https://github.com/aimagelab/dress-code) project, as their work provided the foundation for our **DressCode-MR** dataset.