arXiv:2508.20586

FastFit: Accelerating Multi-Reference Virtual Try-On via Cacheable Diffusion Models

Published on Aug 28 · Submitted by zhengchong on Sep 3

Abstract

AI-generated summary: FastFit, a high-speed virtual try-on framework built on a cacheable diffusion architecture with a Semi-Attention mechanism, achieves an average 3.5x speedup over comparable methods while maintaining high fidelity in multi-reference outfit compositions.

Despite its great potential, virtual try-on technology is hindered from real-world application by two major challenges: the inability of current methods to support multi-reference outfit compositions (including garments and accessories), and their significant inefficiency caused by the redundant re-computation of reference features at each denoising step. To address these challenges, we propose FastFit, a high-speed multi-reference virtual try-on framework based on a novel cacheable diffusion architecture. By employing a Semi-Attention mechanism and substituting traditional timestep embeddings with class embeddings for reference items, our model fully decouples reference feature encoding from the denoising process with negligible parameter overhead. This allows reference features to be computed only once and losslessly reused across all steps, fundamentally breaking the efficiency bottleneck and achieving an average 3.5x speedup over comparable methods. Furthermore, to facilitate research on complex, multi-reference virtual try-on, we introduce DressCode-MR, a new large-scale dataset. It comprises 28,179 sets of high-quality, paired images covering five key categories (tops, bottoms, dresses, shoes, and bags), constructed through a pipeline of expert models and human feedback refinement. Extensive experiments on the VITON-HD, DressCode, and our DressCode-MR datasets show that FastFit surpasses state-of-the-art methods on key fidelity metrics while offering a significant advantage in inference efficiency.
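The caching idea described above is concrete enough to sketch. Below is a minimal, hypothetical PyTorch illustration (not the authors' code): reference items are encoded once, conditioned on class embeddings rather than timestep embeddings, so their features carry no dependence on the denoising step; a Semi-Attention-style block then lets latent queries attend over both the latent tokens and the cached reference tokens at every step. All module names, signatures, and the `scheduler_step` placeholder are assumptions for illustration.

```python
import torch

class SemiAttentionBlock(torch.nn.Module):
    """Illustrative Semi-Attention: queries come only from the noisy latent
    tokens; keys/values are the latent tokens concatenated with cached,
    timestep-independent reference tokens. Names are hypothetical."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = torch.nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, latent_tokens: torch.Tensor, ref_tokens: torch.Tensor) -> torch.Tensor:
        # Reference tokens are reused verbatim at every denoising step,
        # so they are never re-encoded inside the sampling loop.
        kv = torch.cat([latent_tokens, ref_tokens], dim=1)
        out, _ = self.attn(latent_tokens, kv, kv, need_weights=False)
        return out

def scheduler_step(noise_pred, t, latents):
    # Placeholder Euler-style update; a real sampler (e.g., DDIM) goes here.
    return latents - 0.1 * noise_pred

@torch.no_grad()
def denoise(unet, ref_encoder, latents, ref_images, class_ids, timesteps):
    # Encode all reference items ONCE, keyed by garment class
    # (tops/bottoms/dresses/shoes/bags) instead of the timestep --
    # this is what makes the features cacheable across steps.
    ref_tokens = ref_encoder(ref_images, class_ids)  # hypothetical signature
    for t in timesteps:
        noise_pred = unet(latents, t, ref_tokens)    # cached tokens reused
        latents = scheduler_step(noise_pred, t, latents)
    return latents
```

Because `ref_tokens` carries no timestep dependence, the per-step cost reduces to a single denoising pass over the latent alone, which is where a speedup of the reported magnitude would come from.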

Community

zhengchong (paper author and submitter):

Boost multi-reference virtual try-on speed by 3.5x with a novel Cacheable Diffusion Model, all while achieving state-of-the-art image quality.


Models citing this paper 2

Datasets citing this paper 1


Collections including this paper 1