Improve model card for Transition Models (TiM)
This PR significantly enhances the model card for the Transition Models (TiM) repository. It replaces the minimal existing content with a comprehensive overview to improve discoverability and provide essential information to users on the Hugging Face Hub.
Key updates include:
* Adding `license: apache-2.0` and `pipeline_tag: text-to-image` to the metadata for better categorization and searchability.
* Linking to the official Hugging Face paper page: [Transition Models: Rethinking the Generative Learning Objective](https://huggingface.co/papers/2509.04394).
* Providing a direct link to the official GitHub repository: [https://github.com/WZDTHU/TiM](https://github.com/WZDTHU/TiM).
* Including a summary of the paper's highlights and the architecture's key features, such as arbitrary-step generation and high-resolution output.
* Adding the detailed "Model Zoo" tables from the GitHub README, showcasing Text-to-Image and Class-guided Image Generation variants with their respective performance metrics and associated VAEs.
* Including the BibTeX citation for proper academic attribution.
Together, these changes make the model easier to discover, understand, and use.
---
license: apache-2.0
pipeline_tag: text-to-image
---

# Transition Models: Rethinking the Generative Learning Objective

This repository contains the official implementation of **Transition Models (TiM)**, a novel generative model presented in the paper "[Transition Models: Rethinking the Generative Learning Objective](https://huggingface.co/papers/2509.04394)".

TiM addresses the dilemma between costly many-step sampling and lower-quality few-step generation by introducing an exact, continuous-time dynamics equation that analytically defines state transitions across any finite time interval. This yields a generative paradigm that adapts to arbitrary-step transitions, traversing the generative trajectory seamlessly from a single leap to fine-grained refinement over many steps.
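As a rough schematic of this paradigm (our notation for illustration, not the paper's exact formulation): a single network, written here as $F_\theta$, is trained so that $F_\theta(x_t, t, s) \approx x_s$ for any interval $1 \ge t > s \ge 0$, and an $n$-step sampler simply chains such transitions:

```latex
% Schematic only: F_\theta and the time convention are placeholders.
\[
  x_{t_{i+1}} = F_\theta\!\left(x_{t_i},\, t_i,\, t_{i+1}\right),
  \qquad 1 = t_0 > t_1 > \cdots > t_n = 0.
\]
% n = 1 recovers a single leap; larger n gives fine-grained refinement.
```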
For more detailed information, code, and usage instructions, please refer to the official [GitHub repository](https://github.com/WZDTHU/TiM).
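To experiment locally, the checkpoints can be fetched with `huggingface_hub`. A minimal sketch; note that the repo id below is a placeholder for this repository's actual id, and the inference entry points live in the GitHub repository:

```python
# Minimal sketch: download the TiM checkpoints from the Hub.
# NOTE: "WZDTHU/TiM" is a placeholder repo id; replace it with this
# model repository's actual id on the Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="WZDTHU/TiM")
print(f"Checkpoints downloaded to: {local_dir}")
```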
## Highlights

* **Arbitrary-Step Generation**: TiM learns to master arbitrary state-to-state transitions, unifying the few-step and many-step regimes within a single, powerful model that learns the entire solution manifold of the generative process.
* **State-of-the-Art Performance**: Despite having only 865M parameters, TiM surpasses leading models such as SD3.5 (8B parameters) and FLUX.1 (12B parameters) across all evaluated step counts on the GenEval benchmark.
* **Monotonic Quality Improvement**: Unlike previous few-step generators, TiM shows consistent quality improvement as the sampling budget increases.
* **High-Resolution Fidelity**: With its native-resolution strategy, TiM delivers exceptional fidelity at resolutions up to 4096x4096.
<p align="center">
  <img src="https://github.com/WZDTHU/TiM/raw/main/assets/illustration.png" width="800" alt="TiM Illustration">
</p>
## Model Zoo

A single TiM model can perform any-step generation (one-step, few-step, and multi-step), with quality improving monotonically as the sampling budget increases; a sampling sketch follows the tables below. NFE denotes the number of function evaluations used at sampling time.
### Text-to-Image Generation

| Model   | Model Size | VAE | 1-NFE GenEval | 8-NFE GenEval | 128-NFE GenEval |
|---------|------------|-----|---------------|---------------|-----------------|
| TiM-T2I | 865M | [DC-AE](https://huggingface.co/mit-han-lab/dc-ae-f32c32-sana-1.1-diffusers) | 0.67 | 0.76 | 0.83 |
### Class-guided Image Generation

| Model       | Model Size | VAE | 2-NFE FID | 500-NFE FID |
|-------------|------------|-----|-----------|-------------|
| TiM-C2I-256 | 664M | [SD-VAE](https://huggingface.co/stabilityai/sd-vae-ft-ema) | 6.14 | 1.65 |
| TiM-C2I-512 | 664M | [DC-AE](https://huggingface.co/mit-han-lab/dc-ae-f32c32-sana-1.1-diffusers) | 4.79 | 1.69 |
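As referenced above, here is a minimal sketch of what any-step sampling looks like with a single transition model: the same weights serve every NFE budget by chaining transitions over a coarser or finer time partition. The `model(x, t, s)` call signature is an assumption for illustration, not the repository's actual API:

```python
# Any-step sampling sketch: one trained model covers every NFE budget.
# ASSUMPTION: model(x, t, s) maps the state at time t to the state at time s.
import torch

def sample(model, shape, nfe: int, device: str = "cuda") -> torch.Tensor:
    x = torch.randn(shape, device=device)        # x at t = 1: pure noise
    times = torch.linspace(1.0, 0.0, nfe + 1)    # 1 = t_0 > t_1 > ... > t_n = 0
    for t, s in zip(times[:-1], times[1:]):
        x = model(x, t, s)                       # one transition x_t -> x_s
    return x

# One leap or fine-grained refinement with the same model:
# latents_fast = sample(tim, (1, 32, 32, 32), nfe=1)
# latents_best = sample(tim, (1, 32, 32, 32), nfe=128)
```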
## Citation

If you find this project useful, please kindly cite:

```bibtex
@article{wang2025transition,
  title={Transition Models: Rethinking the Generative Learning Objective},
  author={Wang, Zidong and Zhang, Yiyuan and Yue, Xiaoyu and Yue, Xiangyu and Li, Yangguang and Ouyang, Wanli and Bai, Lei},
  year={2025},
  eprint={2509.04394},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
## License

This project is licensed under the Apache-2.0 license.