---
license: mit
---
<p align="center">
<img src="assets/logo.png" height=100>
</p>
<div align="center">
<a href="https://yuewen.cn/videos"><img src="https://img.shields.io/static/v1?label=Step-Video&message=Web&color=green"></a> &ensp;
<a href="https://arxiv.org/abs/2502.10248"><img src="https://img.shields.io/static/v1?label=Tech Report&message=Arxiv&color=red"></a> &ensp;
<a href="https://x.com/StepFun_ai"><img src="https://img.shields.io/static/v1?label=X.com&message=Web&color=blue"></a> &ensp;
</div>

<div align="center">
<a href="https://huggingface.co/stepfun-ai/stepvideo-t2v"><img src="https://img.shields.io/static/v1?label=Step-Video-T2V&message=HuggingFace&color=yellow"></a> &ensp;
<a href="https://huggingface.co/stepfun-ai/stepvideo-t2v-turbo"><img src="https://img.shields.io/static/v1?label=Step-Video-T2V-Turbo&message=HuggingFace&color=yellow"></a> &ensp;
</div>

## 🔥🔥🔥 News!!
* Feb 17, 2025: 👋 We release the inference code and model weights of Step-Video-T2V. [Download](https://huggingface.co/stepfun-ai/stepvideo-t2v)
* Feb 17, 2025: 👋 We release the inference code and model weights of Step-Video-T2V-Turbo. [Download](https://huggingface.co/stepfun-ai/stepvideo-t2v-turbo)
* Feb 17, 2025: 🎉 We release our technical report as open source. [Read](https://arxiv.org/abs/2502.10248)

## Video Demos

<table border="0" style="width: 100%; text-align: center; margin-top: 1px;">
<tr>
<td><video src="https://github.com/user-attachments/assets/9274b351-595d-41fb-aba3-f58e6e91603a" width="100%" controls autoplay loop muted></video></td>
<td><video src="https://github.com/user-attachments/assets/2f6b3ad5-e93b-436b-98bc-4701182d8652" width="100%" controls autoplay loop muted></video></td>
<td><video src="https://github.com/user-attachments/assets/67d20ee7-ad78-4b8f-80f6-3fdb00fb52d8" width="100%" controls autoplay loop muted></video></td>
</tr>
<tr>
<td><video src="https://github.com/user-attachments/assets/9abce409-105d-4a8a-ad13-104a98cc8a0b" width="100%" controls autoplay loop muted></video></td>
<td><video src="https://github.com/user-attachments/assets/8d1e1a47-048a-49ce-85f6-9d013f2d8e89" width="100%" controls autoplay loop muted></video></td>
<td><video src="https://github.com/user-attachments/assets/32cf4bd1-ec1f-4f77-a488-cd0284aa81bb" width="100%" controls autoplay loop muted></video></td>
</tr>
<tr>
<td><video src="https://github.com/user-attachments/assets/f95a7a49-032a-44ea-a10f-553d4e5d21c6" width="100%" controls autoplay loop muted></video></td>
<td><video src="https://github.com/user-attachments/assets/3534072e-87d9-4128-a87f-28fcb5d951e0" width="100%" controls autoplay loop muted></video></td>
<td><video src="https://github.com/user-attachments/assets/6d893dad-556d-4527-a882-666cba3d10e9" width="100%" controls autoplay loop muted></video></td>
</tr>
</table>

## Table of Contents

1. [Introduction](#1-introduction)
2. [Model Summary](#2-model-summary)
3. [Model Download](#3-model-download)
4. [Model Usage](#4-model-usage)
5. [Benchmark](#5-benchmark)
6. [Online Engine](#6-online-engine)
7. [Citation](#7-citation)
8. [Acknowledgement](#8-acknowledgement)

## 1. Introduction
We present **Step-Video-T2V**, a state-of-the-art (SoTA) text-to-video pre-trained model with 30 billion parameters and the capability to generate videos up to 204 frames. To enhance both training and inference efficiency, we propose a deep compression VAE for videos, achieving 16x16 spatial and 8x temporal compression ratios. Direct Preference Optimization (DPO) is applied in the final stage to further enhance the visual quality of the generated videos. Step-Video-T2V's performance is evaluated on a novel video generation benchmark, **Step-Video-T2V-Eval**, demonstrating its SoTA text-to-video quality compared to both open-source and commercial engines.

## 2. Model Summary
In Step-Video-T2V, videos are represented by a high-compression Video-VAE, achieving 16x16 spatial and 8x temporal compression ratios. User prompts are encoded using two bilingual pre-trained text encoders to handle both English and Chinese. A DiT with 3D full attention is trained using Flow Matching and is employed to denoise input noise into latent frames, with text embeddings and timesteps serving as conditioning factors. To further enhance the visual quality of the generated videos, a video-based DPO approach is applied, which effectively reduces artifacts and ensures smoother, more realistic video outputs.

<p align="center">
<img width="80%" src="assets/model_architecture.png">
</p>

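To make the sampling side of this pipeline concrete, the sketch below integrates a stubbed velocity model from Gaussian noise toward latent frames with a plain Euler solver, mirroring the Flow-Matching denoising described above. The `DummyDiT` module, latent shape, channel count, and step count are illustrative assumptions, not the actual Step-Video-T2V implementation; in the real pipeline the DiT is conditioned on text embeddings and timesteps, and the resulting latents are decoded by the Video-VAE.

```python
# Minimal sketch of Flow-Matching sampling in latent space (illustrative only).
# `DummyDiT`, the latent shape, and the channel count are assumptions for this example.
import torch
import torch.nn as nn


class DummyDiT(nn.Module):
    """Stand-in for the 3D full-attention DiT that predicts a velocity field."""

    def __init__(self, channels: int = 16):
        super().__init__()
        self.proj = nn.Conv3d(channels, channels, kernel_size=1)

    def forward(self, latents, t, text_emb):
        # The real model conditions on text embeddings and the timestep (AdaLN-Single).
        return self.proj(latents)


@torch.no_grad()
def sample_latents(model, text_emb, steps: int = 50, shape=(1, 16, 26, 34, 62)):
    """Integrate from pure noise (t=1) toward latent frames (t=0) with Euler steps."""
    latents = torch.randn(shape)                 # [B, C, F/8, H/16, W/16] latent video
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((shape[0],), 1.0 - i * dt)
        velocity = model(latents, t, text_emb)   # predicted flow at the current time
        latents = latents - dt * velocity        # Euler update along the learned flow
    return latents                               # decode with the Video-VAE afterwards


latents = sample_latents(DummyDiT(), text_emb=None)
print(latents.shape)
```
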
### 2.1. Video-VAE
A deep compression video Variational Autoencoder (Video-VAE) is designed for video generation tasks, achieving 16x16 spatial and 8x temporal compression ratios while maintaining exceptional video reconstruction quality. This compression not only accelerates training and inference but also aligns with the diffusion process's preference for condensed representations.

<p align="center">
<img width="70%" src="assets/dcvae.png">
</p>

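As a rough illustration of what these ratios mean in practice, the back-of-the-envelope calculation below applies the 16x16 spatial and 8x temporal compression to the 544px992px204f setting from Section 4.1. Exact rounding, padding, and the latent channel count are VAE implementation details and are not assumed here.

```python
# Back-of-the-envelope effect of 16x16 spatial and 8x temporal compression
# on the 544x992, 204-frame setting (rounding/padding details omitted).
frames, height, width = 204, 544, 992

latent_f = frames // 8    # 8x temporal compression  -> 25
latent_h = height // 16   # 16x spatial compression  -> 34
latent_w = width // 16    # 16x spatial compression  -> 62

pixels = frames * height * width
latent_cells = latent_f * latent_h * latent_w
print(f"latent grid: {latent_f} x {latent_h} x {latent_w}")
print(f"reduction:   ~{pixels / latent_cells:.0f}x fewer spatio-temporal positions")
```
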
### 2.2. DiT w/ 3D Full Attention
Step-Video-T2V is built on the DiT architecture, which has 48 layers, each containing 48 attention heads, with each head’s dimension set to 128. AdaLN-Single is leveraged to incorporate the timestep condition, while QK-Norm in the self-attention mechanism is introduced to ensure training stability. Additionally, 3D RoPE is employed, playing a critical role in handling sequences of varying video lengths and resolutions.

<p align="center">
<img width="80%" src="assets/dit.png">
</p>

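For reference, the reported layer, head, and head-dimension figures imply a model width of 48 × 128 = 6144. The dataclass below simply records those figures; it is an illustrative summary, not the project's actual configuration class.

```python
# Illustrative summary of the reported DiT configuration (not the actual config class).
from dataclasses import dataclass


@dataclass(frozen=True)
class DiTConfig:
    num_layers: int = 48   # transformer blocks with 3D full attention
    num_heads: int = 48    # attention heads per block
    head_dim: int = 128    # dimension of each head

    @property
    def hidden_size(self) -> int:
        # Model width implied by the head layout: 48 * 128 = 6144.
        return self.num_heads * self.head_dim


print(DiTConfig().hidden_size)  # 6144
```
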
### 2.3. Video-DPO
In Step-Video-T2V, we incorporate human feedback through Direct Preference Optimization (DPO) to further enhance the visual quality of the generated videos. DPO leverages human preference data to fine-tune the model, ensuring that the generated content aligns more closely with human expectations. The overall DPO pipeline is shown below, highlighting its critical role in improving both the consistency and quality of the video generation process.

<p align="center">
<img width="100%" src="assets/dpo_pipeline.png">
</p>

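The snippet below sketches the general shape of a DPO-style preference objective for a denoising model, in the spirit of Diffusion-DPO: the policy is rewarded for fitting the human-preferred sample better than the rejected one, relative to a frozen reference model. The function name, the `beta` value, and the use of per-sample denoising errors are assumptions for illustration; this is not the exact loss used to train Step-Video-T2V.

```python
# Sketch of a DPO-style preference loss for a denoising model (illustrative only).
# `beta` and the per-sample "denoising error" inputs are assumptions, not the
# actual Step-Video-T2V training objective.
import torch
import torch.nn.functional as F


def preference_loss(err_win_policy, err_win_ref, err_lose_policy, err_lose_ref, beta=1.0):
    """Each argument is a per-sample denoising error (lower = better fit)."""
    policy_margin = err_win_policy - err_lose_policy   # negative when the policy prefers the winner
    ref_margin = err_win_ref - err_lose_ref
    # Push the policy's margin below the reference margin for preferred samples.
    return -F.logsigmoid(-beta * (policy_margin - ref_margin)).mean()


# Toy usage with random errors for a batch of 4 preference pairs.
errs = [torch.rand(4) for _ in range(4)]
print(preference_loss(*errs))
```
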
## 3. Model Download
| Models | 🤗Huggingface | 🤖Modelscope |
|:-------:|:-------:|:-------:|
| Step-Video-T2V | [download](https://huggingface.co/stepfun-ai/stepvideo-t2v) | [download](https://www.modelscope.cn/models/stepfun-ai/stepvideo-t2v) |
| Step-Video-T2V-Turbo (Inference Step Distillation) | [download](https://huggingface.co/stepfun-ai/stepvideo-t2v-turbo) | [download](https://www.modelscope.cn/models/stepfun-ai/stepvideo-t2v-turbo) |

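If you prefer to fetch the Hugging Face weights programmatically rather than through the web UI, `huggingface_hub.snapshot_download` works as sketched below; the local directory is just an example path.

```python
# Download the Hugging Face weights into a local directory (example path).
from huggingface_hub import snapshot_download

model_dir = snapshot_download(
    repo_id="stepfun-ai/stepvideo-t2v",   # or "stepfun-ai/stepvideo-t2v-turbo"
    local_dir="./stepvideo-t2v",          # example target directory
)
print(model_dir)
```
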
## 4. Model Usage
### 📜 4.1 Requirements

The following table shows the requirements for running the Step-Video-T2V model (batch size = 1, w/o cfg distillation) to generate videos:

| Model | height/width/frame | Peak GPU Memory | 50 steps w/ flash-attn | 50 steps w/o flash-attn |
|:------------:|:------------:|:------------:|:------------:|:------------:|
| Step-Video-T2V | 544px992px204f | 77.64 GB | 743 s | 1232 s |
| Step-Video-T2V | 544px992px136f | 72.48 GB | 408 s | 605 s |

* An NVIDIA GPU with CUDA support is required.
* The model is tested on four GPUs.
* **Recommended**: We recommend using GPUs with 80 GB of memory for better generation quality.
* Tested operating system: Linux
* The self-attention in the text encoder (step_llm) only supports CUDA compute capabilities sm_80, sm_86, and sm_90; a quick check is sketched below.

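Assuming PyTorch is already installed, you can verify that your GPU meets the compute-capability requirement above with a few lines:

```python
# Check that the local GPU exposes a compute capability supported by the
# text encoder's self-attention (sm_80, sm_86, or sm_90).
import torch

assert torch.cuda.is_available(), "An NVIDIA GPU with CUDA support is required."
major, minor = torch.cuda.get_device_capability(0)
print(f"Detected compute capability: sm_{major}{minor}")
if (major, minor) not in [(8, 0), (8, 6), (9, 0)]:
    print("Warning: step_llm self-attention may not run on this GPU.")
```
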
### 🔧 4.2 Dependencies and Installation
- Python >= 3.10.0 (we recommend [Anaconda](https://www.anaconda.com/download/#linux) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html))
- [PyTorch >= 2.3-cu121](https://pytorch.org/)
- [CUDA Toolkit](https://developer.nvidia.com/cuda-downloads)
- [FFmpeg](https://www.ffmpeg.org/)
```bash
git clone https://github.com/stepfun-ai/Step-Video-T2V.git
conda create -n stepvideo python=3.10
conda activate stepvideo

cd Step-Video-T2V
pip install -e .
pip install flash-attn --no-build-isolation  ## flash-attn is optional
```

### 🚀 4.3 Inference Scripts
- We employ a decoupling strategy for the text encoder, VAE decoding, and DiT to optimize the GPU resources available to the DiT. As a result, a dedicated GPU is needed to host the API services for the text encoder's embeddings and VAE decoding.
```bash
## We assume you have more than 4 GPUs available. This command returns the URL for
## both the caption API and the VAE API; use that URL in the torchrun command below.
python api/call_remote_server.py --model_dir where_you_download_dir &

parallel=4 # or parallel=8
url='127.0.0.1'
model_dir=where_you_download_dir

## The example prompt means: "An astronaut on the Moon discovers a stone tablet
## engraved with the word 'stepfun', glittering with light."
torchrun --nproc_per_node $parallel run_parallel.py --model_dir $model_dir --vae_url $url --caption_url $url --ulysses_degree $parallel --prompt "一名宇航员在月球上发现一块石碑,上面印有“stepfun”字样,闪闪发光" --infer_steps 50 --cfg_scale 9.0 --time_shift 13.0
```

### 🚀 4.4 Best-Practice Inference Settings
Step-Video-T2V exhibits robust performance across inference settings, consistently generating high-fidelity and dynamic videos. However, our experiments reveal that variations in inference hyperparameters can have a substantial effect on the trade-off between video fidelity and dynamics. To achieve optimal results, we recommend the following best practices for tuning inference parameters:

| Models | infer_steps | cfg_scale | time_shift | num_frames |
|:-------:|:-------:|:-------:|:-------:|:-------:|
| Step-Video-T2V | 30-50 | 9.0 | 13.0 | 204 |
| Step-Video-T2V-Turbo (Inference Step Distillation) | 10-15 | 5.0 | 17.0 | 204 |

## 5. Benchmark
We are releasing [Step-Video-T2V Eval](https://github.com/stepfun-ai/Step-Video-T2V/blob/main/benchmark/Step-Video-T2V-Eval) as a new benchmark, featuring 128 Chinese prompts sourced from real users. This benchmark is designed to evaluate the quality of generated videos across 11 distinct categories: Sports, Food, Scenery, Animals, Festivals, Combination Concepts, Surreal, People, 3D Animation, Cinematography, and Style.

## 6. Online Engine
The online version of Step-Video-T2V is available on [跃问视频 (Yuewen Video)](https://yuewen.cn/videos), where you can also explore some impressive examples.

## 7. Citation
```
@misc{ma2025stepvideot2vtechnicalreportpractice,
      title={Step-Video-T2V Technical Report: The Practice, Challenges, and Future of Video Foundation Model},
      author={Guoqing Ma and Haoyang Huang and Kun Yan and Liangyu Chen and Nan Duan and Shengming Yin and Changyi Wan and Ranchen Ming and Xiaoniu Song and Xing Chen and Yu Zhou and Deshan Sun and Deyu Zhou and Jian Zhou and Kaijun Tan and Kang An and Mei Chen and Wei Ji and Qiling Wu and Wen Sun and Xin Han and Yanan Wei and Zheng Ge and Aojie Li and Bin Wang and Bizhu Huang and Bo Wang and Brian Li and Changxing Miao and Chen Xu and Chenfei Wu and Chenguang Yu and Dapeng Shi and Dingyuan Hu and Enle Liu and Gang Yu and Ge Yang and Guanzhe Huang and Gulin Yan and Haiyang Feng and Hao Nie and Haonan Jia and Hanpeng Hu and Hanqi Chen and Haolong Yan and Heng Wang and Hongcheng Guo and Huilin Xiong and Huixin Xiong and Jiahao Gong and Jianchang Wu and Jiaoren Wu and Jie Wu and Jie Yang and Jiashuai Liu and Jiashuo Li and Jingyang Zhang and Junjing Guo and Junzhe Lin and Kaixiang Li and Lei Liu and Lei Xia and Liang Zhao and Liguo Tan and Liwen Huang and Liying Shi and Ming Li and Mingliang Li and Muhua Cheng and Na Wang and Qiaohui Chen and Qinglin He and Qiuyan Liang and Quan Sun and Ran Sun and Rui Wang and Shaoliang Pang and Shiliang Yang and Sitong Liu and Siqi Liu and Shuli Gao and Tiancheng Cao and Tianyu Wang and Weipeng Ming and Wenqing He and Xu Zhao and Xuelin Zhang and Xianfang Zeng and Xiaojia Liu and Xuan Yang and Yaqi Dai and Yanbo Yu and Yang Li and Yineng Deng and Yingming Wang and Yilei Wang and Yuanwei Lu and Yu Chen and Yu Luo and Yuchu Luo and Yuhe Yin and Yuheng Feng and Yuxiang Yang and Zecheng Tang and Zekai Zhang and Zidong Yang and Binxing Jiao and Jiansheng Chen and Jing Li and Shuchang Zhou and Xiangyu Zhang and Xinhao Zhang and Yibo Zhu and Heung-Yeung Shum and Daxin Jiang},
      year={2025},
      eprint={2502.10248},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2502.10248},
}
```

## 8. Acknowledgement
- We would like to express our sincere thanks to the [xDiT](https://github.com/xdit-project/xDiT) team for their invaluable support and parallelization strategy.
- Our code will be integrated into the official repository of [Hugging Face Diffusers](https://github.com/huggingface/diffusers).