## Model Overview
- This model is jointly finetuned with [DMD](https://arxiv.org/pdf/2405.14867) and [VSA](https://arxiv.org/pdf/2505.13389), based on [Wan-AI/Wan2.1-T2V-1.3B-Diffusers](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B-Diffusers).
- It was trained on 8 nodes (64 H200 GPUs in total) with a batch size of 64. An example Slurm script is available [here](https://github.com/hao-ai-lab/FastVideo/blob/main/examples/distill/Wan-Syn-480P/distill_dmd_VSA_t2v.slurm).
- It supports 3-step inference and achieves up to **20 FPS** on a single **H100** GPU.
- Both [finetuning](https://github.com/hao-ai-lab/FastVideo/blob/main/scripts/distill/v1_distill_dmd_wan_VSA.sh) and [inference](https://github.com/hao-ai-lab/FastVideo/blob/main/scripts/inference/v1_inference_wan_dmd.sh) scripts are available in the [FastVideo](https://github.com/hao-ai-lab/FastVideo) repository.
- Try it out with **FastVideo**: we support a wide range of GPUs, from **H100** to **4090**, and even support **Mac** users!
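
For intuition on what "3-step inference" means: a DMD-distilled model denoises at only a few timesteps instead of the teacher's long sampling schedule. The sketch below is purely illustrative, not FastVideo's actual scheduler; the function name and the 1000-step training range are assumptions for the example.

```python
def few_step_timesteps(num_inference_steps: int, train_steps: int = 1000) -> list[int]:
    """Evenly spaced, descending timesteps for a few-step distilled sampler.

    Illustrative only: the real DMD sampling schedule is defined in
    FastVideo's inference code, not here.
    """
    stride = train_steps // num_inference_steps
    return [train_steps - 1 - i * stride for i in range(num_inference_steps)]

# A 3-step distilled model runs the denoiser just three times,
# versus dozens of steps for a standard diffusion sampler.
print(few_step_timesteps(3))  # → [999, 666, 333]
```

Fewer denoiser calls is what makes the ~20 FPS throughput on a single H100 possible; see the inference script linked above for the exact, supported invocation.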