---
license: cc-by-4.0
---

# V2M Dataset: A Large-Scale Video-to-Music Dataset 🎶

**The V2M dataset is proposed in the [VidMuse project](https://vidmuse.github.io/), aimed at advancing research in video-to-music generation.**

## ✨ Dataset Overview

The V2M dataset comprises 360K video–music pairs spanning diverse video types, including movie trailers, advertisements, and documentaries. It offers researchers a rich resource for exploring the relationship between video content and its accompanying music.

## 🛠️ Usage Instructions

- Download the dataset:

```bash
git clone https://huggingface.co/datasets/Zeyue7/V2M
```

- Dataset structure:

```
V2M/
├── V2M.txt
├── V2M-20k.txt
└── V2M-bench.txt
```

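The split files can be read with a few lines of Python. The sketch below assumes each `.txt` file lists one entry (e.g., a video ID or URL) per line — this is an assumption; check the files themselves for the exact format:

```python
from pathlib import Path


def load_split(path):
    """Read a V2M split file, returning one entry per non-empty line.

    Assumes a plain-text, one-entry-per-line format; adjust the parsing
    if the actual files differ.
    """
    return [
        line.strip()
        for line in Path(path).read_text().splitlines()
        if line.strip()
    ]


# Example (after cloning the repo):
# entries = load_split("V2M/V2M-20k.txt")
# print(len(entries), entries[:3])
```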
## 🎯 Citation

If you use the V2M dataset in your research, please consider citing:

```bibtex
@article{tian2024vidmuse,
  title={VidMuse: A simple video-to-music generation framework with long-short-term modeling},
  author={Tian, Zeyue and Liu, Zhaoyang and Yuan, Ruibin and Pan, Jiahao and Liu, Qifeng and Tan, Xu and Chen, Qifeng and Xue, Wei and Guo, Yike},
  journal={arXiv preprint arXiv:2406.04321},
  year={2024}
}
```