---
dataset_info:
- config_name: train
  features:
  - name: video_path
    dtype: string
  - name: internal_id
    dtype: string
  - name: prompt
    dtype: string
  - name: url
    dtype: string
  - name: annotation
    struct:
    - name: alignment
      dtype: int64
      range: [1,5]
    - name: composition
      dtype: int64
      range: [1,3]
    - name: focus
      dtype: int64
      range: [1,3]
    - name: camera movement
      dtype: int64
      range: [1,3]
    - name: color
      dtype: int64
      range: [1,5]
    - name: lighting accurate
      dtype: int64
      range: [1,4]
    - name: lighting aes
      dtype: int64
      range: [1,5]
    - name: shape at beginning
      dtype: int64
      range: [0,3]
    - name: shape throughout
      dtype: int64
      range: [0,4]
    - name: object motion dynamic
      dtype: int64
      range: [1,5]
    - name: camera motion dynamic
      dtype: int64
      range: [1,5]
    - name: movement smoothness
      dtype: int64
      range: [0,4]
    - name: movement reality
      dtype: int64
      range: [0,4]
    - name: clear
      dtype: int64
      range: [1,5]
    - name: image quality stability
      dtype: int64
      range: [1,5]
    - name: camera stability
      dtype: int64
      range: [1,3]
    - name: detail refinement
      dtype: int64
      range: [1,5]
    - name: letters
      dtype: int64
      range: [1,4]
    - name: physics law
      dtype: int64
      range: [1,5]
    - name: unsafe type
      dtype: int64
      range: [1,5]
    - name: safety
      dtype: int64
      range: [1,5]
  - name: meta_result
    sequence:
      dtype: int64
  - name: meta_mask
    sequence:
      dtype: int64
  splits:
  - name: train
    num_examples: 40743
- config_name: regression
  features:
  - name: internal_id
    dtype: string
  - name: prompt
    dtype: string
  - name: standard_answer
    dtype: string
  - name: video1_path
    dtype: string
  - name: video2_path
    dtype: string
  splits:
  - name: regression
    num_examples: 1795
- config_name: monetbench
  features:
  - name: internal_id
    dtype: string
  - name: prompt
    dtype: string
  - name: standard_answer
    dtype: string
  - name: video1_path
    dtype: string
  - name: video2_path
    dtype: string
  splits:
  - name: monetbench
    num_examples: 1000
configs:
- config_name: train
  data_files:
  - split: train
    path: train/*.parquet
- config_name: regression
  data_files:
  - split: regression
    path: regression/*.parquet
- config_name: monetbench
  data_files:
  - split: monetbench
    path: monetbench/*.parquet
license: apache-2.0
---
# VisionRewardDB-Video
This dataset is a comprehensive collection of video evaluation data designed for multi-dimensional quality assessment of AI-generated videos. It encompasses annotations across 21 diverse aspects, including text-to-video consistency, aesthetic quality, motion dynamics, physical realism, and technical specifications. 🌟✨
[**Github Repository**](https://github.com/THUDM/VisionReward) 🔗
The dataset is structured to facilitate both model training and standardized evaluation:
- `Train`: A primary training set with detailed multi-dimensional annotations
- `Regression`: A regression set with paired preference data
- `MonetBench`: A benchmark test set for standardized performance evaluation
This holistic approach enables the development and validation of sophisticated video quality assessment models that can evaluate AI-generated videos across multiple critical dimensions, moving beyond simple aesthetic judgments to encompass technical accuracy, semantic consistency, and dynamic performance.
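As a minimal loading sketch (the `THUDM/VisionRewardDB-Video` repo id below is an assumption; substitute the actual Hub path of this repository if it differs):
```python
from datasets import load_dataset

# Repo id is assumed; replace it with the actual Hub path of this dataset.
REPO = "THUDM/VisionRewardDB-Video"

train = load_dataset(REPO, "train", split="train")
regression = load_dataset(REPO, "regression", split="regression")
monetbench = load_dataset(REPO, "monetbench", split="monetbench")

print(train[0]["prompt"])       # text prompt of the first training video
print(train[0]["annotation"])   # dict of the 21 dimension scores described below
```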
## Annotation Details
Each video in the dataset is annotated with the following attributes:
<table border="1" style="border-collapse: collapse; width: 100%;">
<tr>
<th style="padding: 8px; width: 30%;">Dimension</th>
<th style="padding: 8px; width: 70%;">Attributes</th>
</tr>
<tr>
<td style="padding: 8px;">Alignment</td>
<td style="padding: 8px;">Alignment</td>
</tr>
<tr>
<td style="padding: 8px;">Composition</td>
<td style="padding: 8px;">Composition</td>
</tr>
<tr>
<td style="padding: 8px;">Quality</td>
<td style="padding: 8px;">Color; Lighting Accurate; Lighting Aes; Clear</td>
</tr>
<tr>
<td style="padding: 8px;">Fidelity</td>
<td style="padding: 8px;">Detail Refinement; Movement Reality; Letters</td>
</tr>
<tr>
<td style="padding: 8px;">Safety</td>
<td style="padding: 8px;">Safety</td>
</tr>
<tr>
<td style="padding: 8px;">Stability</td>
<td style="padding: 8px;">Movement Smoothness; Image Quality Stability; Focus; Camera Movement; Camera Stability</td>
</tr>
<tr>
<td style="padding: 8px;">Preservation</td>
<td style="padding: 8px;">Shape at Beginning; Shape throughout</td>
</tr>
<tr>
<td style="padding: 8px;">Dynamic</td>
<td style="padding: 8px;">Object Motion Dynamic; Camera Motion Dynamic</td>
</tr>
<tr>
<td style="padding: 8px;">Physics</td>
<td style="padding: 8px;">Physics Law</td>
</tr>
</table>
### Example: Camera Stability
- **3:** Very stable
- **2:** Slight shake
- **1:** Heavy shake
- Note: When annotations are missing, the corresponding value will be set to **-1**.
For more detailed annotation guidelines (such as the meanings of the different scores and the annotation rules), please refer to:
- [annotation_details](https://flame-spaghetti-eb9.notion.site/VisioinReward-Video-Annotation-Details-196a0162280e8077b1acef109b3810ff)
- [annotation_details_ch](https://flame-spaghetti-eb9.notion.site/VisionReward-Video-196a0162280e80e7806af42fc5808c99)
## Additional Feature Details
The dataset includes three special features: `annotation`, `meta_result`, and `meta_mask`.
### Annotation
The `annotation` feature contains scores across 21 different dimensions of video assessment, with each dimension having its own scoring criteria as detailed above.
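For illustration, a minimal sketch (reusing the `train` split loaded above) that keeps only the dimensions that were actually annotated:
```python
record = train[0]  # one training example, loaded as in the sketch above

# Missing annotations are stored as -1, so filter them out before use.
scores = {dim: score for dim, score in record["annotation"].items() if score != -1}
print(scores)  # e.g. {'alignment': 4, 'composition': 2, ...}
```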
### Meta Result
The `meta_result` feature transforms multi-choice questions into a series of binary judgments. For example, for the `Camera Stability` dimension:

| Score | Is the camera very stable? | Is the camera not unstable? |
|-------|--------------------------|---------------------------|
| 3 | 1 | 1 |
| 2 | 0 | 1 |
| 1 | 0 | 0 |
- Note: When a `meta_result` entry is -1 (indicating a missing annotation), the corresponding binary judgment should be excluded from consideration.

Each element in the binary array represents a yes/no answer to a specific aspect of the assessment. For the detailed questions corresponding to these binary judgments, please refer to the `meta_qa_en.txt` file.
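The mapping can be reproduced with a small helper. The sketch below hard-codes only the Camera Stability row from the table above; the full question list and ordering live in `meta_qa_en.txt`.
```python
def camera_stability_to_binary(score: int) -> list[int]:
    """Expand a Camera Stability score (1-3) into its two yes/no judgments.

    A missing annotation (score == -1) yields [-1, -1], which downstream code
    should exclude from consideration.
    """
    if score == -1:
        return [-1, -1]
    return [
        1 if score >= 3 else 0,  # "Is the camera very stable?"
        1 if score >= 2 else 0,  # "Is the camera not unstable?"
    ]

for s in (3, 2, 1):
    print(s, camera_stability_to_binary(s))
# 3 [1, 1]
# 2 [0, 1]
# 1 [0, 0]
```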
### Meta Mask
The `meta_mask` feature is used for balanced sampling during model training:
- Elements with value 1 indicate that the corresponding binary judgment was used in training
- Elements with value 0 indicate that the corresponding binary judgment was ignored during training
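A minimal sketch of how the two sequences might be combined when picking training targets (reusing `record` from the annotation sketch above; this is an assumption based on the description here, not the reference training code):
```python
# Keep only judgments that are both selected by the mask and actually annotated.
selected = [
    (i, result)
    for i, (result, mask) in enumerate(zip(record["meta_result"], record["meta_mask"]))
    if mask == 1 and result != -1
]
print(selected)  # [(question_index, 0_or_1), ...]
```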
## Data Processing
```bash
cd videos
tar -xvzf train.tar.gz
tar -xvzf regression.tar.gz
tar -xvzf monetbench.tar.gz
```
We provide `extract.py` for processing the `train` dataset into JSONL format. The script can optionally extract the balanced positive/negative QA pairs used in VisionReward training by processing `meta_result` and `meta_mask` fields.
```bash
python extract.py
```
## Citation Information
```
@misc{xu2024visionrewardfinegrainedmultidimensionalhuman,
  title={VisionReward: Fine-Grained Multi-Dimensional Human Preference Learning for Image and Video Generation},
  author={Jiazheng Xu and Yu Huang and Jiale Cheng and Yuanming Yang and Jiajun Xu and Yuan Wang and Wenbo Duan and Shen Yang and Qunlin Jin and Shurun Li and Jiayan Teng and Zhuoyi Yang and Wendi Zheng and Xiao Liu and Ming Ding and Xiaohan Zhang and Xiaotao Gu and Shiyu Huang and Minlie Huang and Jie Tang and Yuxiao Dong},
  year={2024},
  eprint={2412.21059},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2412.21059},
}
```