Commit 53b6cef (verified) · Parent: 78bfb7c
nebulae09 committed: Update README.md
Files changed (1): README.md (+123, −3)
---
license: cc-by-4.0
task_categories:
- visual-question-answering
language:
- en
modalities:
- Video
- Text
tags:
- video understanding
- evaluation
- large vision-language model
size_categories:
- 1K<n<10K
---
# MMBench-Video: A Long-Form Multi-Shot Benchmark for Holistic Video Understanding

- **Homepage:** [https://mmbench-video.github.io/](https://mmbench-video.github.io/)
- **Repository:** [https://www.modelscope.cn/datasets/opencompass/MMBench-Video](https://www.modelscope.cn/datasets/opencompass/MMBench-Video)
- **Paper:** [MMBench-Video: A Long-Form Multi-Shot Benchmark for Holistic Video Understanding](https://arxiv.org/abs/2406.14515)

## Table of Contents

- [MMBench-Video: A Long-Form Multi-Shot Benchmark for Holistic Video Understanding](#mmbench-video-a-long-form-multi-shot-benchmark-for-holistic-video-understanding)
- [Table of Contents](#table-of-contents)
- [Introduction](#introduction)
- [Leaderboard](#leaderboard)
- [Data](#data)
  - [How to get video data](#how-to-get-video-data)
- [Citation](#citation)
- [License](#license)

## Introduction

MMBench-Video is a quantitative benchmark designed to rigorously evaluate LVLMs' proficiency in video understanding.
MMBench-Video incorporates approximately 600 web videos with rich context from YouTube, spanning 16 major categories (including News, Sports, etc.) that cover most of the video topics people watch in their daily lives. Each video ranges in duration from 30 seconds to 6 minutes, to accommodate the evaluation of video understanding capabilities on longer videos. The benchmark includes roughly 2,000 original question-answer (QA) pairs contributed by volunteers, covering a total of 26 fine-grained capabilities. It also implements a GPT-4-based evaluation paradigm, which offers superior accuracy, consistency, and closer alignment with human judgments.

## Leaderboard

The latest leaderboard is available at our [openvlm_video_leaderboard](https://huggingface.co/spaces/opencompass/openvlm_video_leaderboard).

## Data

The dataset includes 1,998 question-answer (QA) pairs, with each QA pair assessing one or multiple capabilities of a vision-language model. Each question comes with a ground-truth answer.

Here is an example:

```
index: 177
video: DmUgQzu3Z4U
video_type: Food & Drink
question: Did the mint-style guy in the video drink his mouthwash?
answer: Yes, he drank it. This is very strange. Under normal circumstances we are not allowed to drink mouthwash, but this boy may be doing it to attract viewers.
dimensions: ['Counterfactual Reasoning']
video_path: ./video/DmUgQzu3Z4U.mp4
```

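If you want to inspect the annotations programmatically, a minimal sketch is shown below. It assumes the QA pairs are shipped as a tab-separated file named `MMBench-Video.tsv` (a hypothetical name; check the repository's file list) and that the `dimensions` column stores a stringified Python list, as in the example above.

```python
import ast

import pandas as pd

# Hypothetical annotation file name -- replace with the actual file in this repository.
ANNOTATION_FILE = 'MMBench-Video.tsv'

# Assumed to be tab-separated, one row per QA pair.
df = pd.read_csv(ANNOTATION_FILE, sep='\t')

# `dimensions` is stored as a stringified list, e.g. "['Counterfactual Reasoning']";
# parse it back into a real Python list of capability labels.
df['dimensions'] = df['dimensions'].apply(ast.literal_eval)

print(len(df), 'QA pairs loaded')
print(df.iloc[0][['video', 'video_type', 'question', 'dimensions']])
```
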
### How to get video data

Use the function below to unpack the `.pkl` files and recover the original video data.

```python
import os
import pickle


def unwrap_hf_pkl(pth, suffix='.mp4'):
    base_dir = os.path.join(pth, 'video_pkl/')
    target_dir = os.path.join(pth, 'video/')
    pickle_files = [os.path.join(base_dir, file) for file in os.listdir(base_dir)]
    pickle_files.sort()

    if not os.path.exists(target_dir):
        os.makedirs(target_dir, exist_ok=True)
        for pickle_file in pickle_files:
            with open(pickle_file, 'rb') as file:
                video_data = pickle.load(file)
            # For each video in the pickle file, write its contents to a new mp4 file
            for video_name, video_content in video_data.items():
                output_path = os.path.join(target_dir, f'{video_name}{suffix}')
                with open(output_path, 'wb') as output_file:
                    output_file.write(video_content)
            print('The video files have been restored from the pickle file.')
    else:
        print('The video directory already exists.')
```
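
The sketch below shows one way to call it; `./MMBench-Video` is a placeholder for wherever you have downloaded this repository (it should contain the `video_pkl/` directory).

```python
# Placeholder path -- point this at your local copy of this dataset repository,
# which contains the `video_pkl/` directory of pickled videos.
dataset_root = './MMBench-Video'

# Writes the restored videos to ./MMBench-Video/video/<video_id>.mp4
unwrap_hf_pkl(dataset_root)
```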

For full-dataset evaluation, you can use [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) to evaluate MMBench-Video with a single command.

```bash
python run.py --model GPT4o --data MMBench-Video --nframe 8 --verbose
```

## Citation

```
@misc{fang2024mmbenchvideolongformmultishotbenchmark,
      title={MMBench-Video: A Long-Form Multi-Shot Benchmark for Holistic Video Understanding},
      author={Xinyu Fang and Kangrui Mao and Haodong Duan and Xiangyu Zhao and Yining Li and Dahua Lin and Kai Chen},
      year={2024},
      eprint={2406.14515},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2406.14515},
}
```

If you use VLMEvalKit for model evaluation, please also cite:

```
@misc{duan2024vlmevalkitopensourcetoolkitevaluating,
      title={VLMEvalKit: An Open-Source Toolkit for Evaluating Large Multi-Modality Models},
      author={Haodong Duan and Junming Yang and Yuxuan Qiao and Xinyu Fang and Lin Chen and Yuan Liu and Amit Agarwal and Zhe Chen and Mo Li and Yubo Ma and Hailong Sun and Xiangyu Zhao and Junbo Cui and Xiaoyi Dong and Yuhang Zang and Pan Zhang and Jiaqi Wang and Dahua Lin and Kai Chen},
      year={2024},
      eprint={2407.11691},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2407.11691},
}
```

## License

The MMBench-Video dataset is licensed under a
[Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/).