---
task_categories:
- question-answering
language:
- en
size_categories:
- 1K<n<10K
---
🏠 Project Page | 📄 arXiv Paper | 📦 Dataset | 🏅 Leaderboard
**StreamingBench** evaluates **Multimodal Large Language Models (MLLMs)** on real-time, streaming video understanding tasks. 🌟

## 🎞️ Overview

As MLLMs continue to advance, they remain largely focused on offline video comprehension, where all frames are pre-loaded before queries are made. This is far from the human ability to process and respond to video streams in real time, capturing the dynamic nature of multimedia content. To bridge this gap, **StreamingBench** introduces the first comprehensive benchmark for streaming video understanding in MLLMs.

### Key Evaluation Aspects

- 🎯 **Real-time Visual Understanding**: Can the model process and respond to visual changes in real time?
- 🔊 **Omni-source Understanding**: Does the model integrate visual and audio inputs synchronously in real-time video streams?
- 🎬 **Contextual Understanding**: Can the model comprehend the broader context within video streams?

### Dataset Statistics

- 📊 **900** diverse videos
- 📝 **4,500** human-annotated QA pairs
- ⏱️ Five questions per video, each posed at a different timestamp

#### 🎬 Video Categories
#### πŸ” Task Taxonomy
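The dataset statistics above (900 videos, five human-annotated questions per video at different timestamps) suggest a natural per-video grouping of QA pairs. The sketch below illustrates one way to organize such records; the field names (`video_id`, `timestamp`, `question`, `answer`) and the sample data are illustrative assumptions, not the dataset's actual schema.

```python
# Illustrative sketch: grouping StreamingBench-style QA records by video
# and ordering them by query timestamp. Field names are assumptions.
from collections import defaultdict

records = [
    {"video_id": "v001", "timestamp": 12.0, "question": "What just appeared?", "answer": "A"},
    {"video_id": "v001", "timestamp": 45.5, "question": "What changed?", "answer": "C"},
    {"video_id": "v002", "timestamp": 8.0, "question": "Who is speaking?", "answer": "B"},
]

def group_by_video(recs):
    """Group QA pairs by video id and sort each group by query timestamp."""
    grouped = defaultdict(list)
    for r in recs:
        grouped[r["video_id"]].append(r)
    for qa in grouped.values():
        qa.sort(key=lambda r: r["timestamp"])
    return dict(grouped)

grouped = group_by_video(records)
print(len(grouped["v001"]))  # → 2 QA pairs for video v001
```

In the real benchmark each video would carry five such entries, with questions only answerable from frames seen up to the query timestamp.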
## πŸ”¬ Experimental Results ### Performance of Various MLLMs on StreamingBench - All Context
- 60 seconds of context preceding the query time
- Comparison of Main Experiment vs. 60 Seconds of Video Context
### Performance of Different MLLMs on the Proactive Output Task

*"≤ x s" means that the answer is considered correct if the actual output time is within x seconds of the ground truth.*
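The "within x seconds" scoring rule for the proactive output task can be sketched as a simple timing check. This is a minimal illustration of the rule as stated above; the function name and signature are our own, not the benchmark's evaluation code.

```python
# Hedged sketch of the proactive-output scoring rule: an output counts as
# correct when its actual emission time is within x seconds of the
# ground-truth time.
def within_window(output_time_s: float, ground_truth_s: float, x: float) -> bool:
    """Return True if the output time falls within x seconds of ground truth."""
    return abs(output_time_s - ground_truth_s) <= x

print(within_window(31.2, 30.0, 2.0))  # → True: 1.2 s off, inside a 2 s window
print(within_window(35.0, 30.0, 2.0))  # → False: 5 s off, outside the window
```

Reporting accuracy at several thresholds (e.g. x = 1, 2, 3) then shows how tightly a model's proactive outputs track the ground-truth moment.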
## πŸ“ Citation ```bibtex @article{lin2024streaming, title={StreamingBench: Assessing the Gap for MLLMs to Achieve Streaming Video Understanding}, author={Junming Lin and Zheng Fang and Chi Chen and Zihao Wan and Fuwen Luo and Peng Li and Yang Liu and Maosong Sun}, journal={arXiv preprint arXiv:2411.03628}, year={2024} } ``` https://arxiv.org/abs/2411.03628