To access this dataset, you must agree to share your contact information and accept the conditions below. The repository is publicly accessible, but its files and content are gated behind these conditions.
The Intelligent Interaction Group grants explicit consent for use of this dataset in academic research. The rights to the annotations of the MER dataset belong to the Intelligent Interaction Group. No legal claims of any kind can be derived from accepting and using the database. The Intelligent Interaction Group is not liable for any damage resulting from receiving or using the database or any other files it provides. The licensee may not hand over the database, or any files containing information derived from it (such as labelling files), to third parties, nor modify the database, without the express written consent of the Intelligent Interaction Group.
🧠 MER2023: Multimodal Emotion Recognition Challenge
🎯 Introduction
Multimodal emotion recognition has become a vital research area due to its widespread applications in human-computer interaction. With the rise of deep learning, the field has made significant progress over recent decades. However, several challenges still hinder its deployment in real-world scenarios:
- 🧪 Labeling is costly: Annotating large-scale datasets is labor-intensive and expensive.
- 📶 Modality degradation: In real environments, background noise, poor lighting, or network-induced blur can severely degrade input modalities.
To tackle these challenges and promote robust, scalable research, we organized the MER 2023 Challenge. This competition encourages the development of innovative and practical multimodal emotion recognition technologies.
🏁 Tracks
🔹 Track 1: Multi-label Learning (MER-MULTI)
Predict both discrete and dimensional emotions from multimodal inputs. We encourage methods that model the relationships between labels.
🔗 Related work: Wang et al., 2022 [1]
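A minimal sketch of the joint prediction this track asks for: a shared multimodal feature vector feeding a discrete-emotion classifier and a valence regressor. The label set and feature dimensions below are assumptions for illustration; the released annotations define the actual label space.

```python
import numpy as np

# Assumed discrete label set for illustration; see the official annotations.
EMOTIONS = ["neutral", "angry", "happy", "sad", "worried", "surprise"]

def joint_emotion_head(features, w_cls, w_reg):
    """Map shared multimodal features to (discrete probabilities, valence).

    features: (d,)   fused feature vector
    w_cls:    (d, 6) classification weights
    w_reg:    (d,)   valence regression weights
    """
    logits = features @ w_cls
    logits = logits - logits.max()              # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    valence = float(features @ w_reg)           # dimensional emotion score
    return probs, valence

# Toy usage with random weights
rng = np.random.default_rng(0)
probs, valence = joint_emotion_head(rng.normal(size=16),
                                    rng.normal(size=(16, 6)),
                                    rng.normal(size=16))
print(EMOTIONS[int(probs.argmax())], round(valence, 3))
```

In practice the two heads share a fused encoder and are trained jointly, so correlations between the discrete labels and the valence scale can be exploited.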
🔹 Track 2: Modality Robustness (MER-NOISE)
Evaluate your system’s robustness to corrupted inputs, including noisy audio and blurred visuals.
🔗 Related work: Hazarika et al., 2022 [2]; Zhang et al., 2022 [3]; Lian et al., 2023 [4]
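For local robustness testing, corruptions like the ones this track evaluates can be simulated before submission. A sketch (not the official corruption pipeline) that mixes Gaussian noise into an audio signal at a target SNR and box-blurs a grayscale video frame:

```python
import numpy as np

def add_noise_at_snr(audio, snr_db, rng=None):
    """Mix Gaussian noise into a 1-D audio signal at a target SNR in dB."""
    rng = rng or np.random.default_rng()
    signal_power = np.mean(audio ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(scale=np.sqrt(noise_power), size=audio.shape)
    return audio + noise

def box_blur(frame, k=5):
    """Blur a 2-D grayscale frame with a k x k box filter (edge-padded)."""
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")
    out = np.zeros_like(frame, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    return out / (k * k)
```

Sweeping the SNR and kernel size gives a quick picture of how gracefully a model degrades as each modality is corrupted.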
🔹 Track 3: Semi-Supervised Learning (MER-SEMI)
Leverage unlabeled video samples with semi-supervised methods such as masked autoencoders.
🔗 Related work: He et al., 2022 [5]
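The masked-autoencoder idea cited above can be illustrated by its masking step alone: hide most of the input and train the model to reconstruct it from the visible remainder. A sketch, assuming inputs are already split into flattened patches and using the 75% mask ratio from [5]:

```python
import numpy as np

def mask_patches(patches, mask_ratio=0.75, rng=None):
    """Randomly mask a fraction of patches, as in a masked autoencoder.

    patches: (n, d) array of flattened patches
    Returns (visible_patches, mask), where mask[i] is True for hidden patches.
    """
    rng = rng or np.random.default_rng()
    n = patches.shape[0]
    n_masked = int(n * mask_ratio)
    mask = np.zeros(n, dtype=bool)
    mask[rng.choice(n, size=n_masked, replace=False)] = True
    return patches[~mask], mask
```

An encoder pretrained this way on the large unlabeled `MER-SEMI` pool can then be fine-tuned on the small labeled partitions.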
📦 Dataset Overview
The MER2023 dataset extends the CHEAVD dataset and introduces automatic unlabeled data collection and refined sample filtering for better reliability.
- Reliable samples are split into `Train&Val`, `MER-MULTI`, and `MER-NOISE`.
- Unreliable and unlabeled samples form the `MER-SEMI` set.
📊 Dataset Statistics
| Partition | # Labeled Samples | # Unlabeled Samples | Duration (hh:mm:ss) |
|---|---|---|---|
| Train&Val | 3,373 | 0 | 03:45:47 |
| MER-MULTI | 411 | 0 | 00:28:09 |
| MER-NOISE | 412 | 0 | 00:26:23 |
| MER-SEMI | 834 | 73,148 | 67:41:24 |
🗓️ Schedule
- 📂 April 30, 2023 – Data & baseline released
- 📦 July 1, 2023 – Evaluation datasets released
- 🧪 July 6, 2023 – Results submission deadline
- 📝 July 14, 2023 – Paper submission deadline
- ✅ July 30, 2023 – Notification of acceptance
- 🖋️ August 6, 2023 – Camera-ready submission
🕛 All deadlines follow 23:59 Anywhere on Earth (AoE).
📚 References
[1] Wang et al. (2022). Multi-label GCN for dynamic facial expression recognition.
[2] Hazarika et al. (2022). Modality robustness in sentiment analysis.
[3] Zhang et al. (2022). Deep Partial Multi-view Learning. IEEE TPAMI.
[4] Lian et al. (2023). Graph Completion Network. IEEE TPAMI.
[5] He et al. (2022). Masked Autoencoders. CVPR.
🔐 Decryption Password (Visible After Approval)
⚠️ The dataset files are compressed and protected with a password.
After your access request has been approved, the password will be provided in the file `README_AFTER_APPROVAL.md`.
This file also contains an alternative Baidu Netdisk download link for your convenience.
📫 Contact
For questions or collaboration, feel free to reach out to the organizers via email or raise an issue in this repository.
© 2023 Intelligent Interaction Group. All rights reserved.
Licensed under CC BY-NC 4.0