Embodied Referring Expression Comprehension in Human-Robot Interaction
Abstract
A large-scale dataset (Refer360) and a guided residual module (MuRes) improve embodied referring expression comprehension in robots by addressing perspective bias and strengthening multimodal signal integration.
As robots enter human workspaces, there is a crucial need for them to comprehend embodied human instructions, enabling intuitive and fluent human-robot interaction (HRI). However, accurate comprehension is challenging due to the lack of large-scale datasets that capture natural embodied interactions in diverse HRI settings. Existing datasets suffer from perspective bias, single-view collection, inadequate coverage of nonverbal gestures, and a predominant focus on indoor environments. To address these issues, we present Refer360, a large-scale dataset of embodied verbal and nonverbal interactions collected across diverse viewpoints in both indoor and outdoor settings. Additionally, we introduce MuRes, a multimodal guided residual module designed to improve embodied referring expression comprehension. MuRes acts as an information bottleneck, extracting salient modality-specific signals and reinforcing them into pre-trained representations to form complementary features for downstream tasks. We conduct extensive experiments on four HRI datasets, including Refer360, and demonstrate that current multimodal models fail to capture embodied interactions comprehensively; however, augmenting them with MuRes consistently improves performance. These findings establish Refer360 as a valuable benchmark and highlight the potential of guided residual learning to advance embodied referring expression comprehension in robots operating within human environments.
Community
The paper introduces Refer360, a comprehensive multimodal dataset for embodied referring expression comprehension in human-robot interaction (HRI), and proposes MuRes, a lightweight guided residual module that selectively reinforces modality-specific features to improve multimodal grounding performance in real-world scenarios.
➡️ Key Highlights of the Refer360 Benchmark + MuRes Module:
🧠 Refer360: First Embodied Referring Expression Dataset with Multi-View, Multi-Sensor Modalities: Introduces a dataset with synchronized egocentric and exocentric views, RGB, depth, infrared, 3D skeleton, eye gaze, and audio, across indoor and outdoor environments. With 13,990 annotated interactions (3.2M frames), it overcomes biases in existing datasets (e.g., single view, indoor-only, no gesture/gaze integration). A hypothetical per-interaction record is sketched after this list.
🔁 MuRes: Guided Residual Bottleneck for Multimodal Fusion: Proposes a novel residual architecture that uses cross-attention to guide modality-specific signals (visual/language) through an information bottleneck, preventing feature dilution during fusion and outperforming both vanilla residuals and attention-only fusion across four datasets (see the module sketch after this list).
📈 Significant Gains across HRI and VQA Tasks: On Refer360, integrating MuRes into CLIP improved IoU-25 by +3.4%, and on CAESAR-PRO by +4.99%. For broader VQA tasks such as ScienceQA and A-OKVQA, MuRes boosted model accuracy by up to +30%, highlighting its generalization across task domains (the IoU-25 metric is sketched after this list).
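To make the multi-sensor structure above concrete, here is a minimal, purely hypothetical sketch of what a single Refer360 interaction record could look like in Python. The class name Refer360Sample, every field name, and the array shapes are our assumptions for illustration; the dataset's actual annotation schema and file layout are defined by the authors' release.

```python
# Hypothetical data-record sketch for one Refer360 interaction, based only on
# the modalities listed above (ego/exo RGB, depth, infrared, 3D skeleton,
# eye gaze, audio). Field names and shapes are assumptions, not the real schema.
from dataclasses import dataclass
import numpy as np


@dataclass
class Refer360Sample:
    # Synchronized camera streams, frames stacked along the first axis.
    ego_rgb: np.ndarray        # (T, H, W, 3) egocentric RGB frames
    exo_rgb: np.ndarray        # (T, H, W, 3) exocentric RGB frames
    depth: np.ndarray          # (T, H, W)    depth maps
    infrared: np.ndarray       # (T, H, W)    infrared intensity
    skeleton_3d: np.ndarray    # (T, J, 3)    3D joint positions of the speaker
    eye_gaze: np.ndarray       # (T, 3)       gaze direction vectors
    audio: np.ndarray          # (S,)         mono waveform of the utterance
    utterance: str             # transcribed verbal referring expression
    target_bbox: tuple         # (x, y, w, h) referent box in the exo view
    environment: str           # "indoor" or "outdoor"
```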
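The guided residual bottleneck can be pictured with a short PyTorch sketch. This is only our reading of the high-level description (cross-attention selecting salient modality-specific signals through a low-dimensional bottleneck, then adding them residually onto the pre-trained fused features); the class name GuidedResidualBottleneck, the layer sizes, and the query/key-value roles are assumptions, not the published MuRes implementation.

```python
# Minimal sketch of a guided residual bottleneck in the spirit of MuRes.
# Dimensions and the attention configuration are illustrative assumptions.
import torch
import torch.nn as nn


class GuidedResidualBottleneck(nn.Module):
    def __init__(self, dim: int = 512, bottleneck_dim: int = 128, num_heads: int = 4):
        super().__init__()
        # Down-projection acts as the information bottleneck on the
        # modality-specific stream (e.g. raw visual or language tokens).
        self.down = nn.Linear(dim, bottleneck_dim)
        self.fused_to_bottleneck = nn.Linear(dim, bottleneck_dim)
        # Cross-attention: the fused (pre-trained) representation queries the
        # bottlenecked modality-specific signal, so only salient cues survive.
        self.cross_attn = nn.MultiheadAttention(bottleneck_dim, num_heads, batch_first=True)
        # Up-projection reinforces the selected signal back into the fused space.
        self.up = nn.Linear(bottleneck_dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, fused: torch.Tensor, modality: torch.Tensor) -> torch.Tensor:
        # fused:    (B, N, dim) pre-trained multimodal representation
        # modality: (B, M, dim) modality-specific features (visual or language)
        q = self.fused_to_bottleneck(fused)
        kv = self.down(modality)
        guided, _ = self.cross_attn(q, kv, kv)
        # Guided residual: add the reinforced signal back onto the fused features.
        return self.norm(fused + self.up(guided))


# Toy usage: reinforce visual cues into a CLIP-style fused representation.
fused = torch.randn(2, 16, 512)
visual = torch.randn(2, 49, 512)
out = GuidedResidualBottleneck()(fused, visual)
print(out.shape)  # torch.Size([2, 16, 512])
```

The residual addition is what distinguishes this from attention-only fusion: the pre-trained representation is preserved and only reinforced, which matches the stated goal of preventing feature dilution.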
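IoU-25 in the results above is the fraction of test samples whose predicted box overlaps the ground-truth box with an intersection-over-union of at least 0.25. A small self-contained sketch, assuming corner-format (x1, y1, x2, y2) boxes (the benchmark's exact box convention may differ):

```python
# IoU-at-0.25 accuracy: percentage of predictions whose IoU with the
# ground-truth box reaches the 0.25 threshold. Boxes are (x1, y1, x2, y2).
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0


def iou_at_25(predictions, ground_truths):
    hits = sum(iou(p, g) >= 0.25 for p, g in zip(predictions, ground_truths))
    return 100.0 * hits / len(ground_truths)


# Example: two of three predictions overlap their targets enough to count.
preds = [(10, 10, 50, 50), (0, 0, 20, 20), (100, 100, 120, 120)]
gts   = [(12, 12, 52, 52), (50, 50, 80, 80), (101, 101, 121, 121)]
print(iou_at_25(preds, gts))  # ~66.7
```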
The following papers were recommended by the Semantic Scholar API:
- Multimodal Spatial Reasoning in the Large Model Era: A Survey and Benchmarks (2025)
- Gestura: A LVLM-Powered System Bridging Motion and Semantics for Real-Time Free-Form Gesture Understanding (2025)
- Multi-speaker Attention Alignment for Multimodal Social Interaction (2025)
- RoboAfford++: A Generative AI-Enhanced Dataset for Multimodal Affordance Learning in Robotic Manipulation and Navigation (2025)
- Grounding Foundational Vision Models with 3D Human Poses for Robust Action Recognition (2025)
- Seeing Across Views: Benchmarking Spatial Reasoning of Vision-Language Models in Robotic Scenes (2025)
- Actial: Activate Spatial Reasoning Ability of Multimodal Large Language Models (2025)