XR-1-Dataset-Sample
[Project Page] [Paper] [GitHub]
This repository contains a representative sample of the XR-1 project's multi-modal dataset. The data is organized to support cross-embodiment training spanning humanoid robots, manipulators, and egocentric human video.
📂 Directory Structure
The dataset follows a hierarchy based on Embodiment -> Task -> Format:
1. Robot Embodiment Data (LeRobot Format)
Standard robot data (like TienKung or UR5) is organized following the LeRobot convention:
XR-1-Dataset-Sample/
└── DUAL_ARM_TIEN_KUNG2/        # Robot embodiment
    └── Press_Green_Button/     # Task name
        └── lerobot/            # Data in LeRobot format
            ├── metadata.json
            ├── episodes.jsonl
            ├── videos/
            └── data/
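The LeRobot-format files can be inspected with the Python standard library alone. The sketch below is a minimal example, not part of the official tooling: it assumes the paths shown in the tree above and assumes only that `episodes.jsonl` contains one JSON object per line; the actual contents of `metadata.json` and each episode record depend on the export.

```python
# Minimal sketch: inspect one LeRobot-format task directory from this sample.
# Paths follow the directory tree above; the fields printed here are assumptions,
# not a guaranteed schema.
import json
from pathlib import Path

task_dir = Path("XR-1-Dataset-Sample/DUAL_ARM_TIEN_KUNG2/Press_Green_Button/lerobot")

# metadata.json: dataset-level information (exact keys depend on the export).
metadata = json.loads((task_dir / "metadata.json").read_text())
print("metadata keys:", sorted(metadata))

# episodes.jsonl: one JSON object per recorded episode.
with (task_dir / "episodes.jsonl").open() as f:
    episodes = [json.loads(line) for line in f if line.strip()]
print(f"{len(episodes)} episodes; first entry keys:",
      sorted(episodes[0]) if episodes else None)
```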
2. Human/Ego-centric Data (Ego4D Format)
For ego-centric data (e.g., Ego4D subsets used for Stage 1 UVMC pre-training), the structure is adapted to its native recording format:
XR-1-Dataset-Sample/
└── Ego4D/                  # Human ego-centric source
    ├── files.json          # Unified annotation/mapping file
    └── files/              # Raw data storage
        └── [video_id].mp4  # Egocentric video clips
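A quick way to sanity-check this split is to load `files.json` and list the clips that ship with the sample. The sketch below is an illustration only: the schema of `files.json` is an assumption, so adapt the key lookups to whatever the annotation file actually contains.

```python
# Minimal sketch: load the Ego4D annotation file and enumerate the sample clips.
# The structure of files.json is assumed, not documented here.
import json
from pathlib import Path

ego_dir = Path("XR-1-Dataset-Sample/Ego4D")

# Unified annotation/mapping file (schema depends on the release).
annotations = json.loads((ego_dir / "files.json").read_text())
print("annotation entries:", len(annotations))

# Egocentric clips actually present in this sample.
clips = sorted((ego_dir / "files").glob("*.mp4"))
print(f"{len(clips)} egocentric clips, e.g.:", [c.name for c in clips[:3]])
```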
🤖 Data Modalities
- Vision: High-frequency RGB streams from multiple camera perspectives.
- Motion: Continuous state-action pairs, which are tokenized into UVMC (Unified Vision-Motion Codes) for XR-1 training.
- Language: Natural language instructions paired with each episode for VLA alignment.
🛠 Usage
This sample is intended for use with the code in the XR-1 GitHub repository; a sketch for fetching it locally follows.
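If the sample is pulled from the Hugging Face Hub, `huggingface_hub.snapshot_download` can mirror it locally before running the XR-1 code. The repository id below is a placeholder, not the actual dataset id.

```python
# Minimal sketch: download the sample locally via the Hugging Face Hub.
# Replace the placeholder repo id with the actual dataset id.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="ORGANIZATION/XR-1-Dataset-Sample",  # placeholder (assumption)
    repo_type="dataset",
    local_dir="XR-1-Dataset-Sample",
)
print("Sample downloaded to:", local_path)
```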
📝 Citation
@article{fan2025xr,
  title={XR-1: Towards Versatile Vision-Language-Action Models via Learning Unified Vision-Motion Representations},
  author={Fan, Shichao and others},
  journal={arXiv preprint arXiv:2411.02776},
  year={2025}
}
📜 License
This dataset is released under the MIT License.
Contact: For questions, please open an issue on our GitHub.
Discussions
If you're interested in XR-1, you're welcome to join our WeChat group for discussions.
