---
language:
- en
- zh
tags:
- robotics
- manipulation
- vla
- trajectory-data
- multimodal
- vision-language-action
license: other
task_categories:
- robotics
- reinforcement-learning
multimodal: vision+language+action
dataset_info:
  features:
  - name: rgb_images
    dtype: image
    description: Multi-view RGB images
  - name: slam_poses
    sequence: float32
    description: SLAM pose trajectories
  - name: vive_poses
    sequence: float32
    description: Vive tracking system poses
  - name: point_clouds
    sequence: float32
    description: Time-of-Flight point cloud data
  - name: clamp_data
    sequence: float32
    description: Clamp sensor readings
  - name: merged_trajectory
    sequence: float32
    description: Fused trajectory data
configs:
- config_name: default
  data_files: '**/*'
---
# Enterprise-grade Robotic Manipulation Dataset for the Universal Manipulation Interface

## 📖 Overview
FastUMI (Fast Universal Manipulation Interface) is a dataset and interface framework for general-purpose robotic manipulation tasks, designed to support hardware-agnostic, scalable, and efficient data collection and model training.
The project provides:
- Physical prototype systems
- Complete data collection codebase
- Standardized data formats and utilities
- Tools for real-world manipulation learning research
## 🚀 Features

### FastUMI Pro Enhancements
- ✅ Higher precision trajectory data
- ✅ Diverse embodiment support for true "one-brain-multiple-forms"
- ✅ Enterprise-ready, end-to-end data processing pipeline
### FastUMI-150K
- ~150,000 real-world manipulation trajectories
- Used by research partners for large-scale VLA (Vision-Language-Action) model training
- Demonstrated significant multi-task generalization capabilities
## 📊 Model Performance
VLA Model Results: [TBD]
## 🛠️ Toolchain
| Tool | Description | Link |
|---|---|---|
| Single-Arm Demo Replay | Single-arm data replay code | GitHub |
| Dual-Arm Demo Replay | Dual-arm data replay code | GitHub |
| Hardware SDK | FastUMI hardware development kit | GitHub |
| Monitor Tool | Real-time device monitoring | GitHub |
| Data Collection | Data collection utilities | GitHub |
## Research & Applications
- Paper: MLM: Learning Multi-task Loco-Manipulation Whole-Body Control for Quadruped Robot with Arm
- Tutorial: PI0 (FastUMI Data Lightweight Adaptation, Version V0) Full Pipeline
## 📥 Data Download

### Example Dataset
```bash
# Direct download (may be slow in some regions)
huggingface-cli download FastUMIPro/example_data_fastumi_pro_raw --repo-type dataset --local-dir ~/fastumi_data/
```
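If you prefer Python over the CLI, the same repository can be fetched with `huggingface_hub`. A minimal sketch, assuming `huggingface_hub` is installed; the repo id and target directory are taken from the command above:

```python
import os
from huggingface_hub import snapshot_download

# Fetch the example FastUMI Pro raw data (same repo as the CLI command above).
snapshot_download(
    repo_id="FastUMIPro/example_data_fastumi_pro_raw",
    repo_type="dataset",
    local_dir=os.path.expanduser("~/fastumi_data"),
)
```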
### Mirror Download (Recommended)

```bash
# Set mirror endpoint
export HF_ENDPOINT=https://hf-mirror.com

# Download via mirror
huggingface-cli download --repo-type dataset --resume-download FastUMIPro/example_data_fastumi_pro_raw --local-dir ~/fastumi_data/
```

## 📁 Data Structure

Each session represents an independent operation "episode" containing observation data and action sequences.
### Directory Structure

```text
session_001/
└── device_label_xv_serial/
└── session_timestamp/
├── RGB_Images/
│ ├── timestamps.csv
│ └── Frames/
│ ├── frame_000001.jpg
│ └── ...
├── SLAM_Poses/
│ └── slam_raw.txt
├── Vive_Poses/
│ └── vive_data_tum.txt
├── ToF_PointClouds/
│ ├── timestamps.csv
│ └── PointClouds/
│ └── pointcloud_000001.pcd
├── Clamp_Data/
│ └── clamp_data_tum.txt
└── Merged_Trajectory/
├── merged_trajectory.txt
└── merge_stats.txt
```
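For batch processing it helps to enumerate episodes programmatically. Below is a minimal sketch that walks the nesting shown above; the glob pattern assumes exactly the `session_XXX/device_label_xv_serial/session_timestamp` depth and the file names from the tree, so adjust it if your download differs:

```python
from pathlib import Path

def index_sessions(root: str) -> list[dict]:
    """Collect per-episode file paths following the directory layout above."""
    episodes = []
    for merged in sorted(Path(root).glob(
            "session_*/*/*/Merged_Trajectory/merged_trajectory.txt")):
        episode_dir = merged.parent.parent  # the session_timestamp directory
        episodes.append({
            "episode_dir": episode_dir,
            "frames_dir": episode_dir / "RGB_Images" / "Frames",
            "rgb_timestamps": episode_dir / "RGB_Images" / "timestamps.csv",
            "slam_poses": episode_dir / "SLAM_Poses" / "slam_raw.txt",
            "vive_poses": episode_dir / "Vive_Poses" / "vive_data_tum.txt",
            "clamp_data": episode_dir / "Clamp_Data" / "clamp_data_tum.txt",
            "merged_trajectory": merged,
        })
    return episodes

# Example: list every episode under ~/fastumi_data
# print(index_sessions(str(Path.home() / "fastumi_data")))
```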
### Data Specifications

| Data Type | Path | Shape | Type | Description |
|---|---|---|---|---|
| RGB Images | `session_XXX/RGB_Images/Video.MP4` | (frames, 1080, 1920, 3) | uint8 | Camera video data, 60 FPS |
| SLAM Poses | `session_XXX/SLAM_Poses/slam_raw.txt` | (timestamps, 7) | float | UMI end-effector poses |
| Vive Poses | `session_XXX/Vive_Poses/vive_data_tum.txt` | (timestamps, 7) | float | Vive base station poses |
| ToF PointClouds | `session_XXX/ToF_PointClouds/PointClouds/pointcloud_...pcd` | pcd format | pcd | Time-of-Flight point cloud data |
| Clamp Data | `session_XXX/Clamp_Data/clamp_data_tum.txt` | (timestamps, 1) | float | Gripper spacing (mm) |
| Merged Trajectory | `session_XXX/Merged_Trajectory/merged_trajectory.txt` | (timestamps, 7) | float | Fused trajectory (Vive/UMI based on velocity) |
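Since the RGB, pose, and gripper streams are recorded with their own timestamps, downstream training usually needs them resampled onto a common clock. A minimal sketch of nearest-timestamp alignment between RGB frames and the merged trajectory; the episode path and the column layout of `timestamps.csv` are assumptions, not guaranteed by the specification above:

```python
import numpy as np
import pandas as pd

# Hypothetical episode path; substitute a real one from the directory tree above.
episode = "session_001/device_label_xv_serial/session_timestamp"

# Assumption: the first column of timestamps.csv holds the per-frame Unix timestamp.
frame_times = pd.read_csv(f"{episode}/RGB_Images/timestamps.csv").iloc[:, 0].to_numpy()

# Merged trajectory: one row per sample, timestamp first, then the 7 pose values
# described under "Pose Data Format" below.
traj = np.loadtxt(f"{episode}/Merged_Trajectory/merged_trajectory.txt")

# For every frame, take the trajectory sample with the closest timestamp.
nearest = np.abs(frame_times[:, None] - traj[None, :, 0]).argmin(axis=1)
frame_poses = traj[nearest, 1:]  # (num_frames, 7): x, y, z, qx, qy, qz, qw
```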
### Pose Data Format
All pose data (SLAM, Vive, Merged) follow the same format:
| Field | Description |
|---|---|
| Timestamp | Unix timestamp of the trajectory data |
| Pos X | X-coordinate of position (meters) |
| Pos Y | Y-coordinate of position (meters) |
| Pos Z | Z-coordinate of position (meters) |
| Q_X | X-component of orientation quaternion |
| Q_Y | Y-component of orientation quaternion |
| Q_Z | Z-component of orientation quaternion |
| Q_W | W-component of orientation quaternion |
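Given this row layout, each pose converts directly to a 4x4 homogeneous transform. A minimal sketch using SciPy, whose `Rotation.from_quat` expects the same (qx, qy, qz, qw) order as the table:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_row_to_matrix(row: np.ndarray) -> np.ndarray:
    """Convert one pose row (timestamp, x, y, z, qx, qy, qz, qw) to a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_quat(row[4:8]).as_matrix()  # (qx, qy, qz, qw) order
    T[:3, 3] = row[1:4]                                   # position in meters
    return T

# Example: one transform per sample in a merged trajectory file.
# poses = np.loadtxt("merged_trajectory.txt")
# transforms = np.stack([pose_row_to_matrix(r) for r in poses])
```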
## 🔄 Data Conversion
[TBD]
## 🤝 Collaboration

The FastUMI Pro dataset is available for research collaboration. The full FastUMI-150K dataset has been provided to partner research teams for large-scale model training.
## 📞 Contact

### ☎️ Development Team Contact

For any questions or suggestions, please feel free to contact our development team.

Lead: Ding Yan ([email protected], Duke_dingyan)