---
language:
- en
- zh
tags:
- robotics
- manipulation
- vla
- trajectory-data
- multimodal
- vision-language-action
license: other
task_categories:
- robotics
- reinforcement-learning
- computer-vision
multimodal: vision+language+action
dataset_info:
  features:
  - name: rgb_images
    dtype: image
    description: Multi-view RGB images
  - name: slam_poses
    sequence: float32
    description: SLAM pose trajectories
  - name: vive_poses
    sequence: float32
    description: Vive tracking system poses
  - name: point_clouds
    sequence: float32
    description: Time-of-Flight point cloud data
  - name: clamp_data
    sequence: float32
    description: Clamp sensor readings
  - name: merged_trajectory
    sequence: float32
    description: Fused trajectory data
configs:
- config_name: default
  data_files: '**/*'
---
# Fast-UMI: A Scalable and Hardware-Independent Universal Manipulation Interface

Welcome to the official repository of FastUMI Pro!

Project Page | Hugging Face Dataset | PDF (Early Version) | PDF (TBA)

*Physical prototypes of the Fast-UMI system*

## 📋 Contents
| Section | Description |
|---|---|
| 🎯 Project Description | Overview and introduction |
| 📊 Dataset Overview | Key features and capabilities |
| 🚀 Quick Start | Get started quickly |
| 📁 Dataset Structure | Data organization and format |
| ⚙️ Data Specifications | Technical details and attributes |
| 🔄 Data Conversion | Format conversion tools |
| 📰 News | Latest updates |
| 📄 License | Usage terms |
| 📞 Contact | Get in touch |
## 🎯 Project Description

FastUMI Pro is the upgraded, enterprise edition of FastUMI, providing a streamlined, end-to-end data acquisition and transformation pipeline for corporate users.
FastUMI (Fast Universal Manipulation Interface) is a dataset and interface framework for universal robot manipulation tasks, supporting hardware-agnostic, scalable, and efficient data collection and model training. The project provides physical prototype systems, complete data collection code, standardized data formats, and utility tools to facilitate real-world manipulation learning research.
## 📊 Dataset Overview
FastUMI Pro builds upon FastUMI with enhanced features:
- Higher precision trajectory data
- Support for a wider range of robot embodiments, enabling true "one brain, many bodies" applications
- Comprehensive, field-leading data coverage
The original FastUMI open-sourced FastUMI-150K containing approximately 150,000 real-world manipulation trajectories, which was first provided to selected research partners for training large-scale VLA (Vision-Language-Action) models.
## 🚀 Quick Start

### Download Example Data

```shell
# Original command (may be slow in some regions)
huggingface-cli download FastUMIPro/example_data_fastumi_pro_raw --repo-type dataset --local-dir ~/fastumi_data/
```

```shell
# Mirror acceleration (uses the hf-mirror.com endpoint)
export HF_ENDPOINT=https://hf-mirror.com
huggingface-cli download --repo-type dataset --resume-download FastUMIPro/example_data_fastumi_pro_raw --local-dir ~/fastumi_data/
```
## 📁 Dataset Structure

FastUMI Pro uses a raw format containing the various types of raw sensor data, which can be easily converted to other formats. The raw format makes it straightforward to query and validate original sensor outputs for rapid problem identification.

```
DATA/
└── device_label_xv_serial/
    └── session_timestamp/
        ├── RGB_Images/
        │   ├── timestamps.csv
        │   └── Frames/
        │       ├── frame_000001.jpg
        │       ├── frame_000002.jpg
        │       └── ...
        ├── SLAM_Poses/
        │   └── slam_raw.txt
        ├── Vive_Poses/
        │   └── vive_data_tum.txt
        ├── ToF_PointClouds/
        │   ├── timestamps.csv
        │   └── PointClouds/
        │       ├── pointcloud_000001.pcd
        │       ├── pointcloud_000002.pcd
        │       └── ...
        ├── Clamp_Data/
        │   └── clamp_data_tum.txt
        └── Merged_Trajectory/
            ├── merged_trajectory.txt
            └── merge_stats.txt
```
### Directory Descriptions

- `session_xxx`: Individual data collection session
- `RGB_Images`: Frame images supporting multiple viewpoints; supports both images and videos
- `SLAM_Poses`: UMI pose data
- `Vive_Poses`: Vive tracking system pose data
- `ToF_PointClouds`: Time-of-Flight raw point cloud data (depth)
- `Clamp_Data`: Clamp sensor readings
- `Merged_Trajectory`: Fused trajectory data
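The pose files above (`vive_data_tum.txt`, `clamp_data_tum.txt`, and `merged_trajectory.txt`) carry TUM-style names. Assuming they follow the standard TUM trajectory layout (one `timestamp tx ty tz qx qy qz qw` record per line, `#` comment lines), which the naming suggests but the card does not confirm, a minimal loader could look like:

```python
from pathlib import Path

import numpy as np


def load_tum_trajectory(path):
    """Load a TUM-format trajectory file into an (N, 8) float32 array.

    Each row: [timestamp, tx, ty, tz, qx, qy, qz, qw].
    Lines starting with '#' are treated as comments and skipped.
    The TUM layout is an assumption based on the *_tum.txt file names.
    """
    rows = []
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        values = [float(v) for v in line.split()]
        if len(values) != 8:
            raise ValueError(f"expected 8 fields, got {len(values)}: {line!r}")
        rows.append(values)
    return np.asarray(rows, dtype=np.float32)
```

Because every stream carries its own timestamps, loading each file this way and sorting or interpolating on the first column is one natural route to aligning the SLAM, Vive, and clamp streams.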
## ⚙️ Data Specifications

### Attributes

- `sim`:
  - `False`: Real environment data
  - `True`: Simulation data
### Observations

- `observations/images/`: Camera image data
  - Default camera name: `front`
  - Shape: `(frames, 1920, 1080, 3)`
  - Data type: `uint8`
  - Compression: `gzip` (level 4)
- `observations/qpos`:
  - Type: Floating-point dataset
  - Shape: `(timesteps, 7)`
  - Meaning: Robot end-effector position + quaternion orientation
  - Order: `[Pos X, Pos Y, Pos Z, Q_X, Q_Y, Q_Z, Q_W]`
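Since a `qpos` row stores a position plus a unit quaternion in `[x, y, z, qx, qy, qz, qw]` order, a single row can be expanded into a 4x4 homogeneous transform with the textbook quaternion-to-rotation-matrix formula. This is an illustrative sketch, not a utility shipped with the dataset:

```python
import numpy as np


def qpos_to_matrix(qpos):
    """Convert one qpos row [x, y, z, qx, qy, qz, qw] to a 4x4 transform."""
    x, y, z, qx, qy, qz, qw = np.asarray(qpos, dtype=np.float64)
    # Normalize to guard against drift in logged quaternions.
    norm = np.sqrt(qx * qx + qy * qy + qz * qz + qw * qw)
    qx, qy, qz, qw = qx / norm, qy / norm, qz / norm, qw / norm
    # Standard quaternion (x, y, z, w) -> rotation matrix expansion.
    rot = np.array([
        [1 - 2 * (qy * qy + qz * qz), 2 * (qx * qy - qz * qw), 2 * (qx * qz + qy * qw)],
        [2 * (qx * qy + qz * qw), 1 - 2 * (qx * qx + qz * qz), 2 * (qy * qz - qx * qw)],
        [2 * (qx * qz - qy * qw), 2 * (qy * qz + qx * qw), 1 - 2 * (qx * qx + qy * qy)],
    ])
    tf = np.eye(4)
    tf[:3, :3] = rot
    tf[:3, 3] = [x, y, z]
    return tf
```

With the identity quaternion `[0, 0, 0, 1]` this returns a pure translation, which is a quick sanity check on the `[..., qx, qy, qz, qw]` ordering.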
### Actions

- Type: Floating-point dataset
- Shape: `(timesteps, 7)`
- Meaning: Actions (same structure as `qpos`, typically mirroring `qpos`)
## 🔄 Data Conversion

Supports one-click export to specific formats via the web toolchain, or conversion between formats using tools such as:

- Any4lerobot: GitHub - Tavish9/any4lerobot
Supported conversion paths:

- `hdf5` → `lerobot` v3.0
- `hdf5` → `lerobot` (Pi0) v2.0
- `hdf5` → `rlds`
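Before any of these exports, the raw streams have to be shaped into the `(timesteps, 7)` layout from the Data Specifications. As an illustration only (the actual conversion is handled by the web toolchain or Any4lerobot), and assuming `merged_trajectory.txt` is TUM-format, dropping the timestamp column yields `qpos`, and one common convention, assumed here rather than documented by the card, takes each action to be the next step's pose:

```python
import numpy as np


def tum_to_qpos_action(rows):
    """Split TUM-format rows (N, 8) into qpos/action arrays of shape (N-1, 7).

    qpos[t] is the pose at step t; action[t] is the pose at step t+1,
    a "next state as action" convention (an assumption for illustration,
    not documented by the dataset card).
    """
    rows = np.asarray(rows, dtype=np.float32)
    poses = rows[:, 1:8]  # drop the timestamp column -> [x, y, z, qx, qy, qz, qw]
    return poses[:-1], poses[1:]
```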
## 📰 News
- [2024-12] We released Data Collection Code and Dataset.
- [2024-11] FastUMI Pro enterprise version announced.
- [2024-10] Initial FastUMI-150K dataset released to research partners.
## 📄 License
[License information to be added]
## 📞 Contact
For any questions or suggestions, please contact the development team:
- Lead: [Name]
- Email: [Email Address]
- WeChat: [WeChat ID]
*FastUMI Pro - Advancing Robot Manipulation Through Scalable Data Systems*