Key Features
- 1 million+ steps of data are perfectly aligned with real tasks and scenarios.
- 180+ types of objects across 9 target categories.
- 9 common materials in daily life.
- 12 core operational skills sourced from real-robot operation tasks.
- Cutting-edge hardware: visuotactile sensors, 6-DoF dexterous hands, and mobile dual-arm robots.
- Tasks involving:
- Single-skill manipulation
- Long-horizon planning
News
[2025/2/24] AgiBot Digital World released on Hugging Face: https://huggingface.co/datasets/agibot-world/AgiBotDigitalWorld
TODO List
- AgiBot Digital World Beta: more high-quality simulation data composed of single-skill and multi-skill data (expected release date: Q2 2025).
- AgiBot Digital World Feedback: data-quality feedback and improvement.
Get started
Download the Dataset
To download the full dataset, you can use the following commands. If you encounter any issues, please refer to the official Hugging Face documentation.
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
# When prompted for a password, use an access token with write permissions.
# Generate one from your settings: https://huggingface.co/settings/tokens
git clone https://huggingface.co/datasets/agibot-world/AgiBotDigitalWorld

# If you want to clone without large files - just their pointers
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/agibot-world/AgiBotDigitalWorld
If you only want to download a specific task, such as digitaltwin_5, you can use the following commands.
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
# Initialize an empty Git repository
git init AgiBotDigitalWorld
cd AgiBotDigitalWorld
# Set the remote repository
git remote add origin https://huggingface.co/datasets/agibot-world/AgiBotDigitalWorld
# Enable sparse-checkout
git sparse-checkout init
# Specify the folders and files
git sparse-checkout set observations/digitaltwin_5 task_info/digitaltwin_5.json scripts proprio_stats parameters
# Pull the data
git pull origin main
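As an alternative to git sparse-checkout, the huggingface_hub library's snapshot_download function supports pattern-based selective downloads. This is a minimal sketch, not part of the official tooling; the task_patterns helper is hypothetical and simply mirrors the folders used in the sparse-checkout example above.

```python
def task_patterns(task_id: str) -> list[str]:
    """Build allow-patterns covering one task's data and metadata.

    Hypothetical helper: the folder names mirror the git sparse-checkout
    example (observations, task_info, scripts, proprio_stats, parameters).
    """
    return [
        f"observations/{task_id}/*",
        f"task_info/{task_id}.json",
        "scripts/*",
        "proprio_stats/*",
        "parameters/*",
    ]


def download_task(task_id: str, local_dir: str) -> None:
    """Fetch only the files for one task from the dataset repo."""
    # Imported lazily so the pattern helper stays usable without the library.
    from huggingface_hub import snapshot_download

    snapshot_download(
        repo_id="agibot-world/AgiBotDigitalWorld",
        repo_type="dataset",
        allow_patterns=task_patterns(task_id),
        local_dir=local_dir,
    )

# Usage (triggers a network download):
# download_task("digitaltwin_5", "./AgiBotDigitalWorld")
```

Pattern-based downloads avoid initializing a local git repository and resume cleanly if interrupted.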
To facilitate inspection of the dataset's internal structure and examples, we also provide a sample dataset; please refer to sample_dataset.tar.
Dataset Preprocessing
Our project relies solely on the lerobot library (dataset v2.0); please follow their installation instructions.
Here, we provide scripts for converting it to the lerobot format.
Requires ffmpeg>=7.1 (you can install it with conda install -y -c conda-forge ffmpeg).
export SVT_LOG=0
python convert_to_lerobot.py --src_path /DATASET_FOLDER --tgt_path SAVE_FOLDER --task_template TASK --preprocess_video
## Example
# python convert_to_lerobot.py --src_path final_format_data --tgt_path ./output --task_template pick_toys_into_box --preprocess_video
Visualization
python visualize_dataset.py --task-id=TASK --episode-index=0 --dataset-path=SAVE_FOLDER
## Example
# python visualize_dataset.py --task-id='pick_toy_into_box' --episode-index=0 --dataset-path=output
We would like to express our gratitude to the developers of lerobot for their outstanding contributions to the open-source community.
Dataset Structure
Folder hierarchy
data
├── observations
│   ├── digitaltwin_0                              # This represents the task id.
│   │   ├── 9b21cf2e-829f-4aad-9b61-9edc5b947163   # This represents the episode uuid.
│   │   │   ├── depth                              # Depth information saved in PNG format.
│   │   │   └── video                              # Videos from all camera perspectives.
│   │   ├── 131e407a-b828-4937-a554-e6706cbc5e2f
│   │   │   └── ...
│   │   └── ...
│   ├── digitaltwin_1
│   │   ├── 95808182-501f-4dca-984b-7404df844d31
│   │   │   ├── depth
│   │   │   └── video
│   │   └── edb0774b-13bb-4a8b-8bb0-71e82fe3ff6a
│   │       └── ...
│   └── ...
└── states
    ├── digitaltwin_0                              # This represents the task id.
    │   ├── 9b21cf2e-829f-4aad-9b61-9edc5b947163   # This represents the episode uuid.
    │   │   ├── task_info.json                     # The task information.
    │   │   ├── proprio_states.h5                  # All the robot's proprioceptive information.
    │   │   └── camera_parameter.json              # All the cameras' intrinsic and extrinsic parameters.
    │   ├── 131e407a-b828-4937-a554-e6706cbc5e2f
    │   │   ├── task_info.json
    │   │   ├── proprio_states.h5
    │   │   └── camera_parameter.json
    │   └── ...
    └── digitaltwin_1
        ├── 95808182-501f-4dca-984b-7404df844d31
        │   ├── task_info.json
        │   ├── proprio_states.h5
        │   └── camera_parameter.json
        ├── edb0774b-13bb-4a8b-8bb0-71e82fe3ff6a
        │   ├── task_info.json
        │   ├── proprio_states.h5
        │   └── camera_parameter.json
        └── ...
json file format
In the task_[id].json file, we store the basic information of every episode along with the language instructions. Here, we further explain several specific keywords.
- action_config: The content corresponding to this key is a list composed of all action slices from the episode. Each action slice includes a start and end time, the corresponding atomic skill, and the language instruction.
- key_frame: The content corresponding to this key consists of annotations for keyframes, including the start and end times of the keyframes and detailed descriptions.
{
"episode_id": "9b21cf2e-829f-4aad-9b61-9edc5b947163",
"task_id": "digitaltwin_5",
"task_name": "pick_toys_into_box",
"init_scene_text": "",
"label_info": {
"objects": {
"extra_objects": [
{
"object_id": "omni6DPose_book_000",
"workspace_id": "book_table_extra"
}
],
"task_related_objects": [
{
"object_id": "omni6DPose_toy_motorcycle_023",
"workspace_id": "book_table_dual_left"
},
{
"object_id": "omni6DPose_toy_truck_030",
"workspace_id": "book_table_dual_right"
},
{
"object_id": "genie_storage_box_002",
"workspace_id": "book_table_dual_middle"
}
]
},
"action_config": [
{
"start_frame": 0,
"end_frame": 178,
"action_text": "",
"skill": "Pick",
"active_object": "gripper",
"passive_object": "omni6DPose_toy_motorcycle_023"
},
{
"start_frame": 179,
"end_frame": 284,
"action_text": "",
"skill": "Place",
"active_object": "omni6DPose_toy_motorcycle_023",
"passive_object": "genie_storage_box_002"
},
{
"start_frame": 285,
"end_frame": 430,
"action_text": "",
"skill": "Pick",
"active_object": "gripper",
"passive_object": "omni6DPose_toy_truck_030"
},
{
"start_frame": 431,
"end_frame": 536,
"action_text": "",
"skill": "Place",
"active_object": "omni6DPose_toy_truck_030",
"passive_object": "genie_storage_box_002"
}
],
"key_frame": []
}
}
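The action slices above can be read with nothing but the standard library. This is a minimal sketch, not part of the official tooling; the function name is illustrative.

```python
import json  # used when reading a real task_info.json file
from pathlib import Path


def action_slices(info: dict) -> list[tuple]:
    """Extract (start_frame, end_frame, skill, active, passive) per action slice.

    `info` is the parsed task_info.json, whose action_config list lives
    under the label_info key (see the example above).
    """
    return [
        (a["start_frame"], a["end_frame"], a["skill"],
         a["active_object"], a["passive_object"])
        for a in info["label_info"]["action_config"]
    ]

# Usage on a real file (the path is illustrative):
# info = json.loads(Path("states/digitaltwin_5/<uuid>/task_info.json").read_text())
# for start, end, skill, active, passive in action_slices(info):
#     print(f"{skill}: frames {start}-{end} ({active} -> {passive})")
```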
h5 file format
In the proprio_stats.h5 file, we store all the robot's proprioceptive data. For more detailed information, please refer to the explanation of proprioceptive state below.
|-- timestamp
|-- state
|   |-- effector
|   |   |-- force
|   |   |-- index
|   |   |-- position
|   |-- end
|   |   |-- angular
|   |   |-- orientation
|   |   |-- position
|   |   |-- velocity
|   |   |-- wrench
|   |-- joint
|   |   |-- current_value
|   |   |-- effort
|   |   |-- position
|   |   |-- velocity
|   |-- robot
|       |-- orientation
|       |-- orientation_drift
|       |-- position
|       |-- position_drift
|-- action
    |-- effector
    |   |-- force
    |   |-- index
    |   |-- position
    |-- end
    |   |-- angular
    |   |-- orientation
    |   |-- position
    |   |-- velocity
    |   |-- wrench
    |-- joint
    |   |-- effort
    |   |-- index
    |   |-- position
    |   |-- velocity
    |-- robot
        |-- index
        |-- orientation
        |-- position
        |-- velocity
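The nesting above maps one-to-one onto HDF5 dataset paths such as /state/end/position. As a quick illustration (a pure-Python sketch, no h5py required; with h5py one would simply walk the file), the layout can be enumerated like this:

```python
def flatten(tree: dict, prefix: str = "") -> list[str]:
    """Turn a nested group layout into '/group/.../dataset' paths.

    An empty dict marks a leaf dataset; a non-empty dict is a group.
    """
    paths = []
    for name, child in tree.items():
        full = f"{prefix}/{name}"
        if child:
            paths.extend(flatten(child, full))
        else:
            paths.append(full)
    return paths


# The state half of the layout shown above (illustrative constant name).
STATE_LAYOUT = {
    "timestamp": {},
    "state": {
        "effector": {"force": {}, "index": {}, "position": {}},
        "end": {"angular": {}, "orientation": {}, "position": {},
                "velocity": {}, "wrench": {}},
        "joint": {"current_value": {}, "effort": {}, "position": {},
                  "velocity": {}},
        "robot": {"orientation": {}, "orientation_drift": {},
                  "position": {}, "position_drift": {}},
    },
}

# flatten(STATE_LAYOUT) yields "/timestamp", "/state/effector/force", ...
```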
Explanation of Proprioceptive State
Terminology
The definitions and data ranges in this section may change with software and hardware versions. Stay tuned.
State and action
- State: the monitoring information from the different sensors and actuators.
- Action: the instructions sent to the hardware abstraction layer, to which the controller responds. There can therefore be a difference between the issued instructions and the actually executed state.
Actuators
- Effector: refers to the end effector, for example dexterous hands or grippers.
- End: refers to the 6DoF end pose on the robot flange.
- Joint: refers to the joints of the robot, with 34 degrees of freedom in total (2-DoF head, 2-DoF waist, 7 DoF per arm, 8 DoF per gripper).
- Robot: refers to the robot's pose in its surrounding environment. The orientation and position refer to the robot's relative pose in the odometry coordinate system.
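The 34-DoF breakdown can be sanity-checked directly. A trivial sketch using the counts listed above (the group names in this dict are illustrative, not the dataset's joint names):

```python
# DoF counts as stated above: head, waist, two arms, two grippers.
JOINT_DOF = {
    "head": 2,
    "waist": 2,
    "left_arm": 7,
    "right_arm": 7,
    "left_gripper": 8,
    "right_gripper": 8,
}

total_dof = sum(JOINT_DOF.values())
# total_dof matches the 34-dim joint vectors in /state/joint/*
```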
Common fields
- Position: Spatial position, encoder position, angle, etc.
- Velocity: Speed
- Angular: Angular velocity
- Effort: Torque of the motor. Not available for now.
- Wrench: Six-dimensional force, force in the xyz directions, and torque. Not available for now.
Value shapes and ranges
Group | Shape | Meaning |
---|---|---|
/timestamp | [N] | timestamp in seconds:nanoseconds in simulation time |
/state/effector/position (gripper) | [N, 2] | left [:, 0] , right [:, 1] , gripper open range in mm |
/state/end/orientation | [N, 2, 4] | left [:, 0, :] , right [:, 1, :] , flange quaternion with wxyz |
/state/end/position | [N, 2, 3] | left [:, 0, :] , right [:, 1, :] , flange xyz in meters |
/state/joint/position | [N, 34] | joint position based on joint names |
/state/joint/velocity | [N, 34] | joint velocity based on joint names |
/state/joint/effort | [N, 34] | joint effort based on joint names |
/state/robot/orientation | [N, 4] | quaternion in wxyz |
/state/robot/position | [N, 3] | xyz position, where z is always 0 in meters |
/action/*/index | [M] | action indexes mark the frames at which the control source was actually sending signals |
/action/effector/position (gripper) | [N, 2] | left [:, 0] , right [:, 1] , gripper open range in mm |
/action/end/orientation | [N, 2, 4] | same as /state/end/orientation |
/action/end/position | [N, 2, 3] | same as /state/end/position |
/action/end/index | [M_2] | same as other indexes |
/action/joint/position | [N, 14] | same as /state/joint/position |
/action/joint/index | [M_4] | same as other indexes |
/action/robot/velocity | [N, 2] | vel along x axis [:, 0] , yaw rate [:, 1] |
/action/robot/index | [M_5] | same as other indexes |
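To make the index semantics concrete: /action/*/index marks the frames at which the control source actually emitted a command, so sparse action samples can be scattered back onto the state timeline. The sketch below uses plain Python lists standing in for the h5 arrays; the forward-fill convention is one reasonable choice for illustration, not necessarily what the official tooling does.

```python
def scatter_actions(n_frames: int, index: list[int], values: list):
    """Place sparse action samples onto the state timeline.

    `index[k]` is the frame at which `values[k]` was issued; frames
    without a command keep the last issued value (None before the first).
    """
    timeline = [None] * n_frames
    for frame, value in zip(index, values):
        timeline[frame] = value
    # Forward-fill so every frame carries the most recent command.
    last = None
    for i in range(n_frames):
        if timeline[i] is None:
            timeline[i] = last
        else:
            last = timeline[i]
    return timeline

# e.g. 6 state frames, commands issued at frames 1 and 4:
# scatter_actions(6, [1, 4], ["open", "close"])
# -> [None, "open", "open", "open", "close", "close"]
```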
License and Citation
All the data and code within this repo are under CC BY-NC-SA 4.0. Please consider citing our project if it helps your research.
@misc{contributors2024agibotworldrepo,
title={AgiBot World Colosseum},
author={AgiBot World Colosseum contributors},
howpublished={\url{https://github.com/OpenDriveLab/AgiBot-World}},
year={2024}
}