Adds instructions and details to the README

README.md CHANGED
@@ -1,3 +1,106 @@
---
license: mit
task_categories:
- robotics
---

# Colosseum Dataset Card

This dataset contains demonstrations for training and testing Imitation Learning
based policies, taken from our simulation benchmark [`Colosseum`][0], which is
based on `RLBench`. The benchmark consists of 20 tasks from the RLBench suite.
For each task we implement [variations][1], such as `camera pose`, to test the
generalization capabilities of a policy.

## Dataset details

The **training set** consists of 100 demonstrations for each of the 20 tasks
without any variation factor (the vanilla version of the RLBench tasks). Each
demonstration consists of frame data from the following 4 camera views:

- Front camera
- Left shoulder camera
- Right shoulder camera
- Wrist camera

![gif-tasks](https://huggingface.co/datasets/colosseum/dataset/resolve/main/tasks.gif)

For each camera view we collect the following data:

- RGB
- Depth

**Note**: each frame is recorded at `128 x 128` resolution.
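
For a quick sanity check of the resolution, a minimal Python sketch like the
following can be used; the frame path is hypothetical, standing in for any RGB
frame from an extracted episode:

```python
# Minimal sketch: check a saved frame's resolution with Pillow.
# The path below is a placeholder for an RGB frame from an
# extracted episode; adjust it to your local layout.
from PIL import Image

frame = Image.open("episode0/front_rgb/0.png")  # hypothetical path
print(frame.size)  # expected: (128, 128)
```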

The **test set** consists of 25 demonstrations for each of the 20 tasks, for
each variation factor that is applicable to that task. Each step collects data
from the same 4 camera views, at the same resolution.

## Dataset structure

The data is distributed as `tar.gz` files. After downloading a tar file and
extracting it into a local folder, you'll get a folder structure like the
following (e.g. for the task `stack_cups`):

![img-folder-structure](https://huggingface.co/datasets/colosseum/dataset/resolve/main/folder-structure-1.png)

Each folder name includes a suffix (`idx`) that indicates which variation factor
was applied to the simulation, e.g. `idx=0` means **no variations**, whereas
`idx=2` means **Object Color variation applied to the Manipulated Object**. You
can find a spreadsheet [here][2] with the `idx` values for each of the 20 tasks.
It also lists which variations are applicable to each task, since some
variations are not active for every task.
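
As a rough illustration, the sketch below lists the variation folders present
for a task and reads off their `idx` suffixes; the task folder name follows the
`stack_cups` example above, and parsing a trailing number is an assumption based
on the suffix convention just described:

```python
# Hedged sketch: enumerate a task's variation folders and parse the
# trailing idx suffix described above. The task folder name is the
# stack_cups example from this card; adjust to your local layout.
import re
from pathlib import Path

for folder in sorted(Path("stack_cups").iterdir()):
    match = re.search(r"(\d+)$", folder.name)
    if folder.is_dir() and match:
        print(f"{folder.name}: idx={match.group(1)}")
```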

The pickle file `variation_description.pkl` contains the language instructions
for that task. Below we go deeper into the folder structure for one of the
variations. Notice that there is a set of folders for each episode/demonstration,
and each of these contains extra folders for each camera view and type of image.
There is also a pickle file `low_dim_obs.pkl` with the low-dimensional
observations saved by RLBench; the info stored in this pickle comes from
[this][3] config file in RLBench.

![img-folder-structure-2](https://huggingface.co/datasets/colosseum/dataset/resolve/main/folder-structure-2.png)

## Downloading the dataset using wget and a download link

1. Go to the HuggingFace repo and select the files option:

![img-files-option](https://huggingface.co/datasets/colosseum/dataset/resolve/main/files-option.png)

2. Select the task you want to get:

![img-select-task](https://huggingface.co/datasets/colosseum/dataset/resolve/main/select-task.png)

3. Get the download link:

![img-get-download-link](https://huggingface.co/datasets/colosseum/dataset/resolve/main/get-download-link.png)

4. Use `curl` or `wget` to get the tar file:

```bash
wget YOUR_DOWNLOAD_LINK
```
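
As an alternative to grabbing the link manually, a hedged sketch using
`huggingface_hub` and `tarfile` is shown below; the `repo_id` and `filename`
are placeholders, since this card does not pin down the exact repo path or
archive names:

```python
# Hedged sketch: fetch one task archive from the Hub and extract it.
# repo_id and filename are placeholders; substitute the actual
# dataset repo and the tar.gz archive for the task you want.
import tarfile

from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="YOUR_ORG/YOUR_DATASET",  # placeholder
    filename="stack_cups.tar.gz",     # placeholder archive name
    repo_type="dataset",
)

with tarfile.open(path, "r:gz") as tar:
    tar.extractall(path="data/")
```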

## Resources for more information

- Paper: https://arxiv.org/abs/2402.08191
- Benchmark Code: https://github.com/robot-colosseum/robot-colosseum
- Website: https://robot-colosseum.github.io

## Citation

If you find our work helpful, please consider citing our paper.
```bibtex
@article{pumacay2024colosseum,
  title   = {THE COLOSSEUM: A Benchmark for Evaluating Generalization for Robotic Manipulation},
  author  = {Pumacay, Wilbert and Singh, Ishika and Duan, Jiafei and Krishna, Ranjay and Thomason, Jesse and Fox, Dieter},
  journal = {arXiv preprint arXiv:2402.08191},
  year    = {2024},
}
```

[0]: <https://robot-colosseum.readthedocs.io/en/latest/overview.html> (colosseum-overview)
[1]: <https://robot-colosseum.readthedocs.io/en/latest/overview.html#perturbations> (colosseum-perturbations)
[2]: <https://docs.google.com/spreadsheets/d/175cCG9qHzNB6axSno6K2NjQ9gjpbCqNK9GCi-SAQkCM/edit?usp=sharing> (colosseum-tasks-distribution)
[3]: <https://github.com/MohitShridhar/RLBench/blob/ad991951bc53e4f3b73b803a75cf4b7d55295cf7/rlbench/observation_config.py#L73> (rlbench-task-lowdim)