immanuelpeter committed · verified
Commit af9b9ab · 1 Parent(s): 15b8acc

Update README.md


Refined GPT response

Files changed (1)
  1. README.md +140 -2
README.md CHANGED
@@ -75,7 +75,7 @@ dataset_info:
     dtype: int32
   splits:
   - name: train
-    num_bytes: 298274262201.0
+    num_bytes: 298274262201
     num_examples: 67000
   - name: validation
     num_bytes: 35503432435.4
@@ -84,7 +84,7 @@ dataset_info:
     num_bytes: 31770625008.6
     num_examples: 7200
 download_size: 361766155632
-dataset_size: 365548319645.0
+dataset_size: 365548319645
 configs:
 - config_name: default
   data_files:
@@ -94,4 +94,142 @@ configs:
     path: data/validation-*
   - split: test
     path: data/test-*
+license: mit
+task_categories:
+- object-detection
+- image-classification
+- image-segmentation
+- depth-estimation
+- video-classification
+- any-to-any
+- image-to-text
+- reinforcement-learning
+language:
+- en
+pretty_name: CARLA Autopilot Multimodal Dataset
+size_categories:
+- 10K<n<100K
 ---

# CARLA Autopilot Multimodal Dataset

This dataset contains synchronized multimodal driving data collected in the [CARLA simulator](https://carla.org/) using the autopilot feature. It provides RGB images from multiple cameras, semantic segmentation, LiDAR point clouds, 2D bounding boxes, and ego-vehicle state/control signals across varied weather, maps, and traffic densities.

The dataset is designed for research in **autonomous driving**, **sensor fusion**, **imitation learning**, and **self-driving evaluation**.

---

## Dataset Summary

- **Runs**: 30 autopilot runs
- **Sensors** (see the blueprint sketch after this list):
  - RGB cameras: front, front-left, front-right, rear (800×600, fov=90°)
  - Semantic segmentation: front (raw + colorized)
  - LiDAR: 32-channel ray-cast, 20 Hz, 80 m range
  - Collision sensor for impact logs
- **Annotations**: 2D bounding boxes and class labels (vehicles, pedestrians) relative to the front camera
- **Ego states**: position, rotation, velocity, control (throttle/steer/brake), speed (km/h)
- **Environment**: varied weather, time of day (sun altitude), NPC traffic (vehicles + pedestrians)
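
As a rough illustration of how these specs map onto CARLA's blueprint API, the sketch below configures matching camera and LiDAR blueprints. It is a hypothetical reconstruction, not the authors' collection script; the host and port values are assumptions.

```python
import carla

# Hypothetical setup matching the sensor specs above; host/port assumed
client = carla.Client("localhost", 2000)
world = client.get_world()
library = world.get_blueprint_library()

# 800x600 RGB camera with a 90-degree field of view
cam_bp = library.find("sensor.camera.rgb")
cam_bp.set_attribute("image_size_x", "800")
cam_bp.set_attribute("image_size_y", "600")
cam_bp.set_attribute("fov", "90")

# 32-channel ray-cast LiDAR, 20 Hz rotation, 80 m range
lidar_bp = library.find("sensor.lidar.ray_cast")
lidar_bp.set_attribute("channels", "32")
lidar_bp.set_attribute("rotation_frequency", "20")
lidar_bp.set_attribute("range", "80")
```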

**Splits**
- Train: 67,000 frames
- Validation: 8,400 frames
- Test: 7,200 frames
- Total size: ~365 GB (see the loading note below)
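
At ~365 GB, a full download may be impractical; the `datasets` library can stream samples instead. A minimal sketch:

```python
from datasets import load_dataset

# Stream samples on demand rather than downloading all ~365 GB up front
ds = load_dataset(
    "immanuelpeter/carla-autopilot-multimodal-dataset",
    split="train",
    streaming=True,
)

# Inspect a couple of frames lazily
for sample in ds.take(2):
    print(sample["run_id"], sample["frame"], sample["speed_kmh"])
```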

---

## Relation to Previous Versions

This dataset, **CARLA Autopilot Multimodal Dataset**, is an extension of the earlier [CARLA Autopilot Image Dataset](https://huggingface.co/datasets/your-username/carla-autopilot-images).

- **Previous version (`carla-autopilot-images`)**:
  Contained synchronized RGB camera views (front, front-left, front-right, rear) with ego-vehicle states, controls, and environment metadata.

- **Current version (`carla-autopilot-multimodal-dataset`)**:
  Adds **new sensor modalities and richer annotations**, including:
  - Semantic segmentation (front view)
  - LiDAR point clouds
  - 2D bounding boxes and labels (vehicles, pedestrians)
  - Expanded metadata (collisions, weather difficulty, quality metrics)

In short, `v2` augments the original dataset with **multimodal signals for perception + sensor fusion research**, while retaining full compatibility with the core camera + state data from `v1`.

---

## Features

Each sample contains:

- `run_id` (string): Identifier for the simulation run
- `frame` (int): Frame number
- `timestamp` (float): Relative timestamp (s)
- `image_front`, `image_front_left`, `image_front_right`, `image_rear` (images): RGB views
- `seg_front` (image): Semantic segmentation (front view)
- `lidar` (list[list[float32]]): LiDAR point cloud (x, y, z, intensity); see the conversion sketch after this list
- `boxes` (list[list[float32]]): 2D bounding boxes in `[xmin, ymin, xmax, ymax]` format
- `box_labels` (list[string]): Class labels for bounding boxes
- `location_{x,y,z}` (float): Ego position in world coordinates
- `rotation_{pitch,yaw,roll}` (float): Ego rotation
- `velocity_{x,y,z}` (float): Ego velocity (m/s)
- `speed_kmh` (float): Ego speed (km/h)
- `throttle`, `steer`, `brake` (float): Control inputs
- `nearby_vehicles_50m`, `total_npc_vehicles`, `total_npc_walkers` (int): Traffic counts
- `map_name` (string): CARLA map used
- `weather_*` (float): Weather conditions (cloudiness, precipitation, fog, sun altitude)
- `vehicles_spawned`, `walkers_spawned` (int): Number of NPCs
- `duration_seconds` (int): Total run length in seconds
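
Since `lidar` and `boxes` decode to nested Python lists, it is often convenient to stack them into NumPy arrays. A minimal sketch, where the `(N, 4)` and `(M, 4)` shapes follow from the field descriptions above:

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("immanuelpeter/carla-autopilot-multimodal-dataset", split="train")
sample = ds[0]

# (N, 4) array of (x, y, z, intensity) LiDAR points
points = np.asarray(sample["lidar"], dtype=np.float32)

# (M, 4) array of [xmin, ymin, xmax, ymax] boxes; the reshape keeps
# frames with no annotations well-formed
boxes = np.asarray(sample["boxes"], dtype=np.float32).reshape(-1, 4)
labels = sample["box_labels"]

print(points.shape, boxes.shape, len(labels))
```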

---

## Example Usage

```python
from datasets import load_dataset

ds = load_dataset("immanuelpeter/carla-autopilot-multimodal-dataset", split="train")
sample = ds[0]

# Access the front RGB image, LiDAR points, and 2D boxes
front_img = sample["image_front"]
lidar = sample["lidar"]
boxes = sample["boxes"]
```
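
Continuing from the snippet above, the boxes can be overlaid on the front image with Pillow. This assumes the coordinates are pixels in the 800×600 front view, as described in the Features section:

```python
from PIL import ImageDraw

# Overlay the 2D annotations on the front camera frame
img = sample["image_front"].copy()
draw = ImageDraw.Draw(img)
for (xmin, ymin, xmax, ymax), label in zip(sample["boxes"], sample["box_labels"]):
    draw.rectangle([xmin, ymin, xmax, ymax], outline="red", width=2)
    draw.text((xmin, max(ymin - 12, 0)), label, fill="red")
img.save("front_with_boxes.png")
```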

---

## Collection Process

Data was collected using a custom CARLA Python script that:

* Spawns an ego vehicle with autopilot enabled
* Spawns configurable NPC vehicles and pedestrians
* Randomizes weather and lighting conditions per run
* Synchronizes all sensors and saves every *N*-th frame
* Records vehicle state, control signals, collisions, and environment statistics

All sensors operate in synchronous mode for frame alignment.
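
For reference, synchronous mode in CARLA is typically enabled along these lines. This is a generic sketch, not the authors' script; the 0.05 s tick (20 Hz, matching the LiDAR rate) is an assumption.

```python
import carla

client = carla.Client("localhost", 2000)
world = client.get_world()

# The world advances only when tick() is called, so all sensor callbacks
# stay aligned to the same simulation frame
settings = world.get_settings()
settings.synchronous_mode = True
settings.fixed_delta_seconds = 0.05  # 20 Hz, assumed
world.apply_settings(settings)

# The traffic manager must step in lockstep as well
client.get_trafficmanager().set_synchronous_mode(True)

for _ in range(100):
    world.tick()
```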

---

## Intended Use

* Training and benchmarking multimodal self-driving models
* Research on sensor fusion, perception, and planning
* Imitation learning from autopilot trajectories (see the sketch below)
* Evaluation under diverse weather and traffic conditions
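
For the imitation-learning use case, each sample maps naturally to an observation/action pair. A hypothetical sketch, reusing `ds` from the usage example; `to_training_pair` is an illustrative helper, not part of the dataset:

```python
def to_training_pair(sample):
    """Map a dataset row to an (observation, action) pair for behavior cloning."""
    observation = sample["image_front"]  # PIL image
    action = [sample["throttle"], sample["steer"], sample["brake"]]
    return observation, action

observation, action = to_training_pair(ds[0])
```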

<!-- ## Citation

If you use this dataset, please cite:

```
@dataset{yourname2025carlaautopilot,
  author = {Your Name},
  title = {CARLA Autopilot Multimodal Dataset},
  year = {2025},
  howpublished = {\url{https://huggingface.co/datasets/your-username/carla-autopilot-multimodal-dataset}}
}
``` -->