---
license: cc-by-sa-4.0
size_categories:
- 1M<n<10M
pretty_name: TerraMesh
viewer: false
tags:
- Earth observation
- Multimodal
- Pre-training
task_categories:
- image-feature-extraction
library_name: webdataset
---
# TerraMesh
> **A planetary‑scale, multimodal analysis‑ready dataset for Earth‑Observation foundation models.**
Paper: [TerraMesh: A Planetary Mosaic of Multimodal Earth Observation Data](https://huggingface.co/papers/2504.11172)
**TerraMesh** merges data from **Sentinel‑1 SAR, Sentinel‑2 optical, Copernicus DEM, NDVI and land‑cover** sources into more than **9 million co‑registered patches** ready for large‑scale representation learning.
**Dataset to be released soon.**

Samples from the TerraMesh dataset with seven spatiotemporally aligned modalities. Sentinel-2 L2A uses IRRG pseudo-coloring, and Sentinel-1 RTC is visualized in dB scale as VH-VV-VV/VH. The Copernicus DEM is scaled based on the image value range with an additional 10-meter buffer to highlight flat scenes.
---
## Dataset organisation
The archive ships two top-level splits, `train/` and `val/`, each holding one folder per modality. `terramesh.py` includes code for data loading; see [Usage](#usage).
```text
TerraMesh
├── train
│   ├── DEM
│   ├── LULC
│   ├── NDVI
│   ├── S1GRD
│   ├── S1RTC
│   ├── S2L1C
│   ├── S2L2A
│   └── S2RGB
├── val
│   ├── DEM
│   └── ...
└── terramesh.py
```
Each folder includes up to 889 shard files, each containing up to 10,240 samples. Samples from MajorTOM-Core are stored in shards following the pattern `majortom_{split}_{id}.tar`, while shards with SSL4EO-S12 samples start with `ssl4eos12_`.
Samples are stored as zipped Zarr files, which can be loaded with `zarr` (version <= 2.18) or `xarray.open_zarr()`. Each sample location includes seven modalities that share the same shard and sample name. Note that each sample includes only one Sentinel-1 version (S1GRD or S1RTC) because of differing processing versions in the source datasets.
Each Zarr file includes aligned metadata as demonstrated by this S1GRD example from sample `ssl4eos12_val_0080385.zarr.zip`:
```
<xarray.Dataset> Size: 283kB
Dimensions:      (band: 2, time: 1, y: 264, x: 264)
Coordinates:
  * band         (band) <U2 16B "vv" "vh"
    sample       <U9 36B "0194630_1"
    spatial_ref  int64 8B 0
  * time         (time) datetime64[ns] 8B 2020-05-03T02:07:17
  * x            (x) float64 2kB 6.004e+05 6.004e+05 ... 6.03e+05 6.03e+05
  * y            (y) float64 2kB 4.275e+06 4.275e+06 ... 4.273e+06 4.273e+06
Data variables:
    bands        (time, band, y, x) float16 279kB -9.461 -10.77 ... -16.67
    center_lat   float64 8B 38.61
    center_lon   float64 8B -121.8
    crs          int64 8B 32610
    file_id      (time) <U67 268B "S1A_IW_GRDH_1SDV_20201105T020809_20201105T...
```
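To inspect a single sample yourself, you can open the zipped Zarr store directly. Below is a minimal sketch, assuming the `.zarr.zip` file has already been extracted from its shard into the working directory:

```python
import xarray as xr
import zarr  # zarr==2.18.0, see Setup below

# Open the zipped Zarr store read-only and load it as an xarray Dataset
store = zarr.ZipStore("ssl4eos12_val_0080385.zarr.zip", mode="r")
ds = xr.open_zarr(store)
print(ds)  # prints a repr like the one above
store.close()
```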
Sentinel-2 modalities and LULC additionally provide a `cloud_mask` variable as metadata.
---
## Description
TerraMesh fuses complementary optical, radar, topographic and thematic layers into pixel‑aligned 10 m cubes, allowing models to learn joint representations of land cover, vegetation dynamics and surface structure at planetary scale.
The dataset is globally distributed and covers multiple years.
Figures (not shown): heat map of the sample count in a one-degree grid, and monthly distribution of all S-2 timestamps.
---
## Performance evaluation

TerraMesh was used to pre-train [TerraMind-B](https://huggingface.co/ibm-esa-geospatial/TerraMind-1.0-base).
On the six evaluated segmentation tasks from the PANGAEA benchmark, TerraMind-B reaches an average mIoU of 66.6%, the best overall score with an average rank of 2.33. This amounts to roughly a 3 percentage-point improvement over the next-best open model (CROMA), underscoring the benefits of pre-training on TerraMesh.
Compared to an ablation model pre-trained only on SSL4EO-S12 locations, TerraMind-B performs about 1 percentage point better overall, with better global generalization on more remote tasks such as CTM-SS.
More details in our [paper](https://huggingface.co/papers/2504.11172).
---
## Usage
### Setup
Install the required packages with:
```shell
pip install huggingface_hub webdataset torch numpy albumentations fsspec braceexpand zarr==2.18.0 numcodecs==0.15.1
```
**Important:** The dataset was created using `zarr==2.18.0` and `numcodecs==0.15.1`. Unfortunately, Zarr 3.0 has backwards-compatibility issues, and Zarr 2.18 is incompatible with NumCodecs >= 0.16.
### Download
You can download the dataset with the Hugging Face CLI tool. Please note that the full dataset requires around 16 TB of storage.
```shell
huggingface-cli download ibm-esa-geospatial/TerraMesh --repo-type dataset --local-dir data/TerraMesh
```
If you would like to download only a subset of the data, you can specify it with `--include`.
```shell
# Only download val data
huggingface-cli download ibm-esa-geospatial/TerraMesh --repo-type dataset --include "val/*" --local-dir data/TerraMesh
# Only download a single modality (e.g., S2L2A)
huggingface-cli download ibm-esa-geospatial/TerraMesh --repo-type dataset --include "*/S2L2A/*" --local-dir data/TerraMesh
```
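Alternatively, the same filtered download can be done from Python with `huggingface_hub.snapshot_download`, e.g. to fetch only the S2L2A validation data together with the data loading code:

```python
from huggingface_hub import snapshot_download

# Download the validation split of a single modality plus the data loading code
snapshot_download(
    repo_id="ibm-esa-geospatial/TerraMesh",
    repo_type="dataset",
    allow_patterns=["val/S2L2A/*", "terramesh.py"],
    local_dir="data/TerraMesh",
)
```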
### Data loader
We provide the data loading code in `terramesh.py`, which is downloaded together with the dataset. If you use streaming during development, you can download the file directly via this [link](https://huggingface.co/datasets/ibm-esa-geospatial/TerraMesh/resolve/main/terramesh.py) or with:
```shell
wget https://huggingface.co/datasets/ibm-esa-geospatial/TerraMesh/resolve/main/terramesh.py
```
You can use the `build_terramesh_dataset` function to initialize a dataset, which uses the WebDataset package to load samples from the shard files. You can stream the data from Hugging Face using the URLs, or download the full dataset and pass a local path (e.g., `data/TerraMesh/`).
```python
from terramesh import build_terramesh_dataset
from torch.utils.data import DataLoader

# If you only pass one modality, it is loaded with the "image" key
dataset = build_terramesh_dataset(
    path="https://huggingface.co/datasets/ibm-esa-geospatial/TerraMesh/resolve/main/",  # Streaming or local path
    modalities=["S2L2A"],
    split="val",
    shuffle=False,  # Set False for split="val"
    batch_size=8,
)
# Batch keys: ["__key__", "__url__", "image"]

# If you pass multiple modalities, they are returned using the modality names as keys
dataset = build_terramesh_dataset(
    path="https://huggingface.co/datasets/ibm-esa-geospatial/TerraMesh/resolve/main/",  # Streaming or local path
    modalities=["S2L2A", "S2L1C", "S2RGB", "S1GRD", "S1RTC", "DEM", "NDVI", "LULC"],
    split="val",
    shuffle=False,  # Set False for split="val"
    batch_size=8,
)

# Set batch_size to None because batching is handled by WebDataset
dataloader = DataLoader(dataset, batch_size=None, num_workers=4)

# Iterate over the dataloader
for batch in dataloader:
    print("Batch keys:", list(batch.keys()))
    # Batch keys: ["__key__", "__url__", "S2L2A", "S2L1C", "S2RGB", "S1RTC", "DEM", "NDVI", "LULC"]
    # Because S1RTC and S1GRD are not present for all samples, each batch includes only one S1 version
    print("Data shape:", batch["S2L2A"].shape)
    # Data shape: torch.Size([8, 12, 264, 264])
    # Dimensions: [batch, channel, h, w]. The code removes the time dim from the source data
    break
```
### Data transform
We provide some additional code for wrapping `albumentations` transform functions.
We recommend albumentations because parameters are shared between all image modalities (e.g., same random crop).
However, it requires some wrapping to bring the data into the expected shape.
```python
import albumentations as A
from albumentations.pytorch import ToTensorV2
from terramesh import build_terramesh_dataset, Transpose, MultimodalTransforms, MultimodalNormalize, statistics

# Define all image modalities
modalities = ["S2L2A", "S2L1C", "S2RGB", "S1GRD", "S1RTC", "DEM", "NDVI", "LULC"]

# Define a multimodal transform that converts the data into the shape expected by albumentations
val_transform = MultimodalTransforms(
    transforms=A.Compose(  # We use albumentations because of the shared transform between image modalities
        [
            Transpose([1, 2, 0]),  # Convert data to channel-last (expected by albumentations)
            MultimodalNormalize(mean=statistics["mean"], std=statistics["std"]),
            A.CenterCrop(224, 224),  # Use center crop for the val split
            # A.RandomCrop(224, 224),  # Use random crop for the train split
            # A.D4(),  # Optionally, use random flipping and rotation for the train split
            ToTensorV2(),  # Convert to tensor and back to channel-first
        ],
        is_check_shapes=False,  # Not needed because of aligned data in TerraMesh
        additional_targets={m: "image" for m in modalities},
    ),
    non_image_modalities=["__key__", "__url__"],  # Additional non-image keys
)

dataset = build_terramesh_dataset(
    path="https://huggingface.co/datasets/ibm-esa-geospatial/TerraMesh/resolve/main/",
    modalities=modalities,
    split="val",
    transform=val_transform,
    batch_size=8,
)
```
If you only use a single modality, you don't need to specify `additional_targets`, but you need to change the normalization to:
```python
MultimodalNormalize(
    mean={"image": statistics["mean"]["<modality>"]},
    std={"image": statistics["std"]["<modality>"]},
),
```
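Putting both changes together, a single-modality val transform could look like this (a sketch using S2L2A as an example; the data is returned under the `image` key):

```python
import albumentations as A
from albumentations.pytorch import ToTensorV2
from terramesh import build_terramesh_dataset, Transpose, MultimodalTransforms, MultimodalNormalize, statistics

# Single-modality sketch: the sample is returned under the "image" key
val_transform = MultimodalTransforms(
    transforms=A.Compose(
        [
            Transpose([1, 2, 0]),  # Channel-last for albumentations
            MultimodalNormalize(
                mean={"image": statistics["mean"]["S2L2A"]},
                std={"image": statistics["std"]["S2L2A"]},
            ),
            A.CenterCrop(224, 224),
            ToTensorV2(),  # Back to channel-first tensors
        ],
        is_check_shapes=False,
        # No additional_targets needed for a single modality
    ),
    non_image_modalities=["__key__", "__url__"],
)

dataset = build_terramesh_dataset(
    path="https://huggingface.co/datasets/ibm-esa-geospatial/TerraMesh/resolve/main/",
    modalities=["S2L2A"],
    split="val",
    transform=val_transform,
    batch_size=8,
)
```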
### Returning metadata
You can pass `return_metadata=True` to `build_terramesh_dataset()` to load center longitude and latitude, timestamps, and the S2 cloud mask as additional metadata.
The resulting batch keys include: `["__key__", "__url__", "S2L2A", "S1RTC", ..., "center_lon", "center_lat", "cloud_mask", "time_S2L2A", "time_S1RTC", ...]`.
Therefore, you need to update the `transform` if you use one:
```python
    ...
        additional_targets={m: "image" for m in modalities + ["cloud_mask"]},
    ),
    non_image_modalities=["__key__", "__url__", "center_lon", "center_lat"] + ["time_" + m for m in modalities],
```
For a single-modality dataset, `time` does not have a suffix, and the following changes to the `transform` are required:
```python
    ...
        additional_targets={"cloud_mask": "image"},
    ),
    non_image_modalities=["__key__", "__url__", "center_lon", "center_lat", "time"],
```
Note that center points are not updated when random crop is used.
The cloud mask provides the classes land (0), water (1), snow (2), thin cloud (3), thick cloud (4), cloud shadow (5), and no data (6).
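For convenience, these class indices can be kept in a small lookup, e.g.:

```python
# Cloud mask classes as documented above
CLOUD_MASK_CLASSES = {
    0: "land",
    1: "water",
    2: "snow",
    3: "thin cloud",
    4: "thick cloud",
    5: "cloud shadow",
    6: "no data",
}
```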
DEM does not return a time value, while LULC uses the S2 timestamp because of the augmentation using the S2 cloud and ice mask. Time values are returned as integers but can be converted back to datetime with:
```python
batch["time_S2L2A"].numpy().astype("datetime64[ns]")
```
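For example, a minimal sketch for iterating with metadata (single modality, so the timestamp key is `time`):

```python
from torch.utils.data import DataLoader
from terramesh import build_terramesh_dataset

dataset = build_terramesh_dataset(
    path="https://huggingface.co/datasets/ibm-esa-geospatial/TerraMesh/resolve/main/",
    modalities=["S2L2A"],
    split="val",
    return_metadata=True,
    batch_size=8,
)
for batch in DataLoader(dataset, batch_size=None, num_workers=4):
    # Convert the integer time values back to datetime
    timestamps = batch["time"].numpy().astype("datetime64[ns]")
    print(batch["center_lat"][0].item(), batch["center_lon"][0].item(), timestamps[0])
    break
```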
If you have any issues with data loading, please create a discussion in the community tab and tag `@blumenstiel`.
---
## Citation
If you use TerraMesh, please cite:
```bibtex
@article{blumenstiel2025terramesh,
  title={TerraMesh: A Planetary Mosaic of Multimodal Earth Observation Data},
  author={Blumenstiel, Benedikt and Fraccaro, Paolo and Marsocci, Valerio and Jakubik, Johannes and Maurogiovanni, Stefano and Czerkawski, Mikolaj and Sedona, Rocco and Cavallaro, Gabriele and Brunschwiler, Thomas and Bernabe-Moreno, Juan and others},
  journal={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  year={2025},
  url={https://huggingface.co/papers/2504.11172}
}
```
---
## License
TerraMesh is released under the **Creative Commons Attribution‑ShareAlike 4.0 (CC‑BY‑SA‑4.0)** license.
---
## Acknowledgements
TerraMesh is part of the **FAST‑EO** project funded by the European Space Agency Φ‑Lab (contract #4000143501/23/I‑DT).
The satellite data (S2L1C, S2L2A, S1GRD, S1RTC) is sourced from the [SSL4EO‑S12 v1.1](https://huggingface.co/datasets/embed2scale/SSL4EO-S12-v1.1) (CC-BY-4.0) and [MajorTOM‑Core](https://huggingface.co/Major-TOM) (CC-BY-SA-4.0) datasets.
The LULC data is provided by [ESRI, Impact Observatory, and Microsoft](https://planetarycomputer.microsoft.com/dataset/io-lulc-annual-v02) (CC-BY-4.0).
The cloud masks used for augmenting the LULC maps and provided as metadata are produced using the [SEnSeIv2](https://github.com/aliFrancis/SEnSeIv2/tree/main?tab=readme-ov-file) model.
The DEM data is produced using [Copernicus WorldDEM-30](https://dataspace.copernicus.eu/explore-data/data-collections/copernicus-contributing-missions/collections-description/COP-DEM) © DLR e.V. 2010-2014 and © Airbus Defence and Space GmbH 2014-2018, provided under COPERNICUS by the European Union and ESA; all rights reserved.