Tasks: Audio Classification
Modalities: Text
Formats: parquet
Languages: English
Size: 10K - 100K

**Version:** 0.1.0
**Created on:** 2025-04-12
**Creators:**
- Earth Species Project (https://www.earthspecies.org)

## Overview

BEANS-Zero is a **bioacoustics** benchmark designed to evaluate multimodal audio-language models in zero-shot settings. Introduced in the NatureLM-audio paper ([Robinson et al., 2024](https://openreview.net/forum?id=hJVdwBpWjt)), it brings together tasks from both existing datasets and newly curated resources.

The benchmark focuses on models that take a bioacoustic audio input (e.g., bird or mammal vocalizations) and a text instruction (e.g., "What species is in this audio?") and return a textual output (e.g., "Taeniopygia guttata"). As a zero-shot benchmark, BEANS-Zero contains only a test split; no training or in-context examples are provided.

Many tasks originate from the original [BEANS benchmark](https://arxiv.org/abs/2210.12300), but BEANS-Zero adds new datasets and task types that broaden the evaluation scope.
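
As a concrete illustration of this audio-plus-instruction interface, the sketch below feeds one BEANS-Zero example to a stand-in model. `toy_model` is purely hypothetical; in practice you would call your own audio-language model (e.g., NatureLM-audio) at that point.

```python
from datasets import load_dataset

ds = load_dataset("EarthSpeciesProject/BEANS-Zero", split="test", streaming=True)
sample = next(iter(ds))  # one test example: audio, an instruction, and a reference output

def toy_model(audio, instruction):
    # Hypothetical placeholder for an audio-language model.
    return "unknown"

prediction = toy_model(sample["audio"], sample["instruction_text"])
print("instruction:", sample["instruction_text"])
print("prediction: ", prediction)
print("reference:  ", sample["output"])  # what the benchmark scores against
```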

## Tasks and Applications

BEANS-Zero supports a wide range of zero-shot evaluation tasks, including:

- **Audio Classification**: Identify species or sound categories from animal vocalizations.
- **Audio Detection**: Detect the presence of species in long-form recordings.
- **Audio Captioning**: Generate natural language descriptions of acoustic scenes.
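
To get a feel for how these task types are phrased in the data, you can peek at an instruction/output pair for a few subsets while streaming. This is only a sketch; the subset identifiers follow the names listed under Dataset Composition below.

```python
from datasets import load_dataset

ds = load_dataset("EarthSpeciesProject/BEANS-Zero", split="test", streaming=True)

# Grab the first example seen for a few subsets and show how each task is posed.
seen = {}
for sample in ds:
    name = sample["dataset_name"]
    if name not in seen:
        seen[name] = (sample["instruction_text"], sample["output"])
    if len(seen) >= 3:
        break

for name, (prompt, target) in seen.items():
    print(f"[{name}]\n  Q: {prompt}\n  A: {target}")
```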

## Dataset Composition

BEANS-Zero combines data from several well-known sources, totaling 91,965 samples (examples). It consists of two main groups:

### Original BEANS Tasks

- `esc-50`: Generic environmental sound classification with 50 labels ([Piczak, 2015](https://dl.acm.org/doi/10.1145/2733373.2806390), License: CC-BY-NC)
- `watkins`: Marine mammal species classification with 31 species ([Sayigh et al., 2016](https://asa.scitation.org/doi/abs/10.1121/2.0000358), free for personal and academic use)
- `cbi`: Bird species classification with 264 labels from the Cornell Birdcall Identification competition hosted on Kaggle ([Howard et al., 2020](https://kaggle.com/competitions/birdsong-recognition), License: CC-BY-NC-SA)
- `humbugdb`: Mosquito wingbeat sound classification into 14 species ([Kiskin et al., 2021](https://arxiv.org/abs/2110.07607), License: CC-BY)
- `enabirds`: Bird dawn chorus detection with 34 species ([Chronister et al., 2021](https://esajournals.onlinelibrary.wiley.com/doi/full/10.1002/ecy.3329), License: CC0)
- `hiceas`: Minke whale detection from the Hawaiian Islands Cetacean and Ecosystem Assessment Survey (HICEAS) ([NOAA, 2022](https://doi.org/10.25921/e12p-gj65), free without restriction)
- `rfcx`: Bird and frog detection from Rainforest Connection (RFCx) data with 24 species ([LeBien et al., 2020](https://www.sciencedirect.com/science/article/pii/S1574954120300637), usage allowed for academic research)
- `gibbons`: Hainan gibbon detection with 3 call-type labels ([Dufourq et al., 2021](https://doi.org/10.1002/rse2.201), License: CC-BY-NC-SA)

### Newly Added Subsets

- `unseen-species-*`: Unseen species classification with 200 species held out from AnimalSpeak ([Robinson et al., 2024](https://doi.org/10.1109/ICASSP48485.2024.10447250)), with each sub-dataset using common (`cmn`), scientific (`sci`), or taxonomic (`tax`) names
- `unseen-genus-*`: Generalize to unseen genera (`cmn`/`sci`/`tax`)
- `unseen-family-*`: Generalize to unseen families (`cmn`/`sci`/`tax`)
- `lifestage`: Predicting the lifestage of birds across multiple species (e.g., adult, juvenile), curated from [xeno-canto](https://xeno-canto.org/)
- `call-type`: Classifying song vs. call across multiple bird species, curated from [xeno-canto](https://xeno-canto.org/)
- `captioning`: Captioning bioacoustic audio from AnimalSpeak ([Robinson et al., 2024](https://doi.org/10.1109/ICASSP48485.2024.10447250))
- `zf-indv`: Determining whether a recording contains multiple zebra finches, using programmatically generated mixtures (1–4 individuals) ([Elie and Theunissen, 2020](https://doi.org/10.6084/m9.figshare.11905533.v1))

Each sample is labeled with its source dataset and license.
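
To see how the 91,965 examples are distributed across these subsets, you can tally the `dataset_name` column (the subset identifiers follow the names listed above); a quick sketch:

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("EarthSpeciesProject/BEANS-Zero", split="test")

# Count examples per subset.
counts = Counter(ds["dataset_name"])
for name, n in counts.most_common():
    print(f"{name:24s} {n}")
```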

## Usage

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("EarthSpeciesProject/BEANS-Zero", split="test")

# see the contents at a glance
print(ds)

# select the examples belonging to a particular subset, e.g. esc50
idx = np.where(np.array(ds["dataset_name"]) == "esc50")[0]
esc50 = ds.select(idx)
print(esc50)
```

```python
# To stream the dataset instead of downloading it, first
ds = load_dataset("EarthSpeciesProject/BEANS-Zero", split="test", streaming=True)

# then iterate over the stream; here we just grab the first sample
for i, sample in enumerate(ds):
    break
print(sample.keys())
```
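
The `np.where`/`select` pattern above requires the fully downloaded (map-style) dataset. With streaming enabled, the same subsetting can be done lazily using `IterableDataset.filter` from the `datasets` library; a minimal sketch:

```python
from datasets import load_dataset

ds = load_dataset("EarthSpeciesProject/BEANS-Zero", split="test", streaming=True)

# Lazily keep only the esc50 examples while streaming.
esc50_stream = ds.filter(lambda sample: sample["dataset_name"] == "esc50")

first = next(iter(esc50_stream))
print(first["dataset_name"], first["output"])
```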

## Data Fields

The following fields are present in each example:

- **source_dataset** (str): One of the source datasets mentioned above
- **audio** (Sequence[float]): The audio data in float32 format. The audio is not decoded.
- **id** (str): Sample uuid.
- **created_at** (str): Sample creation datetime in UTC
- **metadata** (str): A JSON string with per-sample information. Each sample can have a different duration and a different sample rate, e.g. `sample_rate = json.loads(sample["metadata"])["sample_rate"]`
- **file_name** (str): Audio file name
- **instruction** (str): A prompt (a query) corresponding to the audio for your audio-text model, with a placeholder for audio tokens, e.g. `'<Audio><AudioHere></Audio> What is the scientific name for the focal species in the audio?'`
- **instruction_text** (str): Same as **instruction** but without the placeholder for audio tokens.
- **output** (str): The expected output from the model
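
For example, to recover one example's waveform and sample rate from these fields (a small sketch):

```python
import json

import numpy as np
from datasets import load_dataset

ds = load_dataset("EarthSpeciesProject/BEANS-Zero", split="test")
sample = ds[0]

# The audio is stored as a plain float32 sequence; the metadata JSON carries the sample rate.
audio = np.asarray(sample["audio"], dtype=np.float32)
sample_rate = json.loads(sample["metadata"])["sample_rate"]
duration_s = len(audio) / sample_rate
print(f"{sample['id']}: {duration_s:.2f} s at {sample_rate} Hz")
```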

## Licensing

Due to its composite nature, BEANS-Zero is subject to multiple licenses. Individual samples have the "license" field indicating the specific license for that sample. The dataset is not intended for commercial use, and users should adhere to the licenses of the individual datasets.
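
To check which licenses cover the portion of the data you intend to use, you can tally the per-sample license field mentioned above. This sketch assumes the field is exposed as a regular `license` column:

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("EarthSpeciesProject/BEANS-Zero", split="test")

# Count samples under each license before deciding what to use.
license_counts = Counter(ds["license"])
print(license_counts)
```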

## Citation

If you use BEANS-Zero, please cite the following:

```bibtex
@inproceedings{robinson2025naturelm,
  title     = {NatureLM-audio: an Audio-Language Foundation Model for Bioacoustics},
  author    = {David Robinson and Marius Miron and Masato Hagiwara and Olivier Pietquin},
  booktitle = {Proceedings of the International Conference on Learning Representations (ICLR)},
  year      = {2025},
  url       = {https://openreview.net/forum?id=hJVdwBpWjt}
}
```

## Contact

For questions, comments, or contributions, please contact:

- David Robinson (david at earthspecies dot org)
- Marius Miron (marius at earthspecies dot org)
- Masato Hagiwara (masato at earthspecies dot org)
- Gagan Narula (gagan at earthspecies dot org)
- Milad Alizadeh (milad at earthspecies dot org)