## Overview

BEANS-Zero is a **bioacoustics** benchmark designed to evaluate multimodal audio-language models in zero-shot settings. Introduced in the NatureLM-audio paper ([Robinson et al., 2025](https://openreview.net/forum?id=hJVdwBpWjt)), it brings together tasks from both existing datasets and newly curated resources.

The benchmark focuses on models that take a bioacoustic audio input (e.g., bird or mammal vocalizations) and a text instruction (e.g., "What species is in this audio?"), and return a textual output (e.g., "Taeniopygia guttata"). As a zero-shot benchmark, BEANS-Zero contains only a test split—no training or in-context examples are provided.
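
Concretely, a model under evaluation receives only the audio and the instruction; the reference answer is held out for scoring. A minimal sketch of that contract, where `my_model.generate` is a hypothetical stand-in for whatever inference API your audio-text model exposes:

```python
def evaluate_zero_shot(my_model, example):
    # the model sees only the audio and the instruction text
    prediction = my_model.generate(
        audio=example["audio"],              # raw waveform samples
        prompt=example["instruction_text"],  # e.g. "What species is in this audio?"
    )
    # the reference answer is used for scoring only, never shown to the model
    return prediction, example["output"]
```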

BEANS-Zero combines data from several well-known sources. Among the tasks:

- `lifestage`: Predicting the lifestage of birds across multiple species (e.g., adult, juvenile), curated from [xeno-canto](https://xeno-canto.org/)
- `call-type`: Classifying song vs. call across multiple bird species, curated from [xeno-canto](https://xeno-canto.org/)
- `captioning`: Captioning bioacoustic audio on AnimalSpeak ([Robinson et al., 2024](https://doi.org/10.1109/ICASSP48485.2024.10447250))
- `zf-indv`: Determining whether a recording contains multiple zebra finches, using programmatically generated mixtures (1–4 individuals) ([Elie and Theunissen, 2016](https://doi.org/10.6084/m9.figshare.11905533.v1))

Each sample is labeled with its source dataset and license.
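
Because every example carries these labels, you can slice the benchmark down to a single task or audit licenses before redistribution. A minimal sketch with the `datasets` library, assuming the task identifiers above (e.g. `zf-indv`) are the values stored in the `dataset_name` field described below:

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("EarthSpeciesProject/BEANS-Zero", split="test")

# keep only one evaluation task, e.g. the zebra finch mixture task above
zf = ds.filter(lambda ex: ex["dataset_name"] == "zf-indv")
print(len(zf))

# tally licenses across the whole benchmark before redistributing anything
print(Counter(ds["license"]))
```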

```python
# load the zero-shot test split
from datasets import load_dataset
import numpy as np

ds = load_dataset("EarthSpeciesProject/BEANS-Zero", split="test")
print(ds)
```
```python
# get audio for the first sample in the dataset, the 0th index
audio = np.array(ds[0]["audio"])
print(audio.shape)

# get the instruction (prompt / query) for that sample
print(ds[0]["instruction_text"])

# the desired output (should *only* be used for evaluation)
print(ds[0]["output"])

# the component datasets of BEANS-Zero are:
print(sorted(set(ds["dataset_name"])))
```
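
Since clips differ in duration and sample rate, each example carries its own `metadata` JSON string (see the field list below). A minimal sketch of recovering those values; the exact key names inside the JSON are an assumption here, so inspect one sample first:

```python
import json

from datasets import load_dataset

ds = load_dataset("EarthSpeciesProject/BEANS-Zero", split="test")

# each sample records its own duration and sample rate as a JSON string
meta = json.loads(ds[0]["metadata"])
print(meta)  # inspect the real key names on your copy of the data

# "sample_rate" and "duration" are assumed key names -- verify them above
sample_rate = meta.get("sample_rate")
duration = meta.get("duration")
print(sample_rate, duration)
```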
The following fields are present in each example:

- **source_dataset** (str): One of the source datasets mentioned above.
- **audio** (Sequence[float]): The audio data as a list of floats.
- **id** (str): Sample uuid.
- **created_at** (str): Sample creation datetime in UTC.
- **metadata** (str): A JSON string recording each sample's duration (in seconds) and sample rate (in Hz), since these vary across samples.
- **file_name** (str): Audio file name.
- **instruction** (str): A prompt (query) for your audio-text model, corresponding to the audio, with a placeholder for audio tokens, e.g. '<Audio><AudioHere></Audio> What is the scientific name for the focal species in the audio?' (see the splicing sketch after this list).
- **instruction_text** (str): Same as **instruction** but without the placeholder for audio tokens.
- **output** (str): The expected output from the model.
- **task** (str): The task type, e.g. classification / detection / captioning.
- **dataset_name** (str): Names corresponding to the evaluation tasks, e.g. 'esc50' or 'unseen-family-sci'.
- **license** (str): The license of the dataset. For example, 'CC-BY-NC' or 'CC0'.
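
The `<Audio><AudioHere></Audio>` placeholder in **instruction** lets you splice in whatever audio-token markup your model expects. A minimal sketch, where `<audio>` is a hypothetical token standing in for your model's own convention:

```python
from datasets import load_dataset

ds = load_dataset("EarthSpeciesProject/BEANS-Zero", split="test")

# swap the dataset's generic placeholder for model-specific audio markup;
# "<audio>" here is a hypothetical token -- use your model's actual convention
prompt = ds[0]["instruction"].replace("<Audio><AudioHere></Audio>", "<audio>")
print(prompt)

# instruction_text is the same query with no placeholder at all
print(ds[0]["instruction_text"])
```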