Update README.md
<!-- Provide the basic links for the dataset. -->

- **Code Repository:** [BIRDeep Neural Networks](https://github.com/GrunCrow/BIRDeep_NeuralNetworks)
- **Paper:** Decoding the Sounds of Doñana: Advancements in Bird Detection and Identification Through Deep Learning

## Uses
The dataset is intended for use in training and evaluating models for bird vocalization detection and identification. It can be used to automate the annotation of these recordings, facilitating relevant ecological studies.

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

The distribution of the 38 different classes through the 3 subdatasets (train, validation, and test):

![](Assets/Class%20Distribution.png)

## Data Files Description

There are 3 `.CSV` files that contain all the metadata for each split of the dataset (train, validation, and test). Each row represents one annotation (an annotated bird song), so there may be more than one row per audio. Each `.CSV` file includes the following columns:

- **path**: Relative path from the `Audio` folder to the corresponding audio file. For images, change the file extension to `.PNG` and use the `images` folder instead of the `Audios` folder.
- **annotator**: Expert ornithologist who annotated the detection.
- **recorder**: Code of the recorder; see below for the mapping between recorders, locations, and coordinates.
- **date**: Date of the recording.
- **time**: Time of the recording.
- **audio_duration**: Duration of the audio (all recordings are 1 minute long).
- **start_time**: Start time of the annotated bird song relative to the full duration of the audio.
- **end_time**: End time of the annotated bird song relative to the full duration of the audio.
- **low_frequency**: Lower frequency bound of the annotated bird song.
- **high_frequency**: Upper frequency bound of the annotated bird song.
- **specie**: Species to which the annotation belongs.
- **bbox**: Bounding box coordinates in the image (YOLOv8 format).
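As a sketch of how a split file can be consumed with the standard library (the sample rows below are invented for illustration and only mirror the column list above; real paths, species, and values will differ):

```python
import csv
import io
from collections import Counter

# Invented sample mimicking the columns listed above (illustration only).
sample = """\
path,annotator,recorder,date,time,audio_duration,start_time,end_time,low_frequency,high_frequency,specie,bbox
site1/rec_0700.WAV,annotator_1,AM15,03/02/2023,07:00:00,60,12.4,14.1,1800,5200,species_a,0.22 0.56 0.03 0.21
site1/rec_0700.WAV,annotator_1,AM15,03/02/2023,07:00:00,60,20.0,21.5,900,3100,species_b,0.35 0.75 0.03 0.14
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# More than one annotation can share the same audio file.
per_audio = Counter(r["path"] for r in rows)

# Duration of each annotated song in seconds.
durations = [float(r["end_time"]) - float(r["start_time"]) for r in rows]
```

In a real pipeline the same code would read one of the split `.CSV` files from disk instead of the in-memory string.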

Each annotation has also been converted to the YOLOv8 label format: a labels folder mirrors the image folder structure (which is the same as the `Audio` folder structure) and contains one `.TXT` file per image, with one row per annotation giving the species class and bounding box.
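The mapping from a time-frequency annotation to a YOLO-style normalized box can be sketched as follows. The 16 kHz frequency ceiling and the top-down image axis are assumptions made here for illustration; the actual spectrogram parameters used to generate the labels are not specified in this card.

```python
def annotation_to_yolo(start_t: float, end_t: float,
                       low_f: float, high_f: float,
                       clip_s: float = 60.0, f_max: float = 16_000.0):
    """Convert a time/frequency annotation into a normalized YOLO
    (x_center, y_center, width, height) box on the spectrogram image.

    clip_s and f_max are illustrative assumptions: clips are 1 minute,
    and we posit a 16 kHz frequency ceiling for the image.
    """
    x_c = (start_t + end_t) / 2 / clip_s
    w = (end_t - start_t) / clip_s
    # Image row 0 is at the top, but frequency grows upward, so invert the axis.
    y_c = 1 - (low_f + high_f) / 2 / f_max
    h = (high_f - low_f) / f_max
    return x_c, y_c, w, h

box = annotation_to_yolo(10.0, 20.0, 0.0, 8_000.0)
# → (0.25, 0.75, ~0.1667, 0.5)
```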
## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

The dataset was created to improve the accuracy and efficiency of bird species identification using deep learning models in our study area, Doñana National Park. It addresses the challenge of managing large volumes of acoustic recordings when identifying species of interest in ecoacoustics studies.

### Source Data

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

Audio recordings were collected from three main habitats across nine different locations within Doñana National Park using automatic audio recorders (AudioMoths). Approximately 500 minutes of audio data were annotated, prioritizing times when birds are most active to capture as many songs as possible, specifically from a few hours before dawn until midday.

The distribution of the recorders is as follows:

![](Assets/BIRDeep_AudioMoths_Map.png)

The names of the places correspond to the following recorders and coordinates:

| Number | Habitat   | Place Name     | Recorder | Lat        | Lon          | Installation Date |
|--------|-----------|----------------|----------|------------|--------------|-------------------|
| Site 8 | marshland | Cancela Millán | AM15     | 37.0563889 | -6.6025      | 03/02/2023        |
| Site 9 | marshland | Juncabalejo    | AM16     | 36.9361111 | -6.378333333 | 03/02/2023        |

All recording times and dates are in UTC.
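Since timestamps are stored in UTC, converting a recording time to local clock time is a one-liner; the example date below is invented, and the UTC+1 winter offset for mainland Spain is stated here as general context, not something taken from the dataset:

```python
from datetime import datetime, timedelta, timezone

# A recording timestamped 2023-02-03 07:00 UTC (illustrative values).
utc_stamp = datetime(2023, 2, 3, 7, 0, tzinfo=timezone.utc)

# Mainland Spain is UTC+1 (CET) in winter.
local = utc_stamp.astimezone(timezone(timedelta(hours=1), "CET"))
```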

#### Data producers

- Eduardo Santamaría García
- Giulia Bastianelli
## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Users should be aware of the ecological context and potential biases when using the dataset. They should also consider the class imbalance and the challenges related to environmental noise.
## More Information
This dataset incorporates synthetic background audio, which has been created by introducing noise and modifying the original audio intensities. This process, known as Data Augmentation, enhances the robustness of the dataset. Additionally, a subset of the ESC-50 dataset, which is a widely recognized benchmark for environmental sound classification, has also been included to enrich the diversity of the dataset.
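A minimal sketch of this kind of augmentation, with arbitrary noise and gain values (not the parameters actually used to build the dataset):

```python
import random

def augment(samples, noise_std=0.01, gain=0.5, seed=0):
    """Return a noisier, intensity-scaled copy of an audio signal.

    Illustrative only: adds Gaussian noise and rescales amplitude,
    mimicking the synthetic background audio described above. The
    actual augmentation parameters are not specified in this card.
    """
    rng = random.Random(seed)
    return [gain * s + rng.gauss(0.0, noise_std) for s in samples]

clean = [0.0, 0.5, -0.5, 0.25]
noisy = augment(clean)
```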

This material can be excluded from the dataset: it sits in separate subfolders (`Data Augmentation` and `ESC50`) under the audio, image, and label root folders, and the corresponding annotations must also be removed from the `.CSV` files before they are used to process the dataset.
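Excluding that material when processing the `.CSV` metadata amounts to a path-prefix filter on the folder names mentioned above (the sample rows are invented for illustration):

```python
# Invented sample rows; only the folder prefixes come from the card above.
rows = [
    {"path": "Data Augmentation/aug_001.WAV", "specie": "background"},
    {"path": "ESC50/rain_01.WAV", "specie": "background"},
    {"path": "site1/rec_0700.WAV", "specie": "species_a"},
]

# Drop any annotation whose audio lives in the augmentation or ESC-50 folders.
EXCLUDE = ("Data Augmentation/", "ESC50/")
kept = [r for r in rows if not r["path"].startswith(EXCLUDE)]
```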
## Dataset Card Authors