GrunCrow committed (verified)
Commit 65e8583 · Parent: 3ac1fd6

Update README.md

Files changed (1): README.md (+146 −1)
pretty_name: BIRDeep_AudioAnnotations
size_categories:
- n<1K
---

# BIRDeep Audio Annotations

The BIRDeep Audio Annotations dataset is a collection of bird vocalizations recorded in Doñana National Park, Spain. It was created as part of the BIRDeep project, which aims to optimize the detection and classification of bird species in audio recordings using deep learning techniques. The dataset is intended for training and evaluating models for bird vocalization detection and identification.

## Dataset Details

### Dataset Description

- **Curated by:** Estación Biológica de Doñana (CSIC) and Universidad de Córdoba
- **Funded by:** The BIRDeep project (TED2021-129871A-I00), funded by MICIU/AEI/10.13039/501100011033 and the European Union NextGenerationEU/PRTR, as well as grant PID2020-115129RJ-I00 from MCIN/AEI/10.13039/501100011033
- **Shared by:** BIRDeep Project
- **Language(s):** English
- **License:** MIT

### Dataset Sources

- **Repository:** [BIRDeep Neural Networks](https://github.com/GrunCrow/BIRDeep_NeuralNetworks)
- **Paper:** Decoding the Sounds of Doñana: Advancements in Bird Detection and Identification Through Deep Learning

## Uses

### Direct Use

The dataset is intended for training and evaluating models that detect and identify bird vocalizations. It can be used to automate the annotation of acoustic recordings, facilitating ecological studies.

### Out-of-Scope Use

The dataset is not suited to purposes unrelated to bird vocalization detection and identification.

## Dataset Structure

The dataset includes audio data categorized into 38 classes, representing a variety of bird species found in the park. The data was collected from three main habitats across nine locations within Doñana National Park, providing a diverse range of bird vocalizations.
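
As a minimal sketch of working with class-labeled annotations like these, the snippet below counts annotations per class (useful for spotting class imbalance). The record fields and label values are illustrative assumptions, not the dataset's actual schema.

```python
from collections import Counter

# Hypothetical annotation records; field names and labels are illustrative
# assumptions, not the dataset's actual schema.
annotations = [
    {"recording": "habitat1_site1_0630.wav", "label": "Ciconia ciconia"},
    {"recording": "habitat1_site1_0630.wav", "label": "Bird"},
    {"recording": "habitat2_site4_0700.wav", "label": "No Audio"},
]

# Count annotations per class across the collection.
per_class = Counter(a["label"] for a in annotations)
print(per_class.most_common())
```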

## Dataset Creation

### Curation Rationale

The dataset was created to improve the accuracy and efficiency of bird species identification with deep learning models. It addresses the challenge of managing large collections of acoustic recordings when identifying species of interest in ecoacoustic studies.

### Source Data

#### Data Collection and Processing

Audio recordings were collected from three main habitats across nine locations within Doñana National Park using automatic audio recorders (AudioMoths). Approximately 500 minutes of audio were annotated, prioritizing the times of greatest bird activity (from a few hours before dawn until midday) so as to capture as many recordings with songs as possible.

#### Who are the source data producers?

The data was produced by researchers from Estación Biológica de Doñana and Universidad de Córdoba.

### Annotations

#### Annotation process

Annotations were made manually by experts, resulting in 3749 annotations across 38 classes. In addition to the species-specific classes, other general classes were distinguished: Genus (when the species was unknown but the genus could be identified), a general Bird class, and a No Audio class for recordings that contain only soundscape without bird songs. Because the Bird Song Detector has only two classes, labels were reclassified as Bird or No Bird; recordings containing only soundscape background without biotic sound, or whose biotic sounds were non-avian, were labeled No Bird.
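
The two-class reclassification described above can be sketched as follows. Only the No Audio class is named in this card as non-bird, so the `NON_BIRD` set is an assumption to be extended with any non-avian sound classes in the actual label set; the Genus example is taken from the classes described here.

```python
# Collapse the 38 annotation classes into the Bird Song Detector's two classes.
# NON_BIRD is an assumption: only "No Audio" is named in this card; extend it
# with any non-avian sound classes present in the actual label set.
NON_BIRD = {"No Audio"}

def to_detector_label(label: str) -> str:
    """Return 'Bird' for any avian class (species, Genus, Bird),
    'No Bird' for soundscape-only or non-avian recordings."""
    return "No Bird" if label in NON_BIRD else "Bird"

print(to_detector_label("Genus"))     # Bird
print(to_detector_label("No Audio"))  # No Bird
```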

#### Who are the annotators?

- E. Santamaría García
- G. Bastianelli

#### Personal and Sensitive Information

The dataset does not contain personal, sensitive, or private information.

## Bias, Risks, and Limitations

The dataset may carry biases from the specific ecological context of Doñana National Park and its focus on bird vocalizations. It also exhibits class imbalance, with annotation frequencies varying widely across bird species classes, and it contains the environmental noise inherent to field recordings.

### Recommendations

Users should be aware of the ecological context and potential biases of the dataset, and should take the class imbalance and environmental noise into account when training and evaluating models.

<!--
## Citation

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]
-->

<!--
## Glossary

[More Information Needed]
-->

## More Information

This dataset incorporates synthetic background audio, created by adding noise to and modifying the intensities of the original recordings. This data-augmentation step improves the robustness of models trained on the dataset. A subset of the ESC-50 dataset, a widely used benchmark for environmental sound classification, has also been included to enrich the dataset's diversity.
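
A minimal sketch of the kind of augmentation described (noise injection plus intensity change), assuming audio is a float array in [-1, 1]; the function name and parameter values are illustrative, not those used to build this dataset.

```python
import numpy as np

def augment(audio: np.ndarray, noise_std: float = 0.01,
            gain: float = 0.8, seed: int = 0) -> np.ndarray:
    """Add Gaussian noise, rescale intensity, and clip back to [-1, 1]."""
    rng = np.random.default_rng(seed)
    noisy = audio + rng.normal(0.0, noise_std, size=audio.shape)
    return np.clip(gain * noisy, -1.0, 1.0)

clean = np.zeros(16000)           # one second of silence at 16 kHz
augmented = augment(clean)        # synthetic background audio
```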

## Dataset Card Authors

- Alba Márquez-Rodríguez
- Miguel Ángel Muñoz Mohedano
- Manuel Jesús Marín
- E. Santamaría García
- G. Bastianelli
- I. Sagrera

## Dataset Card Contact

Alba Márquez-Rodríguez - [email protected]