---
license: mit
task_categories:
- audio-classification
language:
- en
- es
tags:
- biology
- synthetic
pretty_name: BIRDeep_AudioAnnotations
size_categories:
- n<1K
---

# BIRDeep Audio Annotations

The BIRDeep Audio Annotations dataset is a collection of bird vocalizations from Doñana National Park, Spain. It was created as part of the BIRDeep project, which aims to optimize the detection and classification of bird species in audio recordings using deep learning techniques. The dataset is intended for use in training and evaluating models for bird vocalization detection and identification.

## Dataset Details

### Dataset Description

- **Curated by:** Estación Biológica de Doñana (CSIC) and Universidad de Córdoba
- **Funded by:** The BIRDeep project (TED2021-129871A-I00), funded by MICIU/AEI/10.13039/501100011033 and the "European Union NextGenerationEU/PRTR", and grant PID2020-115129RJ-I00 funded by MCIN/AEI/10.13039/501100011033
- **Shared by:** BIRDeep Project
- **Language(s):** English, Spanish
- **License:** MIT

### Dataset Sources

- **Repository:** [BIRDeep Neural Networks](https://github.com/GrunCrow/BIRDeep_NeuralNetworks)
- **Paper:** Decoding the Sounds of Doñana: Advancements in Bird Detection and Identification Through Deep Learning

## Uses

### Direct Use

The dataset is intended for training and evaluating models for bird vocalization detection and identification. It can be used to automate the annotation of acoustic recordings, facilitating ecological studies.

### Out-of-Scope Use

The dataset should not be used for purposes unrelated to bird vocalization detection and identification.

## Dataset Structure

The dataset includes audio data categorized into 38 classes, representing a variety of bird species found in the park. The data was collected from three main habitats across nine locations within Doñana National Park, providing a diverse range of bird vocalizations.

## Dataset Creation

### Curation Rationale

The dataset was created to improve the accuracy and efficiency of bird species identification using deep learning models. It addresses the challenge of managing large volumes of acoustic recordings when identifying species of interest in ecoacoustics studies.

### Source Data

#### Data Collection and Processing

Audio recordings were collected from three main habitats across nine locations within Doñana National Park using automatic audio recorders (AudioMoths). Approximately 500 minutes of audio were annotated, prioritizing periods of peak bird activity, from a few hours before dawn until midday, to capture as many recordings with songs as possible.

#### Who are the source data producers?

The data was produced by researchers from Estación Biológica de Doñana and Universidad de Córdoba.

### Annotations

#### Annotation process

Annotations were made manually by experts, resulting in 3749 annotations across 38 classes. In addition to the species-specific classes, three general classes were distinguished: a Genus class (used when the species could not be determined but the genus could), a general Bird class, and a No Audio class for recordings that contain only soundscape without bird songs. Because the Bird Song Detector has only two classes, labels were reclassified as Bird or No Bird, the latter covering recordings that contain only soundscape background without biotic sound or whose biotic sounds are non-avian.
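This two-class reclassification amounts to a simple mapping over the annotation labels. Below is a minimal sketch, assuming annotations are handled as (file, class) pairs; the class names, file names, and function are illustrative assumptions, not the project's actual code:

```python
# Illustrative sketch of the Bird / No Bird reclassification described above.
# Data layout and identifiers are assumptions, not the BIRDeep project's code.

# Classes that map to "No Bird": soundscape-only recordings; any non-avian
# biotic classes would be added to this set as well.
NON_BIRD_CLASSES = {"No Audio"}

def to_detector_label(annotation_class: str) -> str:
    """Collapse the 38 annotation classes into the detector's two classes."""
    return "No Bird" if annotation_class in NON_BIRD_CLASSES else "Bird"

annotations = [("rec_001.wav", "Turdus merula"), ("rec_002.wav", "No Audio")]
binary = [(f, to_detector_label(c)) for f, c in annotations]
print(binary)  # [('rec_001.wav', 'Bird'), ('rec_002.wav', 'No Bird')]
```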
#### Who are the annotators?

- E. Santamaría García
- G. Bastianelli

#### Personal and Sensitive Information

The dataset does not contain personal, sensitive, or private information.

## Bias, Risks, and Limitations

The dataset may be biased by the specific ecological context of Doñana National Park and its focus on bird vocalizations. It also exhibits class imbalance, with annotation frequencies varying widely across bird species classes. Additionally, the recordings contain inherent challenges related to environmental noise.

### Recommendations

Users should be aware of the ecological context and potential biases when using the dataset, and should take the class imbalance and environmental noise into account.

## More Information

This dataset incorporates synthetic background audio, created by introducing noise and modifying the intensities of the original audio. This process, known as data augmentation, enhances the robustness of the dataset (a minimal illustrative sketch appears at the end of this card). Additionally, a subset of the ESC-50 dataset, a widely recognized benchmark for environmental sound classification, has been included to enrich the diversity of the dataset.

## Dataset Card Authors

- Alba Márquez-Rodríguez
- Miguel Ángel Muñoz Mohedano
- Manuel Jesús Marín
- E. Santamaría García
- G. Bastianelli
- I. Sagrera

## Dataset Card Contact

Alba Márquez-Rodríguez - ai.gruncrow@gmail.com
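As referenced under More Information above, the synthetic backgrounds were produced by adding noise and altering audio intensities. The sketch below shows one way such an augmentation could look, assuming waveforms are NumPy float arrays in [-1, 1]; the function, parameter values, and test signal are illustrative assumptions, not the project's actual pipeline:

```python
# Minimal sketch of the augmentation described under More Information:
# noise injection plus an intensity (gain) change. All parameters here
# are illustrative assumptions, not the BIRDeep project's settings.
import numpy as np

def augment(waveform: np.ndarray, noise_std: float = 0.005,
            gain_db: float = -6.0) -> np.ndarray:
    """Add Gaussian background noise and rescale the signal's intensity."""
    noisy = waveform + np.random.normal(0.0, noise_std, size=waveform.shape)
    gain = 10.0 ** (gain_db / 20.0)          # convert dB to linear amplitude
    return np.clip(noisy * gain, -1.0, 1.0)  # keep samples in valid range

# Example on a synthetic 1 s, 32 kHz sine tone standing in for a recording
t = np.linspace(0.0, 1.0, 32000, endpoint=False)
augmented = augment(0.5 * np.sin(2 * np.pi * 440.0 * t))
```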