|
--- |
|
license: cc-by-nc-4.0 |
|
task_categories: |
|
- audio-classification |
|
language: |
|
- en |
|
- fr |
|
- de |
|
- pl |
|
- es |
|
- it |
|
tags: |
|
- ReplayDF |
|
- Audio-Deepfake |
|
- Replay-Attack |
|
- Spoof |
|
- Replay |
|
pretty_name: ReplayDF |
|
size_categories: |
|
- 100K<n<1M |
|
--- |
|
|
|
 |
|
 |
|
|
|
# ReplayDF |
|
|
|
**ReplayDF** is a dataset for evaluating the impact of **replay attacks** on audio deepfake detection systems. |
|
It features re-recorded bona-fide and synthetic speech derived from [M-AILABS](https://github.com/imdatceleste/m-ailabs-dataset) and [MLAAD v5](https://deepfake-total.com/mlaad), using **109 unique speaker-microphone combinations** across six languages and four TTS models in diverse acoustic environments. |
|
|
|
This dataset shows that such replays can significantly degrade the performance of state-of-the-art detectors: audio deepfakes become much harder to detect once they have been played over a loudspeaker and re-recorded with a microphone.
|
It is provided for **non-commercial research** to support the development of **robust and generalizable** deepfake detection systems. |
|
|
|
## 📄 Paper |
|
[Replay Attacks Against Audio Deepfake Detection (Interspeech 2025)](https://arxiv.org/pdf/2505.14862) |
|
|
|
## 🔽 Download |
|
```bash
# Install Git LFS (required for the large audio files), then clone the dataset.
sudo apt-get install git-lfs
git lfs install
git clone https://huggingface.co/datasets/mueller91/ReplayDF
```
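After cloning, a quick sanity check of the layout (described under Folder Structure below) can look like the following minimal Python sketch; the `ReplayDF` path and the `.wav` extension are assumptions.

```python
from pathlib import Path

# Assumption: the repository was cloned into ./ReplayDF (see the command above)
# and the re-recorded audio is stored as .wav files.
root = Path("ReplayDF")

for uid_dir in sorted((root / "wav").iterdir()):
    if not uid_dir.is_dir():
        continue
    # Each UID folder holds re-recorded spoofed and benign audio plus a meta.csv.
    n_spoof = len(list((uid_dir / "spoof").glob("*.wav")))
    n_benign = len(list((uid_dir / "benign").glob("*.wav")))
    print(f"{uid_dir.name}: {n_spoof} spoof / {n_benign} benign recordings")
```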
|
|
|
## 📌 Citation

```
@article{muller2025replaydf,
  title   = {Replay Attacks Against Audio Deepfake Detection},
  author  = {Nicolas Müller and Piotr Kawa and Wei-Herng Choong and Adriana Stan and Aditya Tirumala Bukkapatnam and Karla Pizzi and Alexander Wagner and Philip Sperl},
  journal = {Interspeech 2025},
  year    = {2025}
}
```
|
|
|
## 📁 Folder Structure

```
ReplayDF/
├── aux/
│   ├── <UID1>/          # setup information, recorded sine sweep, RIR (derived from the sine sweep)
│   ├── <UID2>/
│   └── ...
├── wav/
│   ├── <UID1>/
│   │   ├── spoof        # re-recorded audio samples (spoofed)
│   │   ├── benign       # re-recorded audio samples (bona fide)
│   │   └── meta.csv     # metadata for this UID's recordings
│   ├── <UID2>/
│   │   ├── spoof
│   │   ├── benign
│   │   └── meta.csv
│   └── ...
└── mos/
    ├── mos.png          # MOS ratings plot
    └── mos_scores       # individual MOS scores
```
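A minimal loading sketch built on the layout above: labels come directly from the `spoof` / `benign` folder names, and nothing is assumed about the `meta.csv` columns. The `.wav` extension and the `ReplayDF` root path are assumptions.

```python
from pathlib import Path

import soundfile as sf  # pip install soundfile


def iter_recordings(root="ReplayDF"):
    """Yield (uid, label, waveform, sample_rate) for each re-recorded file.

    Labels are taken from the folder names ('spoof' / 'benign'); per-file
    metadata lives in each UID's meta.csv, whose schema is not assumed here.
    """
    for uid_dir in sorted(Path(root, "wav").iterdir()):
        if not uid_dir.is_dir():
            continue
        for label in ("spoof", "benign"):
            for wav_path in sorted((uid_dir / label).glob("*.wav")):
                audio, sr = sf.read(wav_path)
                yield uid_dir.name, label, audio, sr


# Example: inspect the first recording.
uid, label, audio, sr = next(iter_recordings())
print(uid, label, audio.shape, sr)
```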
|
|
|
## 📄 License

Attribution-NonCommercial 4.0 International (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
|
|
|
|
|
## Resources

Find the original resources (i.e., the audio files before replay and re-recording) here:

- MLAAD dataset v5: https://deepfake-total.com/mlaad
- M-AILABS dataset: https://github.com/imdatceleste/m-ailabs-dataset
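Because each `aux/<UID>/` folder also ships an RIR derived from the recorded sine sweep, one way to combine these originals with ReplayDF is to roughly approximate a given replay channel by convolving an original file with that RIR. This is only an illustrative sketch: the filenames below are placeholders, mono audio is assumed, and a linear convolution does not reproduce loudspeaker or microphone nonlinearities.

```python
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

# Placeholder filenames: a clean MLAAD/M-AILABS sample and the RIR stored
# under aux/<UID>/ (the actual filenames and formats in aux/ may differ).
speech, sr = sf.read("original.wav")
rir, sr_rir = sf.read("rir.wav")
assert sr == sr_rir, "resample one signal if the sample rates differ"

# Convolving with the RIR approximates the loudspeaker-room-microphone channel.
replayed = fftconvolve(speech, rir, mode="full")[: len(speech)]
replayed /= max(1e-9, float(np.max(np.abs(replayed))))  # peak-normalize
sf.write("simulated_replay.wav", replayed, sr)
```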
|
|
|
## Mic/Speaker Matrix
|
 |
|
|
|
## 📊 Mean Opinion Scores (MOS)
|
|
|
The scoring criteria for rating the audio files are outlined in the table below: |
|
|
|
|
|
| Rating | Description | Speech Quality | Distortion (background noise, overdrive, etc.) |
|--------|-------------|----------------|------------------------------------------------|
| 5 | Excellent | Clear | Imperceptible |
| 4 | Good | Clear | Slightly perceptible, but not annoying |
| 3 | Fair | Understandable | Perceptible and slightly annoying |
| 2 | Poor | Understandable | Perceptible and annoying |
| 1 | Very Poor | Barely understandable | Very annoying and objectionable |
| e | Error | Inaudible | Heavy |
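The on-disk format of `mos/mos_scores` is not documented here, but as an illustration of how ratings on this scale are typically aggregated, the sketch below averages the numeric ratings for a condition and excludes "e" (error) entries.

```python
def mean_opinion_score(ratings):
    """Average ratings on the 1-5 scale above; 'e' (error) entries are excluded."""
    numeric = [int(r) for r in ratings if r != "e"]
    return sum(numeric) / len(numeric) if numeric else None


# Hypothetical ratings collected for one replay condition.
print(mean_opinion_score(["5", "4", "4", "e", "3"]))  # -> 4.0
```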
|
|
|
|