---
license: mit
task_categories:
- audio-classification
- audio-to-audio
tags:
- audio
- music
- source-separation
- musdb18
- stems
- active-segments
- cs229
- stanford
pretty_name: "MUSDB18 Active Stems - CS229 Project"
size_categories:
- 10K<n<100K
---
|
# MUSDB18 Active Stems Dataset - CS229 Project

This dataset contains active stem segments extracted from the MUSDB18 dataset for the Stanford CS229 Machine Learning course project on audio source separation.

## Dataset Description

This is a processed version of the MUSDB18 dataset containing only the active segments of each stem (drums, bass, vocals, and accompaniment) and of the mixture, designed to improve training efficiency for music source separation models.

## Key Features

- **Active Segment Detection**: Only segments where a stem carries significant RMS energy are kept (see Extraction Parameters below)
- **5 Stems**: mixture, drums, bass, vocals, accompaniment
- **Consistent Format**: 22.05 kHz sample rate, mono audio
- **Rich Metadata**: Detailed per-segment information and statistics in JSON files
|
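A quick way to sanity-check the advertised format on a local copy (the path below is illustrative, borrowed from the manual-loading example further down):

```python
import soundfile as sf

# Inspect a segment's header without loading the samples
# (path is illustrative; see "Dataset Structure" below)
info = sf.info("train/vocals/track_vocals_001.wav")
assert info.samplerate == 22050, "expected 22.05 kHz audio"
assert info.channels == 1, "expected mono audio"
print(f"{info.duration:.2f} s")
```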
|
## CS229 Project Context

This dataset was created as part of a Stanford CS229 course project focusing on:

- Music source separation using deep learning
- Comparison of different neural architectures (Conv-TasNet, etc.)
- Analysis of active vs. inactive audio segments in training

## Dataset Structure

```
extracted_stems/
├── train/              # Training split
│   ├── drums/          # Active drum segments
│   ├── bass/           # Active bass segments
│   ├── vocals/         # Active vocal segments
│   ├── accompaniment/  # Active accompaniment segments
│   └── mixture/        # Active mixture segments
├── test/               # Test split (same structure)
└── metadata/           # JSON metadata files
```
|
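A minimal sketch for taking stock of a local copy of this tree, counting the extracted segments per stem (assuming the `.wav` files used in the examples below):

```python
from pathlib import Path

root = Path("extracted_stems")
for split in ("train", "test"):
    for stem_dir in sorted((root / split).iterdir()):
        if stem_dir.is_dir():
            n = len(list(stem_dir.glob("*.wav")))
            print(f"{split}/{stem_dir.name}: {n} segments")
```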
|
## Quick Start

### Loading with the `datasets` library

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("cs229-audio-ml-project/musdb18-processed")

# Access training data
train_data = dataset["train"]
for item in train_data:
    audio = item["audio"]["array"]
    stem_type = item["stem_type"]
    track_name = item["track_name"]
```
|
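Each example also exposes the `stem_type` field, so you can restrict training to a single source with the standard `datasets` filtering API; for instance, to keep only the vocal segments:

```python
# stem_type takes the five values listed above:
# mixture, drums, bass, vocals, accompaniment
vocals = dataset["train"].filter(lambda ex: ex["stem_type"] == "vocals")
print(len(vocals), "vocal segments")
```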
|
### Manual loading

```python
import soundfile as sf
import json

# Load an audio segment
audio, sr = sf.read("train/vocals/track_vocals_001.wav")

# Load the split-level metadata
with open("metadata/train_metadata.json") as f:
    metadata = json.load(f)
```
|
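For supervised separation you typically need (mixture, target) pairs. Below is one possible way to assemble them, assuming mixture and stem segments of the same track follow the parallel numbering suggested above; this naming is hypothetical, and the metadata files remain the authoritative record of the segment mapping:

```python
from pathlib import Path
import soundfile as sf

def load_pair(track: str, index: int, stem: str = "vocals",
              root: Path = Path("train")):
    """Load one (mixture, target-stem) segment pair.

    Assumes parallel naming such as train/mixture/<track>_mixture_001.wav
    (hypothetical; check the metadata/ JSON files for the actual
    segment mapping).
    """
    mix, sr = sf.read(root / "mixture" / f"{track}_mixture_{index:03d}.wav")
    target, _ = sf.read(root / stem / f"{track}_{stem}_{index:03d}.wav")
    return mix, target, sr
```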
|
## Extraction Parameters

- **Segment Length**: 4.0 seconds
- **Hop Length**: 2.0 seconds (50% overlap)
- **Energy Threshold**: 0.01 RMS
- **Sample Rate**: 22,050 Hz
- **Minimum Duration**: 1.0 second
|
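For reference, here is a minimal sketch of the windowed-RMS criterion these parameters describe (an illustration under the stated assumptions, not the exact script used to build the dataset):

```python
import numpy as np

SR = 22050               # sample rate (Hz)
SEG_LEN = int(4.0 * SR)  # 4.0 s segment length
HOP_LEN = int(2.0 * SR)  # 2.0 s hop (50% overlap)
RMS_THRESHOLD = 0.01     # energy threshold

def active_segments(audio: np.ndarray):
    """Yield (start, end) sample indices of 4 s windows whose RMS
    energy meets the threshold; trailing partial windows (and the
    1.0 s minimum-duration rule) are omitted for brevity."""
    for start in range(0, len(audio) - SEG_LEN + 1, HOP_LEN):
        window = audio[start:start + SEG_LEN]
        rms = float(np.sqrt(np.mean(window ** 2)))
        if rms >= RMS_THRESHOLD:
            yield start, start + SEG_LEN
```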
|
## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{cs229_musdb18_active_stems,
  title={MUSDB18 Active Stems Dataset},
  author={CS229 Audio ML Project Team},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/cs229-audio-ml-project/musdb18-processed}
}
```

Citation for the underlying MUSDB18-HQ release:

```bibtex
@misc{musdb18,
  author = {Rafii, Zafar and Liutkus, Antoine and Stöter, Fabian-Robert and Mimilakis, Stylianos Ioannis and Bittner, Rachel},
  title  = {MUSDB18-HQ - an uncompressed version of MUSDB18},
  month  = {December},
  year   = {2019},
  doi    = {10.5281/zenodo.3338373},
  url    = {https://doi.org/10.5281/zenodo.3338373}
}
```
|
## Contact

For questions about this dataset or the CS229 project, please open an issue in this repository.

Created for Stanford CS229 - Machine Learning Course Project