
Streaming ASR Dataset

This dataset is designed for training real-time (streaming) ASR models, with a focus on chunk-based audio processing. It contains standardized audio segments from LibriSpeech dev-clean, prepared for streaming ASR applications.

Dataset Description

Dataset Summary

  • Source: LibriSpeech dev-clean
  • Total chunks: 2,703
  • Total duration: ~20 minutes (1,212.26 seconds)
  • Unique speakers: 40
  • Audio format: 16 kHz mono WAV
  • Language: English
  • Domain: Audiobooks (clean speech)

Dataset Structure

openwhisper/
├── chunks/          # Audio files (16kHz mono WAV)
├── transcripts/     # Text transcriptions
└── metadata/        # JSON files with detailed information
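
The file naming convention is not documented on this card. Assuming each chunk shares a filename stem across the three directories (e.g. chunks/XXXX.wav, transcripts/XXXX.txt, metadata/XXXX.json), a minimal sketch for pairing files in a local copy could look like this:

import json
from pathlib import Path

root = Path("openwhisper")

# Assumed layout: one file per chunk in each directory, sharing a stem,
# e.g. chunks/XXXX.wav, transcripts/XXXX.txt, metadata/XXXX.json.
for wav_path in sorted((root / "chunks").glob("*.wav")):
    txt_path = root / "transcripts" / (wav_path.stem + ".txt")
    json_path = root / "metadata" / (wav_path.stem + ".json")
    if not (txt_path.exists() and json_path.exists()):
        continue  # skip incomplete triples
    transcript = txt_path.read_text(encoding="utf-8").strip()
    meta = json.loads(json_path.read_text(encoding="utf-8"))
    print(wav_path.name, meta.get("speaker_id"), transcript[:60])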

Data Fields

Each sample consists of:

  1. Audio file (WAV)

    • 16 kHz sampling rate
    • Mono channel
    • 16-bit PCM format
  2. Transcript file (TXT)

    • Clean text transcription
    • Includes punctuation and casing
    • Aligned with audio chunks
  3. Metadata file (JSON)

    • speaker_id: Unique speaker identifier
    • chunk_id: Unique chunk identifier
    • start_time: Start time in original audio
    • end_time: End time in original audio
    • duration: Chunk duration in seconds
    • language: Language code (en)
    • noise_conditions: Audio quality label (clean)
    • original_file: Source file reference
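
As an illustration of how these pieces fit together, the sketch below loads one metadata JSON and cross-checks it against its paired WAV file. The file name example_chunk and the use of the soundfile library are assumptions, not part of the dataset.

import json

import soundfile as sf

# "example_chunk" is a hypothetical file name; substitute a real chunk ID.
with open("openwhisper/metadata/example_chunk.json") as f:
    meta = json.load(f)

# Fields documented above
print(meta["speaker_id"], meta["chunk_id"], meta["language"])
print("expected duration (s):", meta["duration"])
print("span in source file (s):", meta["start_time"], "->", meta["end_time"])

# Cross-check the paired audio: 16 kHz, mono, matching duration
audio, sr = sf.read("openwhisper/chunks/example_chunk.wav")
assert sr == 16000 and audio.ndim == 1
print("actual duration (s):", len(audio) / sr)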

Data Splits

This dataset contains only the dev-clean portion of LibriSpeech, processed into overlapping chunks suitable for streaming ASR training.

Dataset Creation

Preprocessing

  1. Audio standardization

    • Resampling to 16 kHz
    • Conversion to mono channel
    • Format conversion to WAV
  2. Chunking strategy

    • Fixed chunk duration with overlap
    • Natural pause boundary detection
    • Consistent chunk size for training stability
  3. Transcript processing

    • Alignment with audio chunks
    • Preservation of punctuation and casing
    • Clean text normalization
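
The exact chunk duration and overlap are not stated on this card. The sketch below only illustrates the fixed-duration-with-overlap idea using placeholder values (5 s chunks, 1 s overlap); it is not the preprocessing script used to build the dataset.

import numpy as np

def chunk_with_overlap(audio: np.ndarray, sr: int = 16000,
                       chunk_s: float = 5.0, overlap_s: float = 1.0):
    """Split a mono waveform into fixed-length, overlapping chunks.

    chunk_s and overlap_s are illustrative placeholders, not the values
    used to build this dataset.
    """
    chunk_len = int(chunk_s * sr)
    hop = chunk_len - int(overlap_s * sr)
    chunks = []
    for start in range(0, max(len(audio) - chunk_len, 0) + 1, hop):
        chunks.append((start / sr, audio[start:start + chunk_len]))
    return chunks  # list of (start_time_in_seconds, samples)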

Usage

Loading the Dataset

from datasets import load_dataset

dataset = load_dataset("orgho98/openwhisper")
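
For incremental access without downloading everything up front, the datasets library also supports streaming mode. The split name "train" below is an assumption; check the repository for the actual split name.

from datasets import load_dataset

# Iterate samples lazily instead of downloading the full dataset first.
streamed = load_dataset("orgho98/openwhisper", split="train", streaming=True)
for sample in streamed:
    print(sample.keys())  # inspect available fields
    break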

Training Example

# Example: iterate over audio/transcript pairs
# (adjust the split name if the repository uses a different one)
for sample in dataset["train"]:
    audio = sample["audio"]
    transcript = sample["text"]
    metadata = sample["metadata"]

    # Process for streaming ASR training
    # ...
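
One common way to use these chunks for streaming training is to feed each waveform to the model in small fixed-size blocks. The block size (320 ms) and the model_step call below are hypothetical placeholders meant only to illustrate the pattern, not part of this dataset.

import numpy as np

def stream_blocks(audio: np.ndarray, sr: int = 16000, block_s: float = 0.32):
    """Yield consecutive fixed-size blocks, zero-padding the final one."""
    block = int(block_s * sr)
    for start in range(0, len(audio), block):
        piece = audio[start:start + block]
        if len(piece) < block:
            piece = np.pad(piece, (0, block - len(piece)))
        yield piece

# for block in stream_blocks(waveform):
#     model_step(block)  # hypothetical incremental model update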

License

This dataset is released under the MIT License. The source audio is derived from LibriSpeech, which is distributed under the CC BY 4.0 license.

Citation

If you use this dataset, please cite:

@misc{openwhisper2024,
  title={Streaming ASR Dataset},
  author={Automagically AI},
  year={2024},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/orgho98/openwhisper}}
}

Limitations

  • Limited to clean speech from audiobooks
  • Single language (English)
  • May not fully reflect real-world streaming conditions (e.g., background noise, spontaneous speech, varied microphones)

Additional Information

  • Curated by: Automagically AI
  • License: MIT
  • Version: 1.0.0