---
dataset_info:
  features:
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
    - name: sampling_rate
      dtype: int64
    - name: transcript
      dtype: string
  splits:
    - name: train
      num_bytes: 26537763371.78
      num_examples: 185402
    - name: validation
      num_bytes: 2948998696.305
      num_examples: 20601
    - name: test
      num_bytes: 7390220553.37
      num_examples: 51501
  download_size: 29378895903
  dataset_size: 36876982621.455
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
task_categories:
  - automatic-speech-recognition
tags:
  - paralinguistic
  - laughter
pretty_name: switchboard-speechlaugh
size_categories:
  - 100K<n<1M
---

## Corpus Overview

A preprocessed version of the Switchboard Corpus. The corpus audio has been upsampled to 16 kHz and separated into per-speaker channels, and the transcripts have been processed with special handling of paralinguistic events, particularly laughter and speech-laughs. The dataset has been prepared for the ASR task. The original corpus, credited to its original authors, is available at: https://catalog.ldc.upenn.edu/LDC97S62
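
The audio preprocessing described above could be reproduced along the following lines. This is a minimal sketch, assuming the source recordings are available as two-channel WAV files (the original corpus ships 8 kHz SPH files, which would first need conversion); the file names are purely illustrative.

```python
import librosa
import soundfile as sf

def preprocess(path, target_sr=16000):
    # Load the two-channel conversation without resampling (Switchboard audio is 8 kHz).
    audio, sr = librosa.load(path, sr=None, mono=False)
    # Upsample each speaker channel to 16 kHz independently.
    channels = [librosa.resample(ch, orig_sr=sr, target_sr=target_sr) for ch in audio]
    return channels, target_sr

channels, sr = preprocess("sw02001.wav")  # illustrative file name
for i, channel in enumerate(channels):
    sf.write(f"sw02001_channel{i}.wav", channel, sr)
```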

The original dataset can also be downloaded from:

https://drive.google.com/drive/folders/1YhpWgzCwc4cVhYJPcjuLWy84s-L0hJbf

or using gdown:

```bash
gdown 1YhpWgzCwc4cVhYJPcjuLWy84s-L0hJbf -O /path/to/dataset/switchboard
```

## Corpus Structure

The dataset has been split into train, test, and validation sets with a 70/20/10 ratio, as summarized below:

```
Train Dataset (70%): Dataset({
    features: ['audio', 'sampling_rate', 'transcript'],
    num_rows: 185402
})
Validation Dataset (10%): Dataset({
    features: ['audio', 'sampling_rate', 'transcript'],
    num_rows: 20601
})
Test Dataset (20%): Dataset({
    features: ['audio', 'sampling_rate', 'transcript'],
    num_rows: 51501
})
```

An example of the content of this dataset is shown below.
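
As a minimal sketch, the splits can be loaded from the Hugging Face Hub and one record inspected as follows; the repository id used here is an assumption, so adjust it to the actual id of this dataset.

```python
from datasets import load_dataset

# Assumed repository id; replace with the actual Hub id of this dataset.
swb = load_dataset("hhoangphuoc/switchboard")

sample = swb["train"][0]
print(sample["transcript"])              # transcript string with paralinguistic annotations
print(sample["sampling_rate"])           # 16000
print(sample["audio"]["sampling_rate"])  # 16000
print(sample["audio"]["array"][:10])     # decoded waveform as a NumPy array
```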


## Specifications

As an additional overview for laughter and speech-laugh recognition tasks, the total number of laughter and speech-laugh events in each split is:

| Split | Laughter | Speech-laugh |
| --- | ---: | ---: |
| Train (swb_train) | 16044 | 9586 |
| Validation (swb_val) | 1845 | 1133 |
| Test (swb_test) | 4335 | 2775 |
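
Counts like these could be recomputed from the transcripts with a short script such as the sketch below. The marker strings `[LAUGHTER]` and `[SPEECH_LAUGH]` are illustrative placeholders, not necessarily the annotations used in this dataset's transcripts; substitute whatever markers the transcripts actually contain, along with the assumed repository id.

```python
from datasets import load_dataset

# Assumed repository id and event markers; both are placeholders.
swb = load_dataset("hhoangphuoc/switchboard")
MARKERS = {"laughter": "[LAUGHTER]", "speechlaugh": "[SPEECH_LAUGH]"}

def count_events(split):
    # Count occurrences of each marker across all transcripts in a split.
    counts = {name: 0 for name in MARKERS}
    for transcript in split["transcript"]:
        for name, marker in MARKERS.items():
            counts[name] += transcript.count(marker)
    return counts

for split_name in ("train", "validation", "test"):
    print(split_name, count_events(swb[split_name]))
```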