---
dataset_info:
  features:
  - name: source
    dtype: string
  - name: filename
    dtype: string
  - name: order_index
    dtype: string
  - name: link
    dtype: string
  - name: transcript_whisper
    dtype: string
  - name: audio
    dtype: audio
  - name: c50
    dtype: float32
  - name: snr
    dtype: float32
  - name: speech_duration
    dtype: float32
  - name: emotion_emotion2vec
    dtype: string
  - name: transcript_sensevoice
    dtype: string
  - name: emotion_sensevoice
    sequence: string
  - name: event_sensevoice
    sequence: string
  splits:
  - name: train
    num_bytes: 507480914420.836
    num_examples: 2229346
  download_size: 589102038968
  dataset_size: 507480914420.836
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- automatic-speech-recognition
- audio-classification
language:
- zh
- yue
---
# Cantonese Radio Pseudo-Transcription Dataset
- Contains 14k hours of audio sourced from Archive.org
- Columns (a loading sketch follows the list)
  - `order_index`: Represents the order of the audio clip relative to other clips from the same `filename`
  - `link`: Link to the original full audio
  - `transcript_whisper`: Transcribed using `Scrya/whisper-large-v2-cantonese` with `alvanlii/whisper-small-cantonese` for speculative decoding (sketched further below)
  - `transcript_sensevoice`: Transcribed using `FunAudioLLM/SenseVoiceSmall`
    - used OpenCC to convert to Traditional Chinese
    - isolated event tags into `event_sensevoice`
    - isolated emotion tags into `emotion_sensevoice`
  - `snr`: Signal-to-noise ratio, extracted from `ylacombe/brouhaha-best`
  - `c50`: Speech clarity, extracted from `ylacombe/brouhaha-best`
  - `emotion_emotion2vec`: Emotion, extracted from `emotion2vec/emotion2vec_plus_large`
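A minimal sketch of loading the dataset and reading the columns above with 🤗 `datasets`; the repo ID below is a placeholder, and streaming avoids pulling the full ~590 GB download up front:

```python
from datasets import load_dataset

# Placeholder repo ID -- replace with the actual path of this dataset on the Hub.
ds = load_dataset("your-namespace/cantonese-radio", split="train", streaming=True)

for row in ds:
    print(row["filename"], row["order_index"], row["link"])
    print(row["transcript_whisper"])
    print(row["transcript_sensevoice"], row["emotion_sensevoice"], row["event_sensevoice"])
    print(row["snr"], row["c50"], row["speech_duration"], row["emotion_emotion2vec"])
    # row["audio"] decodes to {"array": ..., "sampling_rate": ...} when accessed.
    break
```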
- Note that `id` does not reflect the ordering of the audio within the same video
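For reference, a speculative-decoding setup along the lines used for `transcript_whisper` can be reproduced with `transformers` roughly as below. This is a sketch of the general technique, not necessarily the exact configuration or decoding parameters used to build this dataset:

```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# Main Cantonese Whisper model plus the smaller checkpoint as the draft/assistant model.
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    "Scrya/whisper-large-v2-cantonese", torch_dtype=dtype
).to(device)
assistant = AutoModelForSpeechSeq2Seq.from_pretrained(
    "alvanlii/whisper-small-cantonese", torch_dtype=dtype
).to(device)
processor = AutoProcessor.from_pretrained("Scrya/whisper-large-v2-cantonese")

asr = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    torch_dtype=dtype,
    device=device,
    # Speculative decoding: the assistant drafts tokens, the large model verifies them.
    generate_kwargs={"assistant_model": assistant},
)

print(asr("clip.wav")["text"])  # clip.wav is a placeholder audio file
```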
- Processing
  - The full audio is split using WhisperX with `Scrya/whisper-large-v2-cantonese`
    - it is split into <30s chunks and according to speakers
  - No filtering or additional audio processing was done for this dataset
    - Filtering is recommended for your own use (see the sketch below)
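Since no filtering was applied, a simple pass over `snr`, `c50`, and `speech_duration` is a reasonable starting point. The repo ID and thresholds below are placeholders; tune them for your own use case:

```python
from datasets import load_dataset

# Placeholder repo ID -- replace with the actual path of this dataset on the Hub.
ds = load_dataset("your-namespace/cantonese-radio", split="train", streaming=True)

# Example cut-offs (assumptions, not recommendations from the dataset authors).
MIN_SNR = 15.0       # keep reasonably clean speech
MIN_C50 = 30.0       # keep clips with decent clarity
MIN_DURATION = 1.0   # drop very short fragments (seconds)

filtered = ds.filter(
    lambda row: row["snr"] >= MIN_SNR
    and row["c50"] >= MIN_C50
    and row["speech_duration"] >= MIN_DURATION
)

for row in filtered:
    print(row["transcript_whisper"])
    break
```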