Speaker overlap across train/validation/test splits in ASR subsets?
Hello, and thank you for releasing the WaxalNLP dataset; it is a very valuable resource.
I am currently working with the ASR subsets (e.g. `ach_asr`) and wanted to clarify something regarding the dataset splits.
For each language individually, I:
- Loaded the `train`, `validation`, and `test` splits.
- Extracted the `speaker_id` field from each split.
- Constructed sets of unique speaker IDs per split.
- Computed intersections between the splits to check for speaker overlap.
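For reference, the overlap check boils down to the following minimal sketch; the actual loading of the splits (e.g. via `datasets.load_dataset`) is replaced here with toy lists standing in for the `speaker_id` columns:

```python
from itertools import combinations

def speaker_overlap(splits):
    """Given a mapping of split name -> list of speaker IDs,
    return unique-speaker counts per split and pairwise
    intersection sizes between splits."""
    unique = {name: set(ids) for name, ids in splits.items()}
    counts = {name: len(s) for name, s in unique.items()}
    overlaps = {
        (a, b): len(unique[a] & unique[b])
        for a, b in combinations(unique, 2)
    }
    return counts, overlaps

# Toy stand-ins for the speaker_id column of each split:
splits = {
    "train": ["s1", "s2", "s3"],
    "validation": ["s2", "s4"],
    "test": ["s3", "s4"],
}
counts, overlaps = speaker_overlap(splits)
# counts   -> {'train': 3, 'validation': 2, 'test': 2}
# overlaps -> {('train', 'validation'): 1, ('train', 'test'): 1,
#              ('validation', 'test'): 1}
```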
For example, `ach_asr` yields the results below:
- Train speakers: 322
- Validation speakers: 199
- Test speakers: 194
Overlap counts:
- Train ∩ Validation: 194
- Train ∩ Test: 192
- Validation ∩ Test: 150
Based on this, it appears that many speakers occur in multiple splits.
I wanted to ask:
- Is speaker overlap across splits intentional?
- Were the splits designed at the utterance level rather than the speaker level?
- For speaker-independent ASR evaluation, would it be appropriate to re-split the data by `speaker_id`?
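To make the last question concrete, here is a minimal, pure-Python sketch of what I have in mind by a speaker-level re-split (toy records; the `split_by_speaker` helper and the record shape are my own illustration, not anything from the dataset):

```python
import random

def split_by_speaker(records, train_frac=0.8, seed=0):
    """Assign whole speakers (not individual utterances) to train
    or test, so no speaker appears in both partitions."""
    speakers = sorted({r["speaker_id"] for r in records})
    rng = random.Random(seed)
    rng.shuffle(speakers)
    n_train = int(len(speakers) * train_frac)
    train_speakers = set(speakers[:n_train])
    train = [r for r in records if r["speaker_id"] in train_speakers]
    test = [r for r in records if r["speaker_id"] not in train_speakers]
    return train, test

# Toy data: 10 utterances from 5 speakers.
records = [{"speaker_id": f"s{i % 5}", "utt": f"u{i}"} for i in range(10)]
train, test = split_by_speaker(records)
# The resulting partitions are speaker-disjoint by construction.
assert not ({r["speaker_id"] for r in train} & {r["speaker_id"] for r in test})
```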
I just want to make sure I understand the intended usage and evaluation protocol before proceeding.
Thank you again for making this dataset available.
EDIT: You can check out the script I am working with here: https://colab.research.google.com/drive/1Zpvjl3DBBEi22HTqbUcaEZWvcA0KmJRk?usp=sharing
Hi Olaolu,
Thanks for trying out the dataset! Regarding your questions:
- We did not segment the splits by speaker. Speech was elicited using images, so we tried to have each set contain unique speech, since people might use similar words for a given image.
- Splits were at the topic level. Ideally, each split should have speakers discussing varied topics.
- It would be interesting to train a model that evaluates against an unseen test set of speakers. Please share your results once you have them!
All the best with your research!
Perry