Speaker overlap across train/validation/test splits in ASR subsets?

#14
by olaolugbenle - opened

Hello, and thank you for releasing the WaxalNLP dataset; it is a very valuable resource.

I am currently working with the ASR subsets (ach_asr) and wanted to clarify something regarding the dataset splits.

For each language individually, I:

  1. Loaded the train, validation, and test splits.
  2. Extracted the speaker_id field from each split.
  3. Constructed sets of unique speaker IDs per split.
  4. Computed intersections between the splits to check for speaker overlap.
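For reference, the steps above can be sketched in plain Python. The toy speaker lists below are made up for illustration; with the real dataset you would first load each split (e.g. via `datasets.load_dataset`) and pull its `speaker_id` column:

```python
from itertools import combinations

def speaker_overlap(splits):
    """Given {split_name: iterable of speaker IDs}, return unique-speaker
    counts per split and pairwise intersection sizes."""
    ids = {name: set(speakers) for name, speakers in splits.items()}
    counts = {name: len(s) for name, s in ids.items()}
    overlaps = {(a, b): len(ids[a] & ids[b]) for a, b in combinations(ids, 2)}
    return counts, overlaps

# Toy data standing in for the real speaker_id columns.
splits = {
    "train": ["s1", "s2", "s3"],
    "validation": ["s2", "s3", "s4"],
    "test": ["s3", "s5"],
}
counts, overlaps = speaker_overlap(splits)
print(counts)    # {'train': 3, 'validation': 3, 'test': 2}
print(overlaps)  # {('train', 'validation'): 2, ('train', 'test'): 1, ('validation', 'test'): 1}
```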

For example, ach_asr yields the results below:

  • Train speakers: 322
  • Validation speakers: 199
  • Test speakers: 194

Overlap counts:

  • Train ∩ Validation: 194
  • Train ∩ Test: 192
  • Validation ∩ Test: 150

Based on this, it appears that many speakers occur in multiple splits.

I wanted to ask:

  • Is speaker overlap across splits intentional?
  • Were the splits designed at the utterance level rather than the speaker level?
  • For speaker-independent ASR evaluation, would it be appropriate to re-split the data by speaker_id?
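On the last point, a speaker-level re-split can be sketched roughly as follows. The function, split fractions, and toy examples here are purely illustrative assumptions, not part of the dataset's tooling; the idea is simply to assign each `speaker_id` to exactly one split before partitioning the utterances:

```python
import random

def speaker_level_split(examples, train_frac=0.8, val_frac=0.1, seed=0):
    """Re-split utterances so each speaker appears in exactly one split.

    `examples` is a list of dicts, each containing a 'speaker_id' key.
    Returns {'train': [...], 'validation': [...], 'test': [...]}.
    """
    # Assign speakers (not utterances) to splits, so no speaker leaks across.
    speakers = sorted({ex["speaker_id"] for ex in examples})
    rng = random.Random(seed)
    rng.shuffle(speakers)
    n_train = int(len(speakers) * train_frac)
    n_val = int(len(speakers) * val_frac)
    assignment = {}
    for i, spk in enumerate(speakers):
        if i < n_train:
            assignment[spk] = "train"
        elif i < n_train + n_val:
            assignment[spk] = "validation"
        else:
            assignment[spk] = "test"
    out = {"train": [], "validation": [], "test": []}
    for ex in examples:
        out[assignment[ex["speaker_id"]]].append(ex)
    return out

# Toy usage: 10 speakers, 3 utterances each.
examples = [{"speaker_id": f"s{i}", "utt": j} for i in range(10) for j in range(3)]
resplit = speaker_level_split(examples)
```

Note that splitting by speaker count rather than utterance count can make the split sizes uneven if speakers contribute very different numbers of utterances.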

I just want to make sure I understand the intended usage and evaluation protocol before proceeding.

Thank you again for making this dataset available.

EDIT: You can check out the script I am working with here: https://colab.research.google.com/drive/1Zpvjl3DBBEi22HTqbUcaEZWvcA0KmJRk?usp=sharing

Hi Olaolu,

Thanks for trying out the dataset! Regarding your questions:

  1. We did not segment the splits by speaker. Speech was elicited using images, so we tried to have each split contain unique speech, since people might use similar words for a given image.
  2. Splits were made at the topic level. Ideally, each split should have speakers discussing varied topics.
  3. It would be interesting to train a model that evaluates against an unseen test set of speakers. Please share your results once you have them!

All the best with your research!

Perry

perrynelson changed discussion status to closed
