Whisper Basque fine-tuning (collection)

This collection of 11 items contains Whisper models fine-tuned on Basque speech datasets. More info: https://xezpeleta.github.io/whisper-euskaraz/
This model is a fine-tuned version of openai/whisper-tiny on the asierhv/composite_corpus_eu_v2.1 dataset. It achieves the following results on the evaluation set:

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
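For transcription, the model can be loaded through the transformers ASR pipeline. The sketch below is illustrative only: the checkpoint identifier "xezpeleta/whisper-tiny-eu" is a hypothetical placeholder not given on this card, so substitute the actual repository ID.

```python
# Minimal inference sketch for a Whisper checkpoint fine-tuned on Basque.
# The model ID below is a hypothetical placeholder; replace it with the
# actual repository ID of this fine-tuned model.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="xezpeleta/whisper-tiny-eu",  # hypothetical placeholder ID
)

# Force Basque transcription (rather than translation or language auto-detection).
result = asr(
    "sample.wav",
    generate_kwargs={"language": "basque", "task": "transcribe"},
)
print(result["text"])
```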
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

### Training results
| Training Loss | Epoch | Step  | Validation Loss | WER (%) |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.586         | 0.1   | 1000  | 0.6249          | 34.1639 |
| 0.3145        | 0.2   | 2000  | 0.5048          | 25.2591 |
| 0.225         | 0.3   | 3000  | 0.4839          | 22.0557 |
| 0.3003        | 0.4   | 4000  | 0.4540          | 20.3072 |
| 0.132         | 0.5   | 5000  | 0.4574          | 19.0146 |
| 0.1588        | 0.6   | 6000  | 0.4380          | 17.8219 |
| 0.1841        | 0.7   | 7000  | 0.4395          | 16.6667 |
| 0.143         | 0.8   | 8000  | 0.3719          | 15.4490 |
| 0.0967        | 0.9   | 9000  | 0.3685          | 15.1368 |
| 0.1059        | 1.0   | 10000 | 0.3719          | 14.8495 |
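The WER column reports word error rate as a percentage. As a minimal sketch, a score of this kind can be computed with the Hugging Face `evaluate` library; the reference and prediction strings below are made-up examples, not drawn from the actual evaluation set.

```python
# Sketch of computing word error rate (WER) as reported in the table above,
# using the Hugging Face `evaluate` library. The sentences are made-up
# Basque examples, not taken from the evaluation set.
import evaluate

wer_metric = evaluate.load("wer")

references = ["kaixo mundua", "eguraldi ona dago gaur"]
predictions = ["kaixo mundua", "eguraldi ona dago"]

# `compute` returns a fraction; multiply by 100 to match the percentages in the table.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}%")
```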