# whisper-tiny
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set:
- Loss: 0.5900
- WER Ortho: 0.3103 (word error rate on the raw, orthographic transcripts)
- WER: 0.3103 (word error rate after text normalization)
## Model description
More information needed
## Intended uses & limitations
More information needed
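Pending more details, here is a minimal inference sketch. It assumes the checkpoint is published under the repository id `Gwenn-LR/whisper-tiny` and uses the standard `transformers` automatic-speech-recognition pipeline, which supports Whisper checkpoints:

```python
# Minimal inference sketch; the repository id below is an assumption
# based on this card's location and may need to be replaced.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Gwenn-LR/whisper-tiny",
)

# Transcribe a local audio file; the pipeline decodes and resamples it
# via ffmpeg to the sampling rate the feature extractor expects.
result = asr("sample.wav")
print(result["text"])
```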
## Training and evaluation data
More information needed
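The dataset named above can be inspected with the `datasets` library. A sketch follows; the `en-US` configuration is an assumption, since the card does not state which of the dataset's 14 language subsets was used:

```python
# Sketch for loading the dataset named above; the "en-US" configuration
# is an assumption, since the card does not state which subset was used.
from datasets import load_dataset, Audio

minds = load_dataset("PolyAI/minds14", name="en-US", split="train")

# Whisper's feature extractor expects 16 kHz audio; MINDS-14 ships at 8 kHz.
minds = minds.cast_column("audio", Audio(sampling_rate=16_000))
print(minds[0]["transcription"])
```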
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch (AdamW) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 500
- training_steps: 500
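A sketch of how these settings map onto `transformers`' `Seq2SeqTrainingArguments`; the output directory and the evaluation cadence (every 50 steps, inferred from the results table below) are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny",        # assumed; not stated in the card
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",              # AdamW with betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=500,
    max_steps=500,
    eval_strategy="steps",            # assumed from the 50-step eval cadence below
    eval_steps=50,
)
```

Note that with `lr_scheduler_warmup_steps` equal to `training_steps`, the `constant_with_warmup` schedule spends the entire run ramping the learning rate up and never reaches its constant phase.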
### Training results
| Training Loss | Epoch   | Step | Validation Loss | WER Ortho | WER    |
|---------------|---------|------|-----------------|-----------|--------|
| 1.7824        | 1.7857  | 50   | 1.0732          | 0.4565    | 0.4565 |
| 0.3528        | 3.5714  | 100  | 0.4932          | 0.3745    | 0.3745 |
| 0.1313        | 5.3571  | 150  | 0.5215          | 0.3430    | 0.3430 |
| 0.0350        | 7.1429  | 200  | 0.5468          | 0.3387    | 0.3387 |
| 0.0103        | 8.9286  | 250  | 0.5900          | 0.3103    | 0.3103 |
| 0.0085        | 10.7143 | 300  | 0.6345          | 0.3307    | 0.3307 |
| 0.0090        | 12.5    | 350  | 0.6771          | 0.3418    | 0.3418 |
| 0.0137        | 14.2857 | 400  | 0.6456          | 0.3374    | 0.3374 |
| 0.0138        | 16.0714 | 450  | 0.6171          | 0.3294    | 0.3294 |
| 0.0151        | 17.8571 | 500  | 0.7379          | 0.4312    | 0.4312 |
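The headline metrics at the top of the card match the step-250 row, which suggests the checkpoint with the best WER was the one kept. The WER columns can be reproduced with the `evaluate` library; a minimal sketch with placeholder strings:

```python
import evaluate

wer_metric = evaluate.load("wer")

# Placeholder strings; in practice these come from the model's
# transcriptions and the MINDS-14 reference transcripts.
predictions = ["i would like to pay my bill"]
references = ["i would like to pay my electricity bill"]

wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # fraction of word errors; 0.1250 for this example
```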
### Framework versions
- Transformers 4.49.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0