Whisper Small Hi - Sanchit Gandhi

This model is a fine-tuned version of nurzhanit/whisper-enhanced-ml on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0003
  • WER: 22.3549
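The WER above is the word error rate: the word-level edit distance between the reference and predicted transcripts, normalized by the reference length. A minimal, self-contained sketch of the metric (the example strings are hypothetical, not drawn from the evaluation set):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit-distance table over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution in a four-word reference -> 0.25 (25% WER)
print(wer("the cat sat down", "the cat sat up"))
```

In practice the training script likely used a library metric (e.g. `evaluate`'s `wer`), but the computation is equivalent.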

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • training_steps: 500
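These hyperparameters map directly onto `Seq2SeqTrainingArguments`. A hedged reconstruction, not the exact training script: `output_dir` and the evaluation cadence are assumptions (the 50-step cadence is inferred from the results table below), and the Adam betas/epsilon shown in the list are the Trainer defaults.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-hi",      # assumed, not stated in the card
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    max_steps=500,
    evaluation_strategy="steps",          # assumed from the 50-step eval rows
    eval_steps=50,
    predict_with_generate=True,           # assumed; needed to compute WER on generated text
    # Adam betas=(0.9, 0.999), epsilon=1e-8 are the defaults, matching the list above
)
```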

Training results

| Training Loss | Epoch | Step | Validation Loss | WER     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0035        | 10.0  | 50   | 0.0035          | 22.4403 |
| 0.0088        | 20.0  | 100  | 0.0033          | 22.3976 |
| 0.0022        | 30.0  | 150  | 0.0013          | 22.3549 |
| 0.0007        | 40.0  | 200  | 0.0006          | 22.3549 |
| 0.0005        | 50.0  | 250  | 0.0004          | 22.3549 |
| 0.0004        | 60.0  | 300  | 0.0004          | 22.3549 |
| 0.0003        | 70.0  | 350  | 0.0003          | 22.3549 |
| 0.0003        | 80.0  | 400  | 0.0003          | 22.3549 |
| 0.0003        | 90.0  | 450  | 0.0003          | 22.3549 |
| 0.0003        | 100.0 | 500  | 0.0003          | 22.3549 |

Framework versions

  • Transformers 4.40.0
  • Pytorch 2.5.0+cu124
  • Datasets 3.0.2
  • Tokenizers 0.19.1
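The checkpoint can be loaded for transcription with the standard `pipeline` API. A usage sketch: `"audio.wav"` is a placeholder path, and `chunk_length_s=30` is an assumption for long-form audio, not something stated in the card.

```python
from transformers import pipeline

# Load this checkpoint for Hindi automatic speech recognition
asr = pipeline(
    "automatic-speech-recognition",
    model="nurzhanit/whisper-enhanced-ml",
    chunk_length_s=30,  # assumed; enables chunked long-form transcription
)

result = asr("audio.wav")  # placeholder path to a local audio file
print(result["text"])
```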
Model stats

  • Downloads last month: 10,521
  • Model size: 242M params (Safetensors)
  • Tensor type: F32