
Whisper Turbo ko

This model is a fine-tuned version of openai/whisper-large-v3-turbo on a custom dataset. It achieves the following results on the evaluation set (see the usage sketch after the results):

  • Loss: 0.0725
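
Since this repository contains a PEFT adapter rather than a full checkpoint, it has to be loaded on top of the base model. Below is a minimal inference sketch, assuming the adapter repo id nomnoos37/stt-turbo-0102-v1.3 and Korean audio (the target language is inferred from the "ko" in the model name, not stated in the card):

```python
import numpy as np
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base_id = "openai/whisper-large-v3-turbo"
adapter_id = "nomnoos37/stt-turbo-0102-v1.3"  # this adapter

processor = WhisperProcessor.from_pretrained(base_id)
model = WhisperForConditionalGeneration.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# Replace with real 16 kHz mono audio; one second of silence keeps the sketch runnable.
audio = np.zeros(16_000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    generated_ids = model.generate(
        input_features=inputs.input_features,
        language="ko",  # assumption: Korean, inferred from the model name
        task="transcribe",
    )
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```

For LoRA-style adapters, the weights can also be folded into the base model with model.merge_and_unload(), which removes the peft dependency at inference time.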

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.001
  • train_batch_size: 64
  • eval_batch_size: 256
  • seed: 42
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 200
  • training_steps: 1000
  • mixed_precision_training: Native AMP
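
For reference, these values map onto transformers' Seq2SeqTrainingArguments roughly as follows. This is a reconstruction from the list above, not the original training script; output_dir is a placeholder, and the batch sizes are assumed to be per device:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-turbo-ko",  # hypothetical; the real output path is not stated
    learning_rate=1e-3,
    per_device_train_batch_size=64,   # assumes train_batch_size above is per device
    per_device_eval_batch_size=256,
    seed=42,
    optim="adamw_torch",              # AdamW with betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    warmup_steps=200,
    max_steps=1000,
    fp16=True,                        # Native AMP mixed-precision training
)
```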

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7644 | 0.0625 | 10 | 1.3650 |
| 0.6675 | 0.125 | 20 | 1.1146 |
| 0.3476 | 0.1875 | 30 | 0.6689 |
| 0.2057 | 0.25 | 40 | 0.5514 |
| 0.1482 | 0.3125 | 50 | 0.4671 |
| 0.1417 | 0.375 | 60 | 0.4373 |
| 0.1271 | 0.4375 | 70 | 0.3908 |
| 0.1216 | 0.5 | 80 | 0.3624 |
| 0.0983 | 0.5625 | 90 | 0.3465 |
| 0.0966 | 0.625 | 100 | 0.3262 |
| 0.0907 | 0.6875 | 110 | 0.3118 |
| 0.0909 | 0.75 | 120 | 0.2852 |
| 0.0903 | 0.8125 | 130 | 0.2766 |
| 0.0783 | 0.875 | 140 | 0.2729 |
| 0.0761 | 0.9375 | 150 | 0.2519 |
| 0.074 | 1.0 | 160 | 0.2351 |
| 0.0675 | 1.0625 | 170 | 0.2196 |
| 0.0784 | 1.125 | 180 | 0.1990 |
| 0.057 | 1.1875 | 190 | 0.1988 |
| 0.0577 | 1.25 | 200 | 0.2089 |
| 0.0575 | 1.3125 | 210 | 0.2039 |
| 0.0653 | 1.375 | 220 | 0.1941 |
| 0.0681 | 1.4375 | 230 | 0.1894 |
| 0.0923 | 1.5 | 240 | 0.1880 |
| 0.0678 | 1.5625 | 250 | 0.1892 |
| 0.0685 | 1.625 | 260 | 0.1796 |
| 0.0643 | 1.6875 | 270 | 0.1687 |
| 0.0654 | 1.75 | 280 | 0.1668 |
| 0.0672 | 1.8125 | 290 | 0.1721 |
| 0.0692 | 1.875 | 300 | 0.1631 |
| 0.0728 | 1.9375 | 310 | 0.1602 |
| 0.0808 | 2.0 | 320 | 0.1884 |
| 0.0583 | 2.0625 | 330 | 0.1891 |
| 0.0531 | 2.125 | 340 | 0.1755 |
| 0.0647 | 2.1875 | 350 | 0.1793 |
| 0.0525 | 2.25 | 360 | 0.1651 |
| 0.0591 | 2.3125 | 370 | 0.1585 |
| 0.0488 | 2.375 | 380 | 0.1495 |
| 0.042 | 2.4375 | 390 | 0.1461 |
| 0.0491 | 2.5 | 400 | 0.1323 |
| 0.0472 | 2.5625 | 410 | 0.1390 |
| 0.05 | 2.625 | 420 | 0.1395 |
| 0.0377 | 2.6875 | 430 | 0.1424 |
| 0.0743 | 2.75 | 440 | 0.1334 |
| 0.0501 | 2.8125 | 450 | 0.1534 |
| 0.052 | 2.875 | 460 | 0.1567 |
| 0.0474 | 2.9375 | 470 | 0.1433 |
| 0.0545 | 3.0 | 480 | 0.1356 |
| 0.0296 | 3.0625 | 490 | 0.1285 |
| 0.0255 | 3.125 | 500 | 0.1257 |
| 0.0361 | 3.1875 | 510 | 0.1216 |
| 0.0355 | 3.25 | 520 | 0.1194 |
| 0.0294 | 3.3125 | 530 | 0.1193 |
| 0.0251 | 3.375 | 540 | 0.1170 |
| 0.0331 | 3.4375 | 550 | 0.1151 |
| 0.0322 | 3.5 | 560 | 0.1110 |
| 0.0347 | 3.5625 | 570 | 0.1105 |
| 0.0671 | 3.625 | 580 | 0.1559 |
| 0.0323 | 3.6875 | 590 | 0.1433 |
| 0.0349 | 3.75 | 600 | 0.1444 |
| 0.0344 | 3.8125 | 610 | 0.1398 |
| 0.0454 | 3.875 | 620 | 0.1438 |
| 0.0282 | 3.9375 | 630 | 0.1427 |
| 0.0323 | 4.0 | 640 | 0.1403 |
| 0.0258 | 4.0625 | 650 | 0.1361 |
| 0.0208 | 4.125 | 660 | 0.1350 |
| 0.0218 | 4.1875 | 670 | 0.1325 |
| 0.0174 | 4.25 | 680 | 0.1394 |
| 0.0238 | 4.3125 | 690 | 0.1333 |
| 0.0186 | 4.375 | 700 | 0.1335 |
| 0.0226 | 4.4375 | 710 | 0.1337 |
| 0.0229 | 4.5 | 720 | 0.1314 |
| 0.0234 | 4.5625 | 730 | 0.1289 |
| 0.0185 | 4.625 | 740 | 0.1254 |
| 0.086 | 4.6875 | 750 | 0.0999 |
| 0.0195 | 4.75 | 760 | 0.1028 |
| 0.02 | 4.8125 | 770 | 0.1015 |
| 0.0199 | 4.875 | 780 | 0.1024 |
| 0.0258 | 4.9375 | 790 | 0.0969 |
| 0.0196 | 5.0 | 800 | 0.0956 |
| 0.0145 | 5.0625 | 810 | 0.0925 |
| 0.0132 | 5.125 | 820 | 0.0924 |
| 0.0146 | 5.1875 | 830 | 0.0909 |
| 0.0124 | 5.25 | 840 | 0.0902 |
| 0.0151 | 5.3125 | 850 | 0.0899 |
| 0.0142 | 5.375 | 860 | 0.0890 |
| 0.0175 | 5.4375 | 870 | 0.0888 |
| 0.0596 | 5.5 | 880 | 0.0778 |
| 0.0147 | 5.5625 | 890 | 0.0768 |
| 0.0178 | 5.625 | 900 | 0.0759 |
| 0.0144 | 5.6875 | 910 | 0.0759 |
| 0.0131 | 5.75 | 920 | 0.0754 |
| 0.0114 | 5.8125 | 930 | 0.0742 |
| 0.0145 | 5.875 | 940 | 0.0735 |
| 0.0202 | 5.9375 | 950 | 0.0734 |
| 0.0146 | 6.0 | 960 | 0.0734 |
| 0.0109 | 6.0625 | 970 | 0.0729 |
| 0.0102 | 6.125 | 980 | 0.0727 |
| 0.0125 | 6.1875 | 990 | 0.0726 |
| 0.0099 | 6.25 | 1000 | 0.0725 |

Framework versions

  • PEFT 0.14.0
  • Transformers 4.47.1
  • Pytorch 2.5.1+cu124
  • Datasets 3.2.0
  • Tokenizers 0.21.0
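
A quick sanity check of a local environment against the pinned versions above (a sketch; newer versions will often work, but these are the ones used for training):

```python
# Print installed versions next to the ones recorded in this card.
import datasets, peft, tokenizers, torch, transformers

expected = {
    "peft": (peft, "0.14.0"),
    "transformers": (transformers, "4.47.1"),
    "torch": (torch, "2.5.1"),  # card records 2.5.1+cu124 (CUDA 12.4 build)
    "datasets": (datasets, "3.2.0"),
    "tokenizers": (tokenizers, "0.21.0"),
}
for name, (module, version) in expected.items():
    print(f"{name}: installed {module.__version__}, card used {version}")
```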