saurabhy27-outcomes committed on
Commit 3388588 · verified · 1 Parent(s): dda82e2

Model save
README.md ADDED
@@ -0,0 +1,83 @@
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-large-v3-common-n-medical-50-50
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# whisper-large-v3-common-n-medical-50-50

This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3193
- Wer: 5.2213

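The Wer figure above is the word error rate in percent: the word-level edit distance between hypothesis and reference transcripts, divided by the number of reference words. A minimal self-contained sketch (illustrative strings only, not the evaluation code actually used for this card):

```python
# Word error rate (WER): Levenshtein distance over words, divided by the
# number of reference words, reported as a percentage to match the
# "Wer: 5.2213" figure above. Example sentences are illustrative only.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub,                 # substitution (or match)
                           dp[i - 1][j] + 1,    # deletion
                           dp[i][j - 1] + 1)    # insertion
    return 100.0 * dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the patient was given aspirin",
          "the patient was given an aspirin"))  # one insertion over 5 words -> 20.0
```

Note that WER can exceed 100% when the hypothesis contains many insertions relative to a short reference.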
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- training_steps: 5000
- mixed_precision_training: Native AMP

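The hyperparameter list above corresponds roughly to a `Seq2SeqTrainingArguments` configuration along these lines. This is a hedged reconstruction, not the run's actual script: `output_dir`, the eval cadence, and `fp16=True` (inferred from "Native AMP") are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of training arguments matching the card's hyperparameter list.
# output_dir and the eval cadence are assumptions (the results table
# below evaluates every 250 steps, so eval_steps=250 is a plausible guess).
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-v3-common-n-medical-50-50",  # assumed
    learning_rate=5e-7,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=250,
    max_steps=5000,
    fp16=True,  # "Native AMP" mixed-precision training
    eval_strategy="steps",  # assumed
    eval_steps=250,         # assumed
)
```

With `max_steps` set, training stops at 5000 optimizer steps regardless of epoch count, which matches the results table ending at step 5000 (~1.94 epochs).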
### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 5.126         | 0.0969 | 250  | 0.3694          | 5.6601 |
| 4.367         | 0.1938 | 500  | 0.3586          | 5.8156 |
| 4.1514        | 0.2907 | 750  | 0.3511          | 5.8839 |
| 3.962         | 0.3876 | 1000 | 0.3450          | 5.7805 |
| 3.9038        | 0.4845 | 1250 | 0.3403          | 6.1746 |
| 3.8313        | 0.5814 | 1500 | 0.3359          | 5.9738 |
| 3.7778        | 0.6783 | 1750 | 0.3333          | 5.9218 |
| 3.7421        | 0.7752 | 2000 | 0.3306          | 6.1327 |
| 3.7367        | 0.8721 | 2250 | 0.3281          | 5.6561 |
| 3.6878        | 0.9690 | 2500 | 0.3257          | 5.5154 |
| 3.6769        | 1.0659 | 2750 | 0.3242          | 5.4803 |
| 3.6508        | 1.1628 | 3000 | 0.3235          | 5.4634 |
| 3.6292        | 1.2597 | 3250 | 0.3220          | 5.3512 |
| 3.6179        | 1.3566 | 3500 | 0.3210          | 5.2254 |
| 3.6032        | 1.4535 | 3750 | 0.3206          | 5.2207 |
| 3.5922        | 1.5504 | 4000 | 0.3201          | 5.3038 |
| 3.5743        | 1.6473 | 4250 | 0.3198          | 5.2633 |
| 3.5882        | 1.7442 | 4500 | 0.3198          | 5.2254 |
| 3.6021        | 1.8411 | 4750 | 0.3196          | 5.2186 |
| 3.5865        | 1.9380 | 5000 | 0.3193          | 5.2213 |

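With `lr_scheduler_type: linear`, `lr_scheduler_warmup_steps: 250`, and `training_steps: 5000`, the learning rate ramps linearly from 0 to the peak of 5e-07 over the first 250 steps, then decays linearly back to 0 at step 5000. A small sketch of that schedule (approximating the behavior of a linear warmup/decay scheduler, not the exact library implementation):

```python
# Linear warmup + linear decay learning-rate schedule, using this run's
# settings (peak_lr=5e-07, warmup=250 steps, total=5000 steps).
def linear_schedule(step: int, peak_lr: float = 5e-7,
                    warmup_steps: int = 250, total_steps: int = 5000) -> float:
    if step < warmup_steps:
        return peak_lr * step / warmup_steps  # warmup ramp from 0 to peak
    # linear decay from peak_lr at warmup_steps down to 0 at total_steps
    remaining = total_steps - step
    return peak_lr * max(0.0, remaining / (total_steps - warmup_steps))

print(linear_schedule(125))   # halfway through warmup -> 2.5e-07
print(linear_schedule(250))   # peak learning rate -> 5e-07
print(linear_schedule(5000))  # end of training -> 0.0
```

The very low peak of 5e-07 is typical for fine-tuning an already strong checkpoint like whisper-large-v3, where large updates would quickly erode the pretrained weights.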
### Framework versions

- Transformers 4.48.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 3.2.1.dev0
- Tokenizers 0.21.0
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:058f2da8036b83b87d85b87f189ec34d9328d9f4e51a14167b0641817bfacdd5
+ oid sha256:08b2319c033b2b72f29b2e7ed44cb00075731630e87b366dc2436c44dd33d736
  size 3219908024
runs/Dec19_08-38-49_b72483eab5b9/events.out.tfevents.1734605148.b72483eab5b9.6985.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d014c34a4745c4eae27097956d59a0d61136c9c9b92c256a31b12f930bf70c8d
- size 28054
+ oid sha256:ea966b8a671286eb2ec2047f79c59f52b5492eae7d31bee2ad10b8e3c6ffc120
+ size 28408