eeizenman committed
Commit 9ea6f8e · verified · 1 parent: f1ee498

End of training
README.md ADDED
@@ -0,0 +1,87 @@
+ ---
+ library_name: transformers
+ license: bsd-3-clause
+ base_model: MIT/ast-finetuned-audioset-10-10-0.4593
+ tags:
+ - generated_from_trainer
+ datasets:
+ - marsyas/gtzan
+ metrics:
+ - accuracy
+ model-index:
+ - name: ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
+   results:
+   - task:
+       name: Audio Classification
+       type: audio-classification
+     dataset:
+       name: GTZAN
+       type: marsyas/gtzan
+       config: all
+       split: train
+       args: all
+     metrics:
+     - name: Accuracy
+       type: accuracy
+       value: 0.88
+ ---
+ 
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+ 
+ # ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
+ 
+ This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.5067
+ - Accuracy: 0.88
+ 
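+ A minimal inference sketch using the `transformers` audio-classification pipeline; the repo id below is an assumption inferred from the model name and committer, so adjust it to wherever this checkpoint is actually hosted:
+ 
+ ```python
+ from transformers import pipeline
+ 
+ # Repo id is assumed from the model name above; replace with the real hub path.
+ classifier = pipeline(
+     "audio-classification",
+     model="eeizenman/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan",
+ )
+ 
+ # The pipeline resamples input audio to the AST feature extractor's 16 kHz rate.
+ predictions = classifier("path/to/clip.wav")
+ print(predictions)  # e.g. [{'label': 'blues', 'score': 0.97}, ...]
+ ```
+ 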
+ ## Model description
+ 
+ More information needed
+ 
+ ## Intended uses & limitations
+ 
+ More information needed
+ 
+ ## Training and evaluation data
+ 
+ More information needed
+ 
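+ As a sketch only (the card does not document the data preparation): GTZAN ships a single `train` split per the metadata above, so a typical setup holds out a test fraction and resamples to the AST feature extractor's 16 kHz rate. The 90/10 split and seed below are assumptions, not taken from this run:
+ 
+ ```python
+ from datasets import Audio, load_dataset
+ 
+ # GTZAN exposes one "train" split with config "all" (see the dataset metadata above).
+ gtzan = load_dataset("marsyas/gtzan", "all", split="train")
+ 
+ # Assumed 90/10 train/test split; the actual split used for this run is undocumented.
+ gtzan = gtzan.train_test_split(test_size=0.1, seed=42)
+ 
+ # Resample audio to 16 kHz, the sampling rate the AST feature extractor expects.
+ gtzan = gtzan.cast_column("audio", Audio(sampling_rate=16_000))
+ ```
+ 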
+ ## Training procedure
+ 
+ ### Training hyperparameters
+ 
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 4
+ - eval_batch_size: 4
+ - seed: 42
+ - optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 10
+ - mixed_precision_training: Native AMP
+ 
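+ These hyperparameters map directly onto `transformers.TrainingArguments`; a minimal sketch, where `output_dir` and the per-epoch evaluation/save cadence are assumptions (the cadence matches the per-epoch rows in the results table below):
+ 
+ ```python
+ from transformers import TrainingArguments
+ 
+ # Sketch reproducing the listed hyperparameters; output_dir and the epoch-level
+ # eval/save strategies are assumptions, not stated in the card.
+ training_args = TrainingArguments(
+     output_dir="ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan",
+     learning_rate=5e-5,
+     per_device_train_batch_size=4,
+     per_device_eval_batch_size=4,
+     seed=42,
+     optim="adamw_torch",        # AdamW; betas=(0.9, 0.999) and eps=1e-8 are the defaults
+     lr_scheduler_type="linear",
+     warmup_ratio=0.1,
+     num_train_epochs=10,
+     fp16=True,                  # Native AMP mixed precision
+     eval_strategy="epoch",
+     save_strategy="epoch",
+ )
+ ```
+ 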
+ ### Training results
+ 
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | 1.0433 | 1.0 | 225 | 0.9966 | 0.67 |
+ | 0.1742 | 2.0 | 450 | 1.1221 | 0.73 |
+ | 0.8632 | 3.0 | 675 | 0.9182 | 0.79 |
+ | 0.0054 | 4.0 | 900 | 0.9570 | 0.82 |
+ | 0.0002 | 5.0 | 1125 | 0.9579 | 0.80 |
+ | 0.0030 | 6.0 | 1350 | 0.5792 | 0.86 |
+ | 0.0001 | 7.0 | 1575 | 0.5325 | 0.89 |
+ | 0.0001 | 8.0 | 1800 | 0.5337 | 0.90 |
+ | 0.0001 | 9.0 | 2025 | 0.5120 | 0.89 |
+ | 0.0001 | 10.0 | 2250 | 0.5067 | 0.88 |
+ 
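+ The accuracy column is consistent with a standard argmax-over-logits metric hook; a sketch (the exact `compute_metrics` function used for this run is not part of the card):
+ 
+ ```python
+ import numpy as np
+ import evaluate
+ 
+ # Accuracy hook consistent with the metric reported above; an illustrative
+ # sketch, not the verbatim function from this training run.
+ accuracy = evaluate.load("accuracy")
+ 
+ def compute_metrics(eval_pred):
+     logits, labels = eval_pred
+     predictions = np.argmax(logits, axis=-1)
+     return accuracy.compute(predictions=predictions, references=labels)
+ ```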
+ 
+ ### Framework versions
+ 
+ - Transformers 4.50.0.dev0
+ - Pytorch 2.5.1+cu124
+ - Datasets 3.3.2
+ - Tokenizers 0.21.0
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:bf8bde45bd6dc0162b231546be433f1274fbf9aeb37caf761fa2c7b8c821af1b
+ oid sha256:3e858cf21c33a8808eada810a6232cab104be7457e7ff32730041d47557e416e
  size 344814656
runs/Feb23_16-38-33_b72e4e08c94b/events.out.tfevents.1740328739.b72e4e08c94b.3275.1 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:831de436a1e8d34822cc2f030e4dc52143c6387b3bac99d85362e92dbacb5c89
- size 103651
+ oid sha256:16fe45fc84c23fa56be1c45df34b49ca66f390bf4b466cb32a2717b73aa51d39
+ size 104005