Version3ASAP_FineTuningBERT_AugV12_k10_task1_organization_k10_k10_fold1
This model is a fine-tuned version of bert-base-uncased on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 1.3482
- Qwk: 0.4694
- Mse: 1.3477
- Rmse: 1.1609
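The Qwk, Mse, and Rmse figures above are standard metrics (quadratic weighted kappa, mean squared error, and its root). A minimal sketch of how they can be computed with scikit-learn, assuming integer-valued score labels and continuous model predictions rounded to the nearest score:

```python
# Minimal metric sketch; integer score labels are an assumption.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def score_metrics(predictions, labels):
    """Quadratic weighted kappa (Qwk), MSE, and RMSE for essay-style scores."""
    mse = mean_squared_error(labels, predictions)
    rmse = float(np.sqrt(mse))
    # Qwk expects discrete categories, so round continuous predictions first.
    qwk = cohen_kappa_score(
        np.rint(labels).astype(int),
        np.rint(predictions).astype(int),
        weights="quadratic",
    )
    return {"qwk": qwk, "mse": mse, "rmse": rmse}

print(score_metrics(np.array([1.2, 2.8, 3.1]), np.array([1, 3, 3])))
```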
Model description
More information needed
Intended uses & limitations
More information needed
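The card does not document usage. Since the checkpoint is a fine-tuned bert-base-uncased evaluated with regression-style metrics (Mse/Rmse), a hedged loading sketch might look as follows; the sequence-classification head and the interpretation of the output are assumptions, not stated in the card:

```python
# Hedged usage sketch: the task head type is an assumption (regression-style
# scoring is suggested by the Mse/Rmse metrics, but is not documented here).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "genki10/Version3ASAP_FineTuningBERT_AugV12_k10_task1_organization_k10_k10_fold1"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

text = "Example essay text to be scored."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits)  # raw score(s); interpretation depends on the training setup
```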
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
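These hyperparameters map directly onto Hugging Face TrainingArguments. A minimal sketch under the assumption of a standard Trainer setup (the output directory is a placeholder; per-epoch evaluation is inferred from the per-epoch rows in the results table below):

```python
# Sketch of a TrainingArguments configuration matching the listed hyperparameters.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert_asap_fold1",  # placeholder path, not from the card
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    eval_strategy="epoch",  # assumed from the per-epoch validation rows below
)
# These arguments would then be passed to a transformers.Trainer together with
# the model, datasets, and a compute_metrics function (none of which are
# specified by this card).
```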
Training results
Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
---|---|---|---|---|---|---|
No log | 1.0 | 3 | 6.8445 | 0.0 | 6.8421 | 2.6157 |
No log | 2.0 | 6 | 4.2982 | 0.0079 | 4.2962 | 2.0727 |
No log | 3.0 | 9 | 3.0719 | 0.0 | 3.0701 | 1.7522 |
No log | 4.0 | 12 | 2.2654 | 0.0509 | 2.2638 | 1.5046 |
No log | 5.0 | 15 | 1.5906 | 0.0211 | 1.5891 | 1.2606 |
No log | 6.0 | 18 | 1.2486 | 0.0 | 1.2472 | 1.1168 |
No log | 7.0 | 21 | 1.0013 | 0.0 | 1.0000 | 1.0000 |
No log | 8.0 | 24 | 0.8897 | 0.1661 | 0.8886 | 0.9426 |
No log | 9.0 | 27 | 0.8663 | 0.0749 | 0.8653 | 0.9302 |
No log | 10.0 | 30 | 0.8147 | 0.1312 | 0.8138 | 0.9021 |
No log | 11.0 | 33 | 1.1934 | 0.0429 | 1.1926 | 1.0921 |
No log | 12.0 | 36 | 1.2848 | 0.3013 | 1.2842 | 1.1332 |
No log | 13.0 | 39 | 0.9429 | 0.4196 | 0.9424 | 0.9708 |
No log | 14.0 | 42 | 1.5947 | 0.2706 | 1.5945 | 1.2627 |
No log | 15.0 | 45 | 1.1223 | 0.4601 | 1.1220 | 1.0592 |
No log | 16.0 | 48 | 0.6823 | 0.6325 | 0.6819 | 0.8258 |
No log | 17.0 | 51 | 2.2544 | 0.3233 | 2.2543 | 1.5014 |
No log | 18.0 | 54 | 1.4071 | 0.4862 | 1.4069 | 1.1861 |
No log | 19.0 | 57 | 0.5306 | 0.6917 | 0.5304 | 0.7283 |
No log | 20.0 | 60 | 1.8187 | 0.4227 | 1.8186 | 1.3486 |
No log | 21.0 | 63 | 0.8853 | 0.6124 | 0.8852 | 0.9409 |
No log | 22.0 | 66 | 0.6079 | 0.6486 | 0.6076 | 0.7795 |
No log | 23.0 | 69 | 1.3528 | 0.4844 | 1.3526 | 1.1630 |
No log | 24.0 | 72 | 0.9655 | 0.5518 | 0.9653 | 0.9825 |
No log | 25.0 | 75 | 1.1439 | 0.5044 | 1.1437 | 1.0694 |
No log | 26.0 | 78 | 0.6485 | 0.6657 | 0.6482 | 0.8051 |
No log | 27.0 | 81 | 1.0402 | 0.5597 | 1.0399 | 1.0198 |
No log | 28.0 | 84 | 1.0703 | 0.5020 | 1.0698 | 1.0343 |
No log | 29.0 | 87 | 1.7184 | 0.3911 | 1.7179 | 1.3107 |
No log | 30.0 | 90 | 0.6999 | 0.6274 | 0.6994 | 0.8363 |
No log | 31.0 | 93 | 0.7360 | 0.6213 | 0.7356 | 0.8577 |
No log | 32.0 | 96 | 1.3706 | 0.4827 | 1.3701 | 1.1705 |
No log | 33.0 | 99 | 0.8476 | 0.6069 | 0.8471 | 0.9204 |
No log | 34.0 | 102 | 1.4165 | 0.4730 | 1.4160 | 1.1899 |
No log | 35.0 | 105 | 1.3546 | 0.5066 | 1.3542 | 1.1637 |
No log | 36.0 | 108 | 0.9417 | 0.5729 | 0.9412 | 0.9701 |
No log | 37.0 | 111 | 1.3669 | 0.4434 | 1.3663 | 1.1689 |
No log | 38.0 | 114 | 1.0415 | 0.5570 | 1.0409 | 1.0203 |
No log | 39.0 | 117 | 1.3482 | 0.4694 | 1.3477 | 1.1609 |
Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
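A quick way to confirm that a local environment matches these versions (a convenience sketch, not part of the original card):

```python
# Print installed versions to compare against the list above.
import datasets
import tokenizers
import torch
import transformers

print("transformers:", transformers.__version__)  # expected 4.47.0
print("torch:", torch.__version__)                # expected 2.5.1+cu121
print("datasets:", datasets.__version__)          # expected 3.2.0
print("tokenizers:", tokenizers.__version__)      # expected 0.21.0
```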