# bluesky-spanish-classifier
This model is a fine-tuned version of dccuchile/bert-base-spanish-wwm-uncased on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.6157
- F1 (macro): 0.4186
- Classification report (per class):

| Class | Precision | Recall | F1-score | Support |
|---|---|---|---|---|
| ar | 0.4611 | 0.3939 | 0.4248 | 391 |
| cl | 0.4326 | 0.4236 | 0.4281 | 576 |
| co | 0.4018 | 0.3819 | 0.3916 | 343 |
| es | 0.4861 | 0.4176 | 0.4493 | 546 |
| mx | 0.4580 | 0.5137 | 0.4843 | 584 |
| pe | 0.3013 | 0.3731 | 0.3333 | 386 |
| accuracy | | | 0.4250 | 2826 |
| macro avg | 0.4235 | 0.4173 | 0.4186 | 2826 |
| weighted avg | 0.4305 | 0.4250 | 0.4260 | 2826 |
## Model description
More information needed
## Intended uses & limitations
More information needed
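While the card does not document intended usage, the evaluation report suggests the model classifies Spanish-language text by country of origin, with the six labels ar, cl, co, es, mx, and pe. Below is a minimal inference sketch, assuming the checkpoint is hosted on the Hugging Face Hub as `mariagrandury/bluesky-spanish-classifier` and that its labels are those country codes:

```python
from transformers import pipeline

# Assumptions: the fine-tuned checkpoint is published on the Hub under this repo id
# and its label set matches the country codes from the evaluation report
# (ar, cl, co, es, mx, pe).
classifier = pipeline(
    "text-classification",
    model="mariagrandury/bluesky-spanish-classifier",
)

posts = [
    "Qué rico estuvo el asado del finde.",
    "Mañana hay paro de transporte otra vez.",
]
for post, pred in zip(posts, classifier(posts)):
    print(f"{post!r} -> {pred['label']} ({pred['score']:.3f})")
```

Given the macro F1 of roughly 0.42 on the evaluation set, individual predictions should be treated as weak signals rather than reliable country assignments.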
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.8600231011639855e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.11531859504380029
- num_epochs: 5
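For reference, the settings above map roughly onto the following Hugging Face `TrainingArguments`. This is a sketch reconstructed from the list, not the original training script; the output directory and evaluation strategy are assumptions:

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the hyperparameters listed above.
# output_dir and eval_strategy are assumptions; they are not documented in this card.
training_args = TrainingArguments(
    output_dir="bluesky-spanish-classifier",  # assumed
    learning_rate=2.8600231011639855e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_ratio=0.11531859504380029,
    num_train_epochs=5,
    eval_strategy="epoch",  # assumed; metrics are reported once per epoch below
)
```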
### Training results
Training Loss | Epoch | Step | Validation Loss | Classification Report | F1 |
---|---|---|---|---|---|
1.4573 | 1.0 | 825 | 1.5809 | {'ar': {'precision': 0.5341880341880342, 'recall': 0.319693094629156, 'f1-score': 0.4, 'support': 391.0}, 'cl': {'precision': 0.44039735099337746, 'recall': 0.2309027777777778, 'f1-score': 0.30296127562642367, 'support': 576.0}, 'co': {'precision': 0.45930232558139533, 'recall': 0.2303206997084548, 'f1-score': 0.3067961165048544, 'support': 343.0}, 'es': {'precision': 0.3184438040345821, 'recall': 0.8095238095238095, 'f1-score': 0.45708376421923474, 'support': 546.0}, 'mx': {'precision': 0.42574257425742573, 'recall': 0.3681506849315068, 'f1-score': 0.3948576675849403, 'support': 584.0}, 'pe': {'precision': 0.38222222222222224, 'recall': 0.22279792746113988, 'f1-score': 0.281505728314239, 'support': 386.0}, 'accuracy': 0.3821656050955414, 'macro avg': {'precision': 0.4267160518795062, 'recall': 0.36356483233864084, 'f1-score': 0.357200758708282, 'support': 2826.0}, 'weighted avg': {'precision': 0.4211319360796609, 'recall': 0.3821656050955414, 'f1-score': 0.3626902289400526, 'support': 2826.0}} | 0.3572 |
1.6236 | 2.0 | 1650 | 1.5003 | {'ar': {'precision': 0.5714285714285714, 'recall': 0.3375959079283887, 'f1-score': 0.42443729903536975, 'support': 391.0}, 'cl': {'precision': 0.4013377926421405, 'recall': 0.4166666666666667, 'f1-score': 0.4088586030664395, 'support': 576.0}, 'co': {'precision': 0.5144508670520231, 'recall': 0.2594752186588921, 'f1-score': 0.3449612403100775, 'support': 343.0}, 'es': {'precision': 0.44165170556552963, 'recall': 0.45054945054945056, 'f1-score': 0.4460562103354488, 'support': 546.0}, 'mx': {'precision': 0.38311688311688313, 'recall': 0.6061643835616438, 'f1-score': 0.46949602122015915, 'support': 584.0}, 'pe': {'precision': 0.33819241982507287, 'recall': 0.3005181347150259, 'f1-score': 0.31824417009602196, 'support': 386.0}, 'accuracy': 0.4164897381457891, 'macro avg': {'precision': 0.4416963732717034, 'recall': 0.3951616270133447, 'f1-score': 0.4020089240105862, 'support': 2826.0}, 'weighted avg': {'precision': 0.4339986385070083, 'recall': 0.4164897381457891, 'f1-score': 0.41059938485783715, 'support': 2826.0}} | 0.4020 |
0.7224 | 3.0 | 2475 | 1.7251 | {'ar': {'precision': 0.5352112676056338, 'recall': 0.3887468030690537, 'f1-score': 0.45037037037037037, 'support': 391.0}, 'cl': {'precision': 0.4388609715242881, 'recall': 0.4548611111111111, 'f1-score': 0.44671781756180734, 'support': 576.0}, 'co': {'precision': 0.291005291005291, 'recall': 0.48104956268221577, 'f1-score': 0.3626373626373626, 'support': 343.0}, 'es': {'precision': 0.4835164835164835, 'recall': 0.40293040293040294, 'f1-score': 0.43956043956043955, 'support': 546.0}, 'mx': {'precision': 0.4577922077922078, 'recall': 0.4828767123287671, 'f1-score': 0.47, 'support': 584.0}, 'pe': {'precision': 0.3289902280130293, 'recall': 0.2616580310880829, 'f1-score': 0.29148629148629146, 'support': 386.0}, 'accuracy': 0.4182590233545648, 'macro avg': {'precision': 0.42256274157615564, 'recall': 0.4120204372016056, 'f1-score': 0.4101287136027119, 'support': 2826.0}, 'weighted avg': {'precision': 0.43177891628106374, 'recall': 0.4182590233545648, 'f1-score': 0.4192436665352936, 'support': 2826.0}} | 0.4101 |
0.3648 | 4.0 | 3300 | 2.1768 | {'ar': {'precision': 0.3978260869565217, 'recall': 0.4680306905370844, 'f1-score': 0.4300822561692127, 'support': 391.0}, 'cl': {'precision': 0.4788732394366197, 'recall': 0.3541666666666667, 'f1-score': 0.40718562874251496, 'support': 576.0}, 'co': {'precision': 0.35443037974683544, 'recall': 0.40816326530612246, 'f1-score': 0.3794037940379404, 'support': 343.0}, 'es': {'precision': 0.4577702702702703, 'recall': 0.49633699633699635, 'f1-score': 0.47627416520210897, 'support': 546.0}, 'mx': {'precision': 0.47755834829443444, 'recall': 0.4554794520547945, 'f1-score': 0.4662576687116564, 'support': 584.0}, 'pe': {'precision': 0.3106060606060606, 'recall': 0.31865284974093266, 'f1-score': 0.3145780051150895, 'support': 386.0}, 'accuracy': 0.4200283085633404, 'macro avg': {'precision': 0.412844064218457, 'recall': 0.4168049867737662, 'f1-score': 0.4122969196630872, 'support': 2826.0}, 'weighted avg': {'precision': 0.4252233505074715, 'recall': 0.4200283085633404, 'f1-score': 0.41988813459845997, 'support': 2826.0}} | 0.4123 |
0.2228 | 5.0 | 4125 | 2.6157 | {'ar': {'precision': 0.46107784431137727, 'recall': 0.3938618925831202, 'f1-score': 0.42482758620689653, 'support': 391.0}, 'cl': {'precision': 0.4326241134751773, 'recall': 0.4236111111111111, 'f1-score': 0.4280701754385965, 'support': 576.0}, 'co': {'precision': 0.401840490797546, 'recall': 0.3819241982507289, 'f1-score': 0.39162929745889385, 'support': 343.0}, 'es': {'precision': 0.4861407249466951, 'recall': 0.4175824175824176, 'f1-score': 0.4492610837438424, 'support': 546.0}, 'mx': {'precision': 0.4580152671755725, 'recall': 0.5136986301369864, 'f1-score': 0.48426150121065376, 'support': 584.0}, 'pe': {'precision': 0.301255230125523, 'recall': 0.37305699481865284, 'f1-score': 0.3333333333333333, 'support': 386.0}, 'accuracy': 0.42498230714791224, 'macro avg': {'precision': 0.4234922784719819, 'recall': 0.4172892074138362, 'f1-score': 0.41856382956536947, 'support': 2826.0}, 'weighted avg': {'precision': 0.4304679708106477, 'recall': 0.42498230714791224, 'f1-score': 0.4259648943332467, 'support': 2826.0}} | 0.4186 |
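The per-epoch reports above follow the dictionary format of scikit-learn's `classification_report(..., output_dict=True)`. Here is a minimal sketch of a `compute_metrics` hook that would produce such a report together with the macro F1; this is an assumption about how the metrics were computed, not the author's code:

```python
import numpy as np
from sklearn.metrics import classification_report, f1_score

# Hypothetical label order; the actual id-to-label mapping is not documented in this card.
LABELS = ["ar", "cl", "co", "es", "mx", "pe"]

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "classification_report": classification_report(
            labels, preds, target_names=LABELS, output_dict=True, zero_division=0
        ),
        "f1": f1_score(labels, preds, average="macro"),
    }
```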
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
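To approximate this environment, the versions above can be pinned in a requirements file. This is a sketch; the exact PyTorch build (e.g. the `+cu124` variant) depends on the local CUDA setup:

```text
transformers==4.48.3
torch==2.5.1
datasets==3.3.2
tokenizers==0.21.0
```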