beto-prostata-fullft
This model is a fine-tuned version of dccuchile/bert-base-spanish-wwm-cased on an unspecified dataset (the dataset name is not provided). It achieves the following results on the evaluation set:
- Loss: 0.7854
- F1: 0.5298
- Per-entity report (a sketch of how such a report can be computed follows the table):

Entity | Precision | Recall | F1-score | Support |
---|---|---|---|---|
BIOMARCADOR | 0.0816 | 0.0085 | 0.0154 | 472 |
CANCER | 0.5559 | 0.4971 | 0.5249 | 1030 |
CIRUGIA | 0.0000 | 0.0000 | 0.0000 | 306 |
DOSIS | 0.1111 | 0.0081 | 0.0151 | 247 |
EDAD | 0.0000 | 0.0000 | 0.0000 | 93 |
FECHA | 0.4812 | 0.7599 | 0.5892 | 958 |
GLEASON | 0.7075 | 0.6317 | 0.6675 | 448 |
MEDICAMENTO | 0.6852 | 0.8889 | 0.7739 | 360 |
TNM | 0.7696 | 0.8352 | 0.8011 | 540 |
TRATAMIENTO | 0.4125 | 0.2426 | 0.3056 | 408 |
micro avg | 0.5720 | 0.4934 | 0.5298 | 4862 |
macro avg | 0.3805 | 0.3872 | 0.3692 | 4862 |
weighted avg | 0.4622 | 0.4934 | 0.4630 | 4862 |
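The entity-level metrics above match the structure of seqeval's `classification_report` output (an assumption based on the format; the evaluation code is not included in this card). A minimal sketch of how such a report and the micro F1 can be computed from gold and predicted BIO tag sequences:

```python
# Minimal sketch of producing an entity-level report with seqeval
# (assumption: the card's metrics were computed this way; the tags below are illustrative).
from seqeval.metrics import classification_report, f1_score

# One list of BIO tags per sentence: gold labels and model predictions.
y_true = [["B-CANCER", "I-CANCER", "O", "B-FECHA", "O"]]
y_pred = [["B-CANCER", "I-CANCER", "O", "O", "O"]]

report = classification_report(y_true, y_pred, output_dict=True)
print(report["CANCER"])                   # per-entity precision/recall/F1/support
print("micro F1:", f1_score(y_true, y_pred))
```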
Model description
More information needed
Intended uses & limitations
More information needed
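No usage details are provided, so the following is only a minimal sketch of running the checkpoint for named-entity recognition with the transformers pipeline. It assumes the repository holds full token-classification weights (the model name suggests a full fine-tune, although PEFT appears in the framework versions below); the example sentence is invented.

```python
# Minimal sketch, assuming a token-classification (NER) head; not an official usage example.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="KevinAlejandro17/beto-prostata-fullft",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

# Hypothetical Spanish clinical note snippet.
text = "Paciente con adenocarcinoma de próstata, Gleason 7 (3+4), en tratamiento con bicalutamida."
print(ner(text))
```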
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (see the TrainingArguments sketch after this list):
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
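A minimal sketch of how these values map onto transformers' `TrainingArguments` (a reconstruction for illustration, not the original training script; `output_dir` and everything not listed above are assumptions):

```python
# Sketch mapping the listed hyperparameters onto TrainingArguments
# (reconstruction; output_dir is a placeholder, not taken from the original run).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="beto-prostata-fullft",
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",          # AdamW, betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,                    # native AMP mixed precision
)
```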
Training results
Training Loss | Epoch | Step | Validation Loss | F1 |
---|---|---|---|---|
0.9684 | 2.5773 | 500 | 0.7854 | 0.5298 |

The per-entity report logged at step 500 is identical to the evaluation report shown above.
Framework versions
- PEFT 0.16.0
- Transformers 4.54.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.2