## Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
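
As a rough illustration, these values map onto `transformers.TrainingArguments` as sketched below. The `output_dir` is a placeholder, and the model, dataset, and `Trainer` wiring are not specified in this card:

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters as TrainingArguments.
# output_dir is a placeholder; the card does not specify one.
training_args = TrainingArguments(
    output_dir="./results",
    learning_rate=5e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size: 16 * 2 = 32
    adam_beta1=0.9,                  # Adam betas/epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                       # "Native AMP" mixed-precision training
)
```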
## Training results
## Framework versions
- Transformers 4.35.2
- Pytorch 2.0.0
- Datasets 2.15.0
- Tokenizers 0.15.0
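
To check a local environment against these versions, one quick sanity check (assuming the packages are installed) is:

```python
# Compare installed versions against those reported in this card.
import datasets
import tokenizers
import torch
import transformers

print(transformers.__version__)  # expected: 4.35.2
print(torch.__version__)         # expected: 2.0.0
print(datasets.__version__)      # expected: 2.15.0
print(tokenizers.__version__)    # expected: 0.15.0
```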