---
datasets:
- ConvLab/tm1
- ConvLab/tm2
- ConvLab/tm3
---
# gpt2-medium-nlg-tm1_tm2_tm3
This model is a fine-tuned version of [GPT2-medium](https://huggingface.co/gpt2-medium) on [TaskMaster1](https://huggingface.co/datasets/ConvLab/tm1), [TaskMaster2](https://huggingface.co/datasets/ConvLab/tm2), and [TaskMaster3](https://huggingface.co/datasets/ConvLab/tm3).
Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage.
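
For quick experimentation outside ConvLab-3, the checkpoint can also be loaded with the plain `transformers` API. The sketch below is an assumption-laden smoke test, not the intended ConvLab-3 usage: the hub id `ConvLab/gpt2-medium-nlg-tm1_tm2_tm3` and the dialogue-act prompt string are both assumed, and ConvLab-3 defines the actual input serialization.

```python
# Minimal loading sketch via the generic transformers API.
# Assumptions: the hub id below, and the placeholder dialogue-act prompt;
# see ConvLab-3 for the serialization format the model was trained on.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ConvLab/gpt2-medium-nlg-tm1_tm2_tm3"  # assumed hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical prompt; replace with a dialogue act serialized as in ConvLab-3.
inputs = tokenizer("inform ( restaurant = Via 313 )", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```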
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 5e-5
- train_batch_size: 64
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: AdamW
- lr_scheduler_type: linear
- num_epochs: 20
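
For reference, these values map onto `transformers` `TrainingArguments` roughly as below. The `output_dir` is a placeholder, and treating `train_batch_size: 64` as a per-device value is an assumption, consistent with the reported total of 128 (64 × 2 accumulation steps).

```python
# Rough TrainingArguments equivalent of the hyperparameters above
# (transformers 4.23 API). output_dir is a placeholder; per-device
# batch size of 64 is assumed from "train_batch_size: 64".
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gpt2-medium-nlg-tm1_tm2_tm3",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    gradient_accumulation_steps=2,  # total train batch size: 64 * 2 = 128
    optim="adamw_hf",               # AdamW optimizer
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```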
### Framework versions
- Transformers 4.23.1
- PyTorch 1.10.1+cu111