MeetPEFT: Parameter Efficient Fine-Tuning on LLMs for Long Meeting Summarization

We use quantized LongLoRA to fine-tune a Llama-2-7b model and extend the context length from 4k to 16k.
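The key idea LongLoRA adds on top of LoRA is shifted sparse attention (S²-Attn): during fine-tuning, tokens attend only within fixed-size groups, and half the attention heads use groups shifted by half the group size so information flows between neighbors. A minimal sketch of the grouping/shift step on toy token indices (not the model's actual implementation):

```python
import numpy as np

def shift_groups(tokens, group_size, shift=False):
    # Split the sequence into attention groups of `group_size` tokens.
    # When `shift` is True, roll the sequence by half a group first,
    # mimicking the shifted heads in LongLoRA's S^2-Attn so adjacent
    # groups overlap across the two head sets.
    t = np.asarray(tokens)
    if shift:
        t = np.roll(t, -(group_size // 2))
    return [t[i:i + group_size].tolist() for i in range(0, len(t), group_size)]

seq = list(range(8))
print(shift_groups(seq, 4))              # [[0, 1, 2, 3], [4, 5, 6, 7]]
print(shift_groups(seq, 4, shift=True))  # [[2, 3, 4, 5], [6, 7, 0, 1]]
```

With both head sets combined, every token can exchange information with its neighboring group despite each head attending over only `group_size` tokens, which is what keeps the extended 16k context affordable during fine-tuning.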

The model is fine-tuned on the MeetingBank and QMSum meeting-summarization datasets.


Datasets used to train MeetPEFT/MeetPEFT-7B-16K: MeetingBank, QMSum