Brief Description

Llama 2 7B base model fine-tuned on 1,000 random samples from the Alpaca GPT-4 instruction dataset using QLoRA with 4-bit quantization.

This is a demo of how an LLM can be fine-tuned in a low-resource environment such as Google Colab.
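The setup can be sketched with the `transformers` and `peft` libraries. This is an illustrative configuration, not the exact hyperparameters from the notebook: the rank, alpha, dropout, and target modules below are assumptions chosen as common QLoRA defaults.

```python
# Sketch of a QLoRA setup: load the base model in 4-bit NF4 and
# attach low-rank adapters. Hyperparameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization with double quantization (the QLoRA recipe)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters on the attention projections (assumed target modules)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```

Because only the small adapter matrices are trained while the frozen base weights sit in 4-bit precision, the memory footprint fits on a single Colab GPU.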

You can find more details about the experiment in the Colab notebook used to fine-tune the model here.
