Selene-1-Mini
🛝 Playground | 📄 Technical report | 💻 GitHub | 👀 Sign up for the API
This model was quantised to a 4-bit weight, 16-bit activation (W4A16) format using GPTQ, starting from AtlaAI/Selene-1-Mini-Llama-3.1-8B.
Quantisation was performed with vLLM's llm-compressor library (https://docs.vllm.ai/en/latest/features/quantization/int4.html).
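As a rough sketch of what such a run looks like, the llm-compressor recipe below applies a W4A16 GPTQ scheme. The model id is real, but the dataset name, sequence length, and recipe details (ignored layers, targets) are assumptions for illustration, not taken from the report:

```python
# Sketch only: a W4A16 GPTQ recipe with llm-compressor.
# Dataset, max_seq_length, and ignore list are illustrative assumptions.
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

recipe = GPTQModifier(
    targets="Linear",    # quantise the Linear layers...
    scheme="W4A16",      # ...to 4-bit weights with 16-bit activations
    ignore=["lm_head"],  # keep the output head in full precision
)

oneshot(
    model="AtlaAI/Selene-1-Mini-Llama-3.1-8B",
    dataset="open_platypus",       # placeholder; the card uses Selene training data
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=512,   # the card reports 512 calibration datapoints
)
```

This is a configuration sketch rather than a runnable script: it downloads an 8B model and requires a GPU, and the calibration dataset used for the released weights is the Selene-1-Mini training data, not a public placeholder.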
Refer to the original model card for more details on the model.
This quantisation was calibrated on a sample of 512 datapoints drawn from the data used to train Selene-1-Mini. As a result, our quantised model shows minimal performance degradation, losing <0.5% overall across benchmarks!
For reference, a GPTQ-quantised 8-bit Llama-3.1-8B shows ~1.5% degradation across the same benchmarks.
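To make the W4A16 idea concrete, here is a minimal, self-contained sketch of symmetric per-group 4-bit weight quantisation in plain Python. The group size of 128 is an assumed GPTQ-style default, and this round-to-nearest scheme only illustrates the storage format; it is not the GPTQ algorithm itself, which additionally compensates for quantisation error layer by layer:

```python
# W4A16 in miniature: weights are stored as 4-bit integers with one
# float scale per group, while activations stay in 16-bit floats.
# Group size 128 is an assumed GPTQ-style default.
import random

def quantize_w4(weights, group_size=128):
    """Quantise a flat list of float weights to int4, one scale per group."""
    qweights, scales = [], []
    for start in range(0, len(weights), group_size):
        group = weights[start:start + group_size]
        # Symmetric int4 range is [-8, 7]; map the largest |w| to 7.
        scale = max(abs(w) for w in group) / 7 or 1.0
        scales.append(scale)
        qweights.extend(max(-8, min(7, round(w / scale))) for w in group)
    return qweights, scales

def dequantize_w4(qweights, scales, group_size=128):
    """Reconstruct approximate float weights from int4 values and scales."""
    return [q * scales[i // group_size] for i, q in enumerate(qweights)]

random.seed(0)
w = [random.gauss(0, 0.02) for _ in range(1024)]
qw, s = quantize_w4(w)
w_hat = dequantize_w4(qw, s)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
assert all(-8 <= q <= 7 for q in qw)   # every value fits in 4 bits
assert max_err <= max(s) / 2 + 1e-12   # error bounded by half a quant step
```

The per-group scale is why calibration data matters: the scales (and, in GPTQ, the error-compensating updates) are fitted to activations and weights seen during calibration, which is why sampling calibration points from the model's own training distribution keeps degradation low.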
Base model: meta-llama/Llama-3.1-8B