Llama-2-13b-chat-hf - bnb 4bit
- Model creator: Meta
- Original model: Llama-2-13b-chat-hf
Description
This model is a 4-bit quantized version of Llama-2-13b-chat-hf, quantized with bitsandbytes. It is intended for fine-tuning. The PAD token is set to the UNK token.
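As a rough usage sketch (not part of the original card), the checkpoint can typically be loaded with transformers and bitsandbytes as shown below; the quantization settings (NF4, fp16 compute) are assumptions and may differ from how the weights were saved.

```python
# Minimal loading sketch, assuming transformers + bitsandbytes are installed.
# Quantization settings below are illustrative assumptions, not taken from the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "itsanurag/Llama-2-13b-Chat-4BitQuantized"

# Assumed 4-bit config (NF4 quantization, fp16 compute); adjust to your hardware.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The card states the PAD token is set to UNK; set it explicitly in case it is missing.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.unk_token

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model.config.pad_token_id = tokenizer.pad_token_id
```

From here, the model can be passed to a standard fine-tuning loop (for example with PEFT/LoRA), which is the stated purpose of this quantized checkpoint.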
- Model repository: itsanurag/Llama-2-13b-Chat-4BitQuantized
- Base model: meta-llama/Llama-2-13b-chat-hf