Generated from https://github.com/yhyu13/AutoGPTQ.git, branch `cuda_dev`

Original weights: https://huggingface.co/tiiuae/falcon-7b

Note: this is a quantization of the base model, which has not been fine-tuned with chat instructions.
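A minimal usage sketch for loading a GPTQ-quantized checkpoint like this one with the AutoGPTQ library. The local path below is a hypothetical placeholder, not part of this repo; it requires a CUDA GPU and the `auto-gptq` and `transformers` packages installed.

```python
# Hypothetical usage sketch for this quantized checkpoint with AutoGPTQ.
# Assumes the quantized weights have been downloaded to a local directory.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_dir = "./falcon-7b-gptq"  # hypothetical local path to the quantized files

tokenizer = AutoTokenizer.from_pretrained(model_dir)

# trust_remote_code=True is needed because Falcon ships custom modeling code.
model = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    device="cuda:0",
    trust_remote_code=True,
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to("cuda:0")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0]))
```

Because the model uses custom code, `trust_remote_code=True` must also be passed when loading the tokenizer or model directly through `transformers`.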

This model is not available via any of the supported Inference Providers and cannot be deployed to the HF Inference API, which does not support models that require custom code execution.