Generated from https://github.com/yhyu13/AutoGPTQ.git, branch cuda_dev
Original weight: https://huggingface.co/tiiuae/falcon-7b-instruct
Note: at this time, AutoGPTQ does not evaluate successfully with a group size of 128, so this model was quantized with a group size of 64. A loading sketch is given below.
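
A minimal loading sketch, not part of the original card: the repo id below is a placeholder for wherever these quantized weights are hosted, and it assumes AutoGPTQ from the branch above is installed, the weights were saved as safetensors, and a CUDA GPU is available. Falcon models ship custom modeling code, so `trust_remote_code=True` is required.

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# Placeholder repo id -- replace with the actual quantized-weights repo.
repo_id = "yhyu13/falcon-7b-instruct-autogptq"

tokenizer = AutoTokenizer.from_pretrained(repo_id, use_fast=True)

# Load the GPTQ-quantized checkpoint (4-bit, group size 64) onto the GPU.
# trust_remote_code=True is needed because Falcon uses custom modeling code.
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    device="cuda:0",
    use_safetensors=True,      # assumes the weights were exported as safetensors
    trust_remote_code=True,
)

prompt = "Write a short poem about quantization."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```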