---
base_model: google/gemma-3-4b-it
---

[EXL3](https://github.com/turboderp-org/exllamav3) quantization of [gemma-3-4b-it](https://huggingface.co/unsloth/gemma-3-4b-it), 8 bits per weight, including output layers.

### HumanEval (argmax)

| Model                                                                                  | Q4   | Q6   | Q8   | FP16 |
| -------------------------------------------------------------------------------------- | ---- | ---- | ---- | ---- |
| [gemma-3-4b-it-exl3-8bpw-h8](https://huggingface.co/isogen/gemma-3-4b-it-exl3-8bpw-h8) | 72.0 | 73.2 | 71.3 | 70.1 |
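
For reference, here is a minimal sketch of downloading and loading these weights. The `huggingface_hub` download call is standard; the loading step is an assumption based on the example scripts in the exllamav3 repository, and the class names (`Config`, `Model`, `Cache`, `Tokenizer`, `Generator`) and their signatures may differ between versions.

```python
# A minimal sketch: fetch the quantized weights, then load with exllamav3.
# snapshot_download is the standard huggingface_hub API; the exllamav3
# classes below are assumptions based on the repo's examples/ directory
# and may not match the exact API of your installed version.
from huggingface_hub import snapshot_download

model_dir = snapshot_download("isogen/gemma-3-4b-it-exl3-8bpw-h8")

from exllamav3 import Config, Model, Cache, Tokenizer, Generator

config = Config.from_directory(model_dir)    # reads the EXL3 quant config
model = Model.from_config(config)
cache = Cache(model, max_num_tokens=8192)    # KV cache sized for the context
model.load()
tokenizer = Tokenizer.from_config(config)

generator = Generator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Hello", max_new_tokens=32))
```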