---
base_model: google/gemma-3-12b-it
---

[EXL3](https://github.com/turboderp-org/exllamav3) quantization of [gemma-3-12b-it](https://huggingface.co/unsloth/gemma-3-12b-it), 6 bits per weight.

### HumanEval (argmax)

| Model | Q4 | Q6 | Q8 | FP16 |
| ---------------------------------------------------------------------------------- | ---- | ---- | ---- | ---- |
| [gemma-3-12b-it-exl3-4bpw](https://huggingface.co/isogen/gemma-3-12b-it-exl3-4bpw) | 82.9 | 82.9 | 83.5 | 83.5 |
| [gemma-3-12b-it-exl3-6bpw](https://huggingface.co/isogen/gemma-3-12b-it-exl3-6bpw) | 83.5 | 81.7 | 82.3 | 82.3 |
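
To try the quant locally, the weights can be fetched with `huggingface_hub` and loaded through exllamav3. The sketch below is illustrative, not part of this card: `snapshot_download` is the standard Hub API, while the `Config`/`Model`/`Cache`/`Tokenizer`/`Generator` usage follows the pattern shown in the exllamav3 repository README and may differ between versions; the prompt and `max_num_tokens` value are arbitrary.

```python
# Minimal sketch, assuming exllamav3's Python API follows the
# Config/Model/Cache/Tokenizer/Generator pattern from its README;
# class names and signatures may vary between exllamav3 versions.
from huggingface_hub import snapshot_download
from exllamav3 import Config, Model, Cache, Tokenizer, Generator

# Download the 6bpw quant from the Hub and get its local directory.
model_dir = snapshot_download("isogen/gemma-3-12b-it-exl3-6bpw")

# Load the model, cache, and tokenizer from the downloaded directory.
config = Config.from_directory(model_dir)
model = Model.from_config(config)
cache = Cache(model, max_num_tokens=8192)
model.load()
tokenizer = Tokenizer.from_config(config)

# Generate a short completion.
generator = Generator(model=model, cache=cache, tokenizer=tokenizer)
output = generator.generate(prompt="Five good reasons to adopt a cat:", max_new_tokens=200)
print(output)
```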