---
base_model: Goekdeniz-Guelmez/Josiefied-Qwen3-14B-abliterated-v3
---
[EXL3](https://github.com/turboderp-org/exllamav3) quantization of [Josiefied-Qwen3-14B-abliterated-v3](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-14B-abliterated-v3), 4 bits per weight.
### HumanEval (argmax)
| Model | Q4 | Q6 | Q8 | FP16 |
| -------------------------------------------------------------------------------------------------------------------------- | ---- | ---- | ---- | ---- |
| [Josiefied-Qwen3-14B-abliterated-v3-exl3-4bpw](https://huggingface.co/isogen/Josiefied-Qwen3-14B-abliterated-v3-exl3-4bpw) | 71.3 | 70.1 | 69.5 | 71.3 |
| [Josiefied-Qwen3-14B-abliterated-v3-exl3-6bpw](https://huggingface.co/isogen/Josiefied-Qwen3-14B-abliterated-v3-exl3-6bpw) | 73.2 | 78.0 | 76.2 | 75.6 |
| [Qwen3-14B-exl3-4bpw](https://huggingface.co/isogen/Qwen3-14B-exl3-4bpw) | 88.4 | 89.0 | 89.0 | 89.0 |
| [Qwen3-14B-exl3-6bpw](https://huggingface.co/isogen/Qwen3-14B-exl3-6bpw) | 89.6 | 88.4 | 89.6 | 89.0 |