---
base_model: Goekdeniz-Guelmez/Josiefied-Qwen3-4B-abliterated-v1
---

[EXL3](https://github.com/turboderp-org/exllamav3) quantization of [Josiefied-Qwen3-4B-abliterated-v1](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-4B-abliterated-v1), 8 bits per weight, including output layers.

### HumanEval (argmax)

| Model | Q4 | Q6 | Q8 | FP16 |
| ----- | -- | -- | -- | ---- |
| [Josiefied-Qwen3-4B-abliterated-v1-exl3-4bpw](https://huggingface.co/isogen/Josiefied-Qwen3-4B-abliterated-v1-exl3-4bpw) | 79.3% | 78.7% | 78.7% | 80.5% |
| [Josiefied-Qwen3-4B-abliterated-v1-exl3-6bpw](https://huggingface.co/isogen/Josiefied-Qwen3-4B-abliterated-v1-exl3-6bpw) | 79.3% | 78.0% | 78.7% | 78.7% |
| [Josiefied-Qwen3-4B-abliterated-v1-exl3-8bpw-h8](https://huggingface.co/isogen/Josiefied-Qwen3-4B-abliterated-v1-exl3-8bpw-h8) | 79.3% | 78.0% | 76.8% | 78.0% |
| [Qwen3-4B-exl3-4bpw](https://huggingface.co/isogen/Qwen3-4B-exl3-4bpw) | 80.5% | 81.1% | 81.7% | 80.5% |
| [Qwen3-4B-exl3-6bpw](https://huggingface.co/isogen/Qwen3-4B-exl3-6bpw) | 80.5% | 85.4% | 86.0% | 86.0% |
| [Qwen3-4B-exl3-8bpw-h8](https://huggingface.co/isogen/Qwen3-4B-exl3-8bpw-h8) | 82.3% | 84.8% | 83.5% | 82.9% |
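
Below is a minimal sketch of downloading this quant and running a short generation. The `huggingface_hub` download call is standard; the exllamav3 loading pattern (the `Config`/`Model`/`Cache`/`Tokenizer`/`Generator` names and their signatures) is an assumption based on the project's example scripts and may differ between versions, so check the [exllamav3](https://github.com/turboderp-org/exllamav3) repository for the API matching your installed release.

```python
# Sketch: fetch the 8bpw-h8 quant and generate with exllamav3.
# The exllamav3 class names and call signatures below are assumptions
# drawn from the project's examples; consult the repo for the current API.
from huggingface_hub import snapshot_download
from exllamav3 import Config, Model, Cache, Tokenizer, Generator

# Download the quantized weights to the local HF cache.
model_dir = snapshot_download("isogen/Josiefied-Qwen3-4B-abliterated-v1-exl3-8bpw-h8")

config = Config.from_directory(model_dir)      # reads config + quantized tensors
model = Model.from_config(config)
cache = Cache(model, max_num_tokens=8192)      # KV cache size; adjust to fit VRAM
model.load()
tokenizer = Tokenizer.from_config(config)

generator = Generator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Write a haiku about quantization.", max_new_tokens=64))
```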