base_model: Goekdeniz-Guelmez/Josiefied-DeepSeek-R1-0528-Qwen3-8B-abliterated-v1
[EXL3](https://github.com/turboderp-org/exllamav3) quantization of [Josiefied-DeepSeek-R1-0528-Qwen3-8B-abliterated-v1](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-DeepSeek-R1-0528-Qwen3-8B-abliterated-v1), 8 bits per weight, including output layers.
### HumanEval (argmax)

| Model | Q4 | Q6 | Q8 | FP16 |
| ----- | -- | -- | -- | ---- |
| [Josiefied-DeepSeek-R1-0528-Qwen3-8B-abliterated-v1-exl3-8bpw-h8](https://huggingface.co/isogen/Josiefied-DeepSeek-R1-0528-Qwen3-8B-abliterated-v1-exl3-8bpw-h8) | 75.6 | 76.8 | 79.3 | 80.5 |
| [DeepSeek-R1-0528-Qwen3-8B-exl3-8bpw](https://huggingface.co/bullerwins/DeepSeek-R1-0528-Qwen3-8B-exl3-8.0bpw) | 79.9 | 77.4 | 78.7 | 79.3 |
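"Argmax" above refers to greedy decoding: at every generation step the single highest-probability token is taken, which makes benchmark runs deterministic and reproducible. A minimal sketch of the selection step (illustrative only, not tied to exllamav3's actual sampler API; the logits values are made up):

```python
import numpy as np

def greedy_next_token(logits: np.ndarray) -> int:
    """Argmax (greedy) decoding: always pick the highest-logit token.

    Equivalent to sampling with temperature 0, so repeated runs over the
    same prompts produce identical outputs.
    """
    return int(np.argmax(logits))

# Hypothetical logits over a 4-token vocabulary:
logits = np.array([-1.2, 3.4, 0.7, 2.9])
print(greedy_next_token(logits))  # -> 1
```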