---
license: apache-2.0
base_model: Qwen/Qwen3-14B
---
[EXL3](https://github.com/turboderp-org/exllamav3) quantization of [Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B), 4 bits per weight.