About

imatrix quants of Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v3

| File Name | Description |
| --- | --- |
| nqlsg_Q5_K_M.gguf | Quantized model weights using the Q5_K_M quantization type. |
| nqlsg_dynamic.gguf | Model weights with experimental dynamic IQ3_XXS quantization applied. |

For more information on dynamic quantization, see [this discussion](https://huggingface.co/Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v3-GGUF/discussions/1).
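Both files use the GGUF container format. As a minimal sketch, the fixed GGUF header (for GGUF v2+: a 4-byte `GGUF` magic, a `uint32` version, a `uint64` tensor count, and a `uint64` metadata key-value count, all little-endian) can be parsed with the standard library; the function name here is illustrative, not part of any official tooling.

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed 24-byte GGUF v2+ header: magic, version, counts."""
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "tensor_count": n_tensors, "kv_count": n_kv}

# Example with a synthetic header (version 3, 2 tensors, 5 KV pairs):
header = struct.pack("<4sIQQ", b"GGUF", 3, 2, 5)
print(read_gguf_header(header))  # {'version': 3, 'tensor_count': 2, 'kv_count': 5}
```

Reading the first 24 bytes of either `.gguf` file above the same way would show its version and tensor/metadata counts without loading the weights.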

- Format: GGUF
- Model size: 14.8B params
- Architecture: qwen2
- Precision: 5-bit

