These GGUF quants were made from https://huggingface.co/zai-org/GLM-ASR-Nano-2512 and are designed for use in KoboldCpp 1.104 and above.
This repo contains 3 GGUF quants of GLM-ASR-Nano-2512, as well as the associated mmproj file.
To use:
- Download the main model (GLM-ASR-Nano-1.6B-2512-Q4_K.gguf) and the mmproj (mmproj-GLM-ASR-Nano-2512-Q8_0.gguf).
- Launch KoboldCpp and go to the Loaded Files tab.
- Select the main model as "Text Model" and the mmproj as "mmproj". (A command-line launch is sketched below.)
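
If you prefer to skip the GUI, the same two files can be passed on the command line. A minimal sketch, assuming both GGUF files are in the current directory and a KoboldCpp build (1.104+) that accepts the standard `--model` and `--mmproj` flags; paths and port are placeholders to adjust for your setup:

```bash
# Launch KoboldCpp with the ASR model and its audio projector.
python koboldcpp.py \
  --model GLM-ASR-Nano-1.6B-2512-Q4_K.gguf \
  --mmproj mmproj-GLM-ASR-Nano-2512-Q8_0.gguf \
  --port 5001
```

Once running, the model is reachable through KoboldCpp's usual web UI (http://localhost:5001 with the default port above), where audio can be attached for transcription.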