Sharded GGUF (Q5_K_M) version of internlm/internlm2_5-1_8b-chat-gguf.
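
The shards can be loaded locally with llama.cpp or its Python bindings. Below is a minimal usage sketch assuming llama-cpp-python and huggingface_hub are installed; the shard filename is a placeholder, so check the repository's file listing for the actual name of the first shard.

```python
# Minimal sketch: download the shards and load them with llama-cpp-python.
from huggingface_hub import snapshot_download
from llama_cpp import Llama

# Download every file in the repository (all GGUF shards).
local_dir = snapshot_download("mitulagr2/gguf-sharded-q5_k_m-internlm2_5-1_8b-chat")

# For splits produced by gguf-split, llama.cpp loads the remaining shards
# automatically when given the first one.
# NOTE: the filename below is hypothetical; use the real first-shard name from the repo.
llm = Llama(
    model_path=f"{local_dir}/internlm2_5-1_8b-chat-q5_k_m-00001-of-00002.gguf",
    n_ctx=4096,
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}]
)
print(reply["choices"][0]["message"]["content"])
```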

Format: GGUF
Model size: 1.89B params
Architecture: internlm2

Model tree for mitulagr2/gguf-sharded-q5_k_m-internlm2_5-1_8b-chat
This model is listed as one of 2 quantized variants of the base model.