solarkyle/GLM-4.7-Flash-GGUF
Tags: Text Generation, GGUF, quantized, llama-cpp, llama.cpp, Mixture of Experts, glm4, 4-bit precision, Q4_K_M, conversational
License: apache-2.0
Files and versions (main branch)
GLM-4.7-Flash-GGUF: 18.1 GB, 1 contributor, history of 8 commits
Latest commit: solarkyle, "Update README.md" (4b16fc0, verified), about 1 month ago
.gitattributes (1.58 kB): Upload GLM-4.7-Flash-Q4_K_M.gguf with huggingface_hub, about 1 month ago
GLM-4.7-Flash-Q4_K_M.gguf (18.1 GB, stored with xet): Upload GLM-4.7-Flash-Q4_K_M.gguf with huggingface_hub, about 1 month ago
README.md (4.06 kB): Update README.md, about 1 month ago