Mungert/Qwen3-1.7B-GGUF
Text Generation · Transformers · GGUF · conversational
arXiv: 2505.09388 · License: apache-2.0
Files and versions (branch: main) · 1 contributor · 56 commits
Latest commit: Mungert · "Delete file(s) containing ['q2_k_l.gguf', 'q3_k_l.gguf', 'q4_k_l.gguf', 'q5_k_l.gguf', 'q6_k_l.gguf', 'q4_0_l.gguf', 'q5_0_l.gguf', 'q4_1_l.gguf', 'q5_1_l.gguf', 'bf16_q6_k.gguf']" · 012eedf (verified) · 27 days ago
Each entry's last commit is an "Upload <file> with huggingface_hub" message.

| File | Size | Uploaded |
|------|------|----------|
| .gitattributes | 3.58 kB | 4 months ago |
| Qwen3-1.7B-bf16.gguf | 4.07 GB | 4 months ago |
| Qwen3-1.7B-bf16_q8_0.gguf | 2.75 GB | 4 months ago |
| Qwen3-1.7B-f16_q8_0.gguf | 3.24 GB | 4 months ago |
| Qwen3-1.7B-iq3_m.gguf | 1.06 GB | 4 months ago |
| Qwen3-1.7B-iq3_s.gguf | 1.04 GB | 4 months ago |
| Qwen3-1.7B-iq3_xs.gguf | 1.01 GB | 4 months ago |
| Qwen3-1.7B-iq3_xxs.gguf | 972 MB | 4 months ago |
| Qwen3-1.7B-iq4_nl.gguf | 1.23 GB | 4 months ago |
| Qwen3-1.7B-iq4_xs.gguf | 1.18 GB | 4 months ago |
| Qwen3-1.7B-q3_k_m.gguf | 1.19 GB | 4 months ago |
| Qwen3-1.7B-q3_k_s.gguf | 1.05 GB | 4 months ago |
| Qwen3-1.7B-q4_0.gguf | 1.15 GB | 4 months ago |
| Qwen3-1.7B-q4_1.gguf | 1.28 GB | 4 months ago |
| Qwen3-1.7B-q4_k_m.gguf | 1.35 GB | 4 months ago |
| Qwen3-1.7B-q4_k_s.gguf | 1.32 GB | 4 months ago |
| Qwen3-1.7B-q5_0.gguf | 1.4 GB | 4 months ago |
| Qwen3-1.7B-q5_1.gguf | 1.53 GB | 4 months ago |
| Qwen3-1.7B-q5_k_m.gguf | 1.52 GB | 4 months ago |
| Qwen3-1.7B-q5_k_s.gguf | 1.5 GB | 4 months ago |
| Qwen3-1.7B-q6_k_m.gguf | 1.67 GB | 4 months ago |
| Qwen3-1.7B-q8_0.gguf | 2.17 GB | 4 months ago |
| Qwen3-1.7B.imatrix | 2.07 MB | 4 months ago |
| README.md | 23.5 kB | 3 months ago |
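As a minimal sketch of fetching one of the quantizations listed above: the repo id and the `Qwen3-1.7B-<quant>.gguf` file-name pattern come from this listing, and the URL uses Hugging Face's standard `resolve` download endpoint. The `gguf_url` helper is an illustrative name, not part of any library.

```python
REPO_ID = "Mungert/Qwen3-1.7B-GGUF"

def gguf_url(quant: str, revision: str = "main") -> str:
    """Build the direct-download URL for one quantization in this repo.

    Files here follow the pattern Qwen3-1.7B-<quant>.gguf, e.g. quant="q4_k_m".
    """
    return (
        f"https://huggingface.co/{REPO_ID}/resolve/{revision}/"
        f"Qwen3-1.7B-{quant}.gguf"
    )

print(gguf_url("q4_k_m"))
# → https://huggingface.co/Mungert/Qwen3-1.7B-GGUF/resolve/main/Qwen3-1.7B-q4_k_m.gguf
```

In practice the usual route is `huggingface_hub.hf_hub_download(repo_id=REPO_ID, filename="Qwen3-1.7B-q4_k_m.gguf")`, which caches the file locally; the raw URL above is handy for tools (such as llama.cpp) that accept a plain download link.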