---
base_model: Qwen/Qwen3-1.7B-Base
library_name: transformers
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---

*Produced by [Antigma Labs](https://antigma.ai), [Antigma Quantize Space](https://huggingface.co/spaces/Antigma/quantize-my-repo)*

*Follow Antigma Labs on X: [https://x.com/antigma_labs](https://x.com/antigma_labs)*

*Antigma's GitHub Homepage [https://github.com/AntigmaLabs](https://github.com/AntigmaLabs)*

## llama.cpp quantization
Quantized using <a href="https://github.com/ggml-org/llama.cpp">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b5215">b5215</a>.

Original model: https://huggingface.co/Qwen/Qwen3-1.7B-Base

Run the GGUF files directly with [llama.cpp](https://github.com/ggml-org/llama.cpp), or with any other llama.cpp-based project.
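
As a minimal sketch, a `llama-cli` invocation might look like the one below. The filename is the Q4_K_M entry from the table further down; the prompt and sampling flags are only illustrative:

```
# Plain text completion with the Q4_K_M quant from the table below
./llama-cli -m ./qwen3-1.7b-base-q4_k_m.gguf \
  -p "The three laws of robotics are" \
  -n 128 --temp 0.7
```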
## Prompt format
Qwen3-1.7B-Base is a base (pre-trained) model, so plain-text completion prompts work best. If you want to apply Qwen's ChatML-style chat template anyway, it looks like this:

```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
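
As an illustrative sketch, the template can be passed inline to `llama-cli`, using `-e` so the `\n` escapes in the prompt are interpreted:

```
# Hypothetical prompt; -e makes llama-cli process \n escapes in -p
./llama-cli -m ./qwen3-1.7b-base-q4_k_m.gguf -e \
  -p "<|im_start|>user\nWrite a haiku about autumn.<|im_end|>\n<|im_start|>assistant\n" \
  -n 64
```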
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split |
| -------- | ---------- | --------- | ----- |
| [qwen3-1.7b-base-q4_k_m.gguf](https://huggingface.co/Antigma/Qwen3-1.7B-Base-GGUF/blob/main/qwen3-1.7b-base-q4_k_m.gguf) | Q4_K_M | 1.03 GB | False |
| [qwen3-1.7b-base-q4_0.gguf](https://huggingface.co/Antigma/Qwen3-1.7B-Base-GGUF/blob/main/qwen3-1.7b-base-q4_0.gguf) | Q4_0 | 0.98 GB | False |
| [qwen3-1.7b-base-q4_k_s.gguf](https://huggingface.co/Antigma/Qwen3-1.7B-Base-GGUF/blob/main/qwen3-1.7b-base-q4_k_s.gguf) | Q4_K_S | 0.99 GB | False |

## Downloading using huggingface-cli
<details>
  <summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```
huggingface-cli download Antigma/Qwen3-1.7B-Base-GGUF --include "qwen3-1.7b-base-q4_k_m.gguf" --local-dir ./
```

If the model is larger than 50 GB, it will have been split into multiple files. To download them all to a local folder, run:

```
huggingface-cli download Antigma/Qwen3-1.7B-Base-GGUF --include "qwen3-1.7b-base-q4_k_m.gguf/*" --local-dir ./
```

You can either specify a new local-dir (e.g. Qwen3-1.7B-Base-GGUF) or download everything in place (./).
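
Note that none of the files in this repository are split (see the Split column above), so the filenames below are hypothetical. If you do download a split model, llama.cpp finds the remaining parts automatically when you point it at the first one:

```
# Hypothetical split filenames: pass only the first part to llama-cli
./llama-cli -m ./qwen3-1.7b-base-q4_k_m/qwen3-1.7b-base-q4_k_m-00001-of-00002.gguf \
  -p "Hello" -n 32
```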

</details>