Update README.md
README.md CHANGED
@@ -14,7 +14,7 @@ tags:
*Antigma's GitHub Homepage [https://github.com/AntigmaLabs](https://github.com/AntigmaLabs)*

## llama.cpp quantization
- Using <a href="https://github.com/ggml-org/llama.cpp">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b5215">
+ Using <a href="https://github.com/ggml-org/llama.cpp">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b5215">b5215</a> for quantization.
Original model: https://huggingface.co/Qwen/Qwen3-1.7B-Base
Run them directly with [llama.cpp](https://github.com/ggml-org/llama.cpp), or any other llama.cpp based project
## Prompt format
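For reference, a minimal sketch of running one of these quants with the llama.cpp CLI from the release linked above. The filename `Qwen3-1.7B-Base-Q4_K_M.gguf` is a hypothetical example; substitute whichever GGUF file you downloaded from this repo.

```bash
# Minimal sketch, assuming llama.cpp has been built from release b5215
# and a quantized GGUF has been downloaded locally.
# The model filename below is hypothetical; use your actual file.
./llama-cli \
  -m Qwen3-1.7B-Base-Q4_K_M.gguf \
  -p "The capital of France is" \
  -n 64   # number of tokens to generate
```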