# Qwen3-4B-Thinking-2507-GGUF

Static quants of [Qwen/Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507).
## Quants

| Link | URI | Quant | Size |
|---|---|---|---|
| GGUF | hf:giladgd/Qwen3-4B-Thinking-2507-GGUF:Q2_K | Q2_K | 1.7GB |
| GGUF | hf:giladgd/Qwen3-4B-Thinking-2507-GGUF:Q3_K_S | Q3_K_S | 1.9GB |
| GGUF | hf:giladgd/Qwen3-4B-Thinking-2507-GGUF:Q3_K_M | Q3_K_M | 2.1GB |
| GGUF | hf:giladgd/Qwen3-4B-Thinking-2507-GGUF:Q3_K_L | Q3_K_L | 2.2GB |
| GGUF | hf:giladgd/Qwen3-4B-Thinking-2507-GGUF:Q4_0 | Q4_0 | 2.4GB |
| GGUF | hf:giladgd/Qwen3-4B-Thinking-2507-GGUF:Q4_K_S | Q4_K_S | 2.4GB |
| GGUF | hf:giladgd/Qwen3-4B-Thinking-2507-GGUF:Q4_K_M | Q4_K_M | 2.5GB |
| GGUF | hf:giladgd/Qwen3-4B-Thinking-2507-GGUF:Q5_0 | Q5_0 | 2.8GB |
| GGUF | hf:giladgd/Qwen3-4B-Thinking-2507-GGUF:Q5_K_S | Q5_K_S | 2.8GB |
| GGUF | hf:giladgd/Qwen3-4B-Thinking-2507-GGUF:Q5_K_M | Q5_K_M | 2.9GB |
| GGUF | hf:giladgd/Qwen3-4B-Thinking-2507-GGUF:Q6_K | Q6_K | 3.3GB |
| GGUF | hf:giladgd/Qwen3-4B-Thinking-2507-GGUF:Q8_0 | Q8_0 | 4.3GB |
| GGUF | hf:giladgd/Qwen3-4B-Thinking-2507-GGUF:F16 | F16 | 8.1GB |
Download a quant using `node-llama-cpp`:

```bash
npx -y node-llama-cpp pull <URI>
```
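For example, to download the Q4_K_M quant from the table above:

```bash
npx -y node-llama-cpp pull hf:giladgd/Qwen3-4B-Thinking-2507-GGUF:Q4_K_M
```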
## Usage

### Use with `node-llama-cpp` (recommended)

Ensure you have Node.js installed:

```bash
brew install nodejs
```
#### CLI

Chat with the model:

```bash
npx -y node-llama-cpp chat hf:giladgd/Qwen3-4B-Thinking-2507-GGUF:Q4_K_M
```
#### Code

Use it in your project:

```bash
npm install node-llama-cpp
```

```typescript
import {getLlama, resolveModelFile, LlamaChatSession} from "node-llama-cpp";

const modelUri = "hf:giladgd/Qwen3-4B-Thinking-2507-GGUF:Q4_K_M";

// Download the model file if needed and load it
const llama = await getLlama();
const model = await llama.loadModel({
    modelPath: await resolveModelFile(modelUri)
});

// Create a context and start a chat session on one of its sequences
const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

const q1 = "Hi there, how are you?";
console.log("User: " + q1);

const a1 = await session.prompt(q1);
console.log("AI: " + a1);
```
Read the getting started guide to quickly scaffold a new `node-llama-cpp` project.
### Use with `llama.cpp`

Install `llama.cpp` through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```
#### CLI

```bash
llama-cli -hf giladgd/Qwen3-4B-Thinking-2507-GGUF:Q4_K_M -p "The meaning to life and the universe is"
```
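For an interactive chat instead of a one-shot completion, `llama-cli` also has a conversation mode; a minimal sketch, assuming a llama.cpp build where the `-cnv` flag enables it (check `llama-cli --help` on your version):

```bash
# Interactive chat using the model's built-in chat template
llama-cli -hf giladgd/Qwen3-4B-Thinking-2507-GGUF:Q4_K_M -cnv
```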
#### Server

```bash
llama-server -hf giladgd/Qwen3-4B-Thinking-2507-GGUF:Q4_K_M -c 2048
```
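Once running, `llama-server` exposes an OpenAI-compatible HTTP API (on port 8080 by default); a minimal chat request, assuming the default host and port:

```bash
# Send a chat completion request to the local server
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Hi there, how are you?"}
    ]
  }'
```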