bobig/Qwen2.5-Coder-1.5B-Instruct-Q6

This model works well as a draft model for speculative decoding in LM Studio 0.3.10 beta.

Try it with: mlx-community/Qwen2.5-14B-1M-YOYO-V2-Q4

You should see roughly 50% faster TPS (tokens per second) on math/code prompts. For a quick test, try: "count backwards from 100 to 1"
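
Outside LM Studio, you can reproduce the same pairing from the command line with mlx-lm's own speculative decoding. A minimal sketch, assuming a recent mlx-lm release that exposes the `--draft-model` flag:

```bash
# Hedged sketch: the target model generates, this Q6 quant drafts tokens.
# --draft-model assumes a recent mlx-lm with speculative decoding support.
python -m mlx_lm.generate \
  --model mlx-community/Qwen2.5-14B-1M-YOYO-V2-Q4 \
  --draft-model bobig/Qwen2.5-Coder-1.5B-Instruct-Q6 \
  --prompt "count backwards from 100 to 1" \
  --max-tokens 256
```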

Q4 was a little too dumb, Q8 was a little too slow... so Q6.

The model bobig/Qwen2.5-Coder-1.5B-Instruct-Q6 was converted to MLX format from Qwen/Qwen2.5-Coder-1.5B-Instruct using mlx-lm version 0.21.4.
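
A 6-bit conversion like this one can be reproduced with the `mlx_lm.convert` tool. The exact flags used for this repo are an assumption, so treat this as a sketch:

```bash
# Hypothetical reconstruction of the conversion step: quantize the
# original Hugging Face weights to 6-bit MLX format.
python -m mlx_lm.convert \
  --hf-path Qwen/Qwen2.5-Coder-1.5B-Instruct \
  -q --q-bits 6
```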

Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("bobig/Qwen2.5-Coder-1.5B-Instruct-Q6")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is defined.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
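
The same speculative decoding pairing works in Python by loading both models and handing the small one to `generate`. A sketch, assuming your mlx-lm version accepts a `draft_model` keyword (present in recent releases); both models must share a tokenizer, which holds within the Qwen2.5 family:

```python
from mlx_lm import load, generate

# The target model does the heavy lifting; the 1.5B Q6 quant drafts tokens.
model, tokenizer = load("mlx-community/Qwen2.5-14B-1M-YOYO-V2-Q4")
draft_model, _ = load("bobig/Qwen2.5-Coder-1.5B-Instruct-Q6")

messages = [{"role": "user", "content": "count backwards from 100 to 1"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# draft_model is the assumed keyword name for speculative decoding.
response = generate(model, tokenizer, prompt=prompt,
                    draft_model=draft_model, verbose=True)
```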