AWQ 4-bit quantization of deepseek-ai/DeepSeek-R1-Distill-Qwen-32B at commit 10d6a0388c80991c8fd8b54223146e7cbe33dfa5

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
commit_hash = "10d6a0388c80991c8fd8b54223146e7cbe33dfa5"

# Download the model and tokenizer pinned to the specific commit hash
model = AutoAWQForCausalLM.from_pretrained(model_name, revision=commit_hash)
tokenizer = AutoTokenizer.from_pretrained(model_name, revision=commit_hash)

# 4-bit weights, group size 128, asymmetric (zero-point) quantization, GEMM kernel
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Run AWQ calibration and quantize the weights in place
model.quantize(tokenizer, quant_config=quant_config)
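
The snippet above quantizes the model in memory but does not write anything to disk. A minimal sketch of the remaining save and reload steps, assuming an illustrative local output directory (the path name below is not from the original snippet):

# Illustrative output directory (an assumption, not from the original snippet)
quant_path = "DeepSeek-R1-Distill-Qwen-32B-AWQ-4bits-GEMM"

# Persist the quantized weights and the matching tokenizer
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)

# Reload the quantized checkpoint for inference with fused AWQ kernels
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)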
Format: Safetensors · Model size: 5.73B params · Tensor types: I32, FP16
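
The I32/FP16 mix is what an AWQ GEMM checkpoint looks like on disk: the 4-bit weights and zero points are packed into INT32 tensors (qweight, qzeros), while the per-group scales and unquantized modules stay in FP16. A sketch for verifying this by tallying a shard's tensor dtypes, assuming the huggingface_hub and safetensors packages are installed; the shard filename is an assumption, so check the repo's file list for the real names:

from collections import Counter
from huggingface_hub import hf_hub_download
from safetensors import safe_open

# Shard filename is a guess; list the repo files to find the actual shards
path = hf_hub_download(
    "MPWARE/DeepSeek-R1-Distill-Qwen-32B-AWQ-4bits-GEMM",
    "model-00001-of-00004.safetensors",
)

# Tally dtypes: expect torch.int32 for packed qweight/qzeros, torch.float16 for scales
with safe_open(path, framework="pt") as f:
    dtype_counts = Counter(str(f.get_tensor(key).dtype) for key in f.keys())
print(dtype_counts)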
