Bitsandbytes 4-bit (NF4) quantization of https://huggingface.co/Qwen/Qwen2.5-14B-Instruct-1M, produced with the script below.

See https://huggingface.co/blog/4bit-transformers-bitsandbytes for background on 4-bit quantization with bitsandbytes.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# Define the 4-bit NF4 configuration with double quantization
# and bfloat16 as the compute dtype
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the pre-trained model, quantizing the weights to 4 bits on the fly
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-14B-Instruct-1M",
    quantization_config=nf4_config,
)

# Load the tokenizer associated with the model
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-14B-Instruct-1M")

# Push the quantized model and tokenizer to the Hugging Face Hub.
# token=True reads the token saved by `huggingface-cli login`
# (use_auth_token is deprecated in recent transformers releases).
model.push_to_hub("onekq-ai/Qwen2.5-14B-Instruct-1M-bnb-4bit", token=True)
tokenizer.push_to_hub("onekq-ai/Qwen2.5-14B-Instruct-1M-bnb-4bit", token=True)
```
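
Because the quantization config is stored alongside the checkpoint, the published model can be loaded back in 4-bit without re-specifying a `BitsAndBytesConfig`. A minimal usage sketch (a CUDA GPU with bitsandbytes installed is assumed, and the prompt is purely illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Loads directly in 4-bit; the bitsandbytes config ships with the repo
model = AutoModelForCausalLM.from_pretrained(
    "onekq-ai/Qwen2.5-14B-Instruct-1M-bnb-4bit",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("onekq-ai/Qwen2.5-14B-Instruct-1M-bnb-4bit")

# Rough check of the quantized model's memory footprint, in GB
print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")

# Illustrative prompt, formatted with the Qwen chat template
messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```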