---
license: apache-2.0
base_model:
- anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only
pipeline_tag: text-generation
---
# zletpm/Mistral-Small-3.2-24B-Instruct-2506-Text-Only-MLX-4.5bit
This model, `zletpm/Mistral-Small-3.2-24B-Instruct-2506-Text-Only-MLX-4.5bit`, was converted to MLX format from `anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only` using `mlx-lm` version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
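For a quick smoke test from the shell, `mlx-lm` also installs a small CLI. A minimal invocation might look like this (assuming the `mlx_lm.generate` entry point that recent `mlx-lm` releases provide):

```bash
# Generate a short completion directly from the command line
mlx_lm.generate --model zletpm/Mistral-Small-3.2-24B-Instruct-2506-Text-Only-MLX-4.5bit \
  --prompt "hello" --max-tokens 100
```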
```python
from mlx_lm import load, generate

model, tokenizer = load("zletpm/Mistral-Small-3.2-24B-Instruct-2506-Text-Only-MLX-4.5bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
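For interactive use you may prefer streaming output instead of waiting for the full response. A minimal sketch, assuming the `stream_generate` API available in `mlx-lm` 0.25.x (where each yielded response carries a `.text` chunk):

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("zletpm/Mistral-Small-3.2-24B-Instruct-2506-Text-Only-MLX-4.5bit")

messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print each chunk as soon as it is generated
for response in stream_generate(model, tokenizer, prompt, max_tokens=256):
    print(response.text, end="", flush=True)
print()
```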