---
license: apache-2.0
datasets:
- open-r1/OpenR1-Math-220k
- yentinglin/s1K-1.1-trl-format
- simplescaling/s1K-1.1
language:
- en
metrics:
- accuracy
base_model: yentinglin/Mistral-Small-24B-Instruct-2501-reasoning
pipeline_tag: text-generation
tags:
- reasoning
- mlx
- mlx-my-repo
model-index:
- name: yentinglin/Mistral-Small-24B-Instruct-2501-reasoning
  results:
  - task:
      type: text-generation
    dataset:
      name: MATH-500
      type: MATH
    metrics:
    - type: pass@1
      value: 0.95
      name: pass@1
      verified: false
    source:
      url: >-
        https://huggingface.co/spaces/yentinglin/zhtw-reasoning-eval-leaderboard
      name: yentinglin/zhtw-reasoning-eval-leaderboard
  - task:
      type: text-generation
    dataset:
      name: AIME 2025
      type: AIME
    metrics:
    - type: pass@1
      value: 0.5333
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.6667
      name: pass@1
      verified: false
    source:
      url: >-
        https://huggingface.co/spaces/yentinglin/zhtw-reasoning-eval-leaderboard
      name: yentinglin/zhtw-reasoning-eval-leaderboard
  - task:
      type: text-generation
    dataset:
      name: GPQA Diamond
      type: GPQA
    metrics:
    - type: pass@1
      value: 0.62022
      name: pass@1
      verified: false
    source:
      url: >-
        https://huggingface.co/spaces/yentinglin/zhtw-reasoning-eval-leaderboard
      name: yentinglin/zhtw-reasoning-eval-leaderboard
---
# johnjadensmith112/Mistral-Small-24B-Instruct-2501-reasoning-Q4-mlx
The model [johnjadensmith112/Mistral-Small-24B-Instruct-2501-reasoning-Q4-mlx](https://huggingface.co/johnjadensmith112/Mistral-Small-24B-Instruct-2501-reasoning-Q4-mlx) was converted to MLX format from [yentinglin/Mistral-Small-24B-Instruct-2501-reasoning](https://huggingface.co/yentinglin/Mistral-Small-24B-Instruct-2501-reasoning) using mlx-lm version **0.20.5**.
## Use with mlx

```bash
pip install mlx-lm
```
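Installing the package also provides a small command-line generator, which is handy for a quick smoke test before writing any Python. Flag names below follow recent mlx-lm releases; verify against `mlx_lm.generate --help` for your installed version:

```bash
mlx_lm.generate --model johnjadensmith112/Mistral-Small-24B-Instruct-2501-reasoning-Q4-mlx \
  --prompt "hello" --max-tokens 256
```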
```python
from mlx_lm import load, generate

# Downloads the weights on first use, then loads them from the local cache.
model, tokenizer = load("johnjadensmith112/Mistral-Small-24B-Instruct-2501-reasoning-Q4-mlx")

prompt = "hello"

# Wrap the prompt in the model's chat template when the tokenizer provides one.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
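The call above uses mlx-lm's defaults (greedy decoding and a modest token budget), which can truncate the long chains of thought this reasoning model tends to produce. Below is a minimal sketch of overriding them, assuming mlx-lm ≥ 0.20 where `generate` forwards `max_tokens` and a `sampler` built by `mlx_lm.sample_utils.make_sampler`; these keyword names have moved between releases, so treat this as version-dependent rather than a fixed API:

```python
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

model, tokenizer = load("johnjadensmith112/Mistral-Small-24B-Instruct-2501-reasoning-Q4-mlx")

# Light sampling; temperature/top_p values here are illustrative, not tuned.
sampler = make_sampler(temp=0.6, top_p=0.95)

messages = [{"role": "user", "content": "Solve: what is 12 * 13?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Raise max_tokens well above the default so the reasoning trace isn't cut off.
response = generate(
    model,
    tokenizer,
    prompt=prompt,
    max_tokens=2048,
    sampler=sampler,
    verbose=True,
)
```

If your mlx-lm version rejects these keywords with a `TypeError`, consult that version's `generate`/`stream_generate` signatures, as the sampling interface has changed across releases.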