This model was converted to MLX format and quantized to 4 bits (Q4) from cognitivecomputations/Dolphin3.0-R1-Mistral-24B using mlx-lm version 0.4.0. Refer to the original model card for more details on the model.
## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("jesusoctavioas/Dolphin3.0-R1-Mistral-24B-MLX-Q4")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
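Dolphin3.0-R1 is a chat model, so prompts generally work better when formatted with the tokenizer's chat template rather than passed as raw text. A minimal sketch, assuming the converted tokenizer carries the upstream chat template (the message content here is illustrative):

```python
from mlx_lm import load, generate

model, tokenizer = load("jesusoctavioas/Dolphin3.0-R1-Mistral-24B-MLX-Q4")

prompt = "hello"

# If the tokenizer ships a chat template, wrap the prompt in it so the
# model sees the conversation markup it was fine-tuned on.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```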
Original model: https://huggingface.co/cognitivecomputations/Dolphin3.0-R1-Mistral-24B/tree/main
Base model: mistralai/Mistral-Small-24B-Base-2501