# mlx-community/Qwen2.5-VL-7B-Instruct-8bit

This model was converted to MLX format from Qwen/Qwen2.5-VL-7B-Instruct using mlx-vlm version 0.1.11. Refer to the original model card for more details on the model.

## Use with mlx

```shell
pip install -U mlx-vlm
python -m mlx_vlm.generate --model mlx-community/Qwen2.5-VL-7B-Instruct-8bit --max-tokens 100 --temp 0.0 --prompt "Describe this image." --image <path_to_image>
```