# mlx-community/Phi-3-vision-128k-instruct-4bit

This model was converted to MLX format from microsoft/Phi-3-vision-128k-instruct using mlx-vlm version 0.0.10. Refer to the original model card for more details on the model.

## Use with mlx

```bash
pip install -U mlx-vlm
python -m mlx_vlm.generate --model mlx-community/Phi-3-vision-128k-instruct-4bit --max-tokens 100 --temp 0.0
```
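Beyond the CLI, mlx-vlm can also be driven from Python. The sketch below is a minimal example assuming the `load`/`generate` helpers exposed by recent mlx-vlm releases; the exact `generate` signature and prompt-formatting helpers have changed across versions, so check the mlx-vlm README for your installed version. The image path and prompt are placeholders.

```python
# Minimal sketch of programmatic use via mlx-vlm (API names assumed from
# recent mlx-vlm releases; exact signatures vary between versions).
from mlx_vlm import load, generate

model_path = "mlx-community/Phi-3-vision-128k-instruct-4bit"
model, processor = load(model_path)  # downloads and loads the 4-bit weights

# Placeholder image and prompt for illustration; Phi-3-vision expects an
# <|image_1|> token where the image should be attended to.
image = "example.jpg"
prompt = "<|image_1|>\nDescribe this image."

output = generate(model, processor, prompt, image, max_tokens=100, temp=0.0)
print(output)
```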