ggml_llava-v1.5-7b
This repo contains GGUF files for running llava-v1.5-7b inference with llama.cpp end to end, without any extra dependencies.
Note: The mmproj-model-f16.gguf file structure is experimental and may change. Always use the latest code in llama.cpp.
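For reference, a minimal invocation could look like the sketch below. The exact binary name and the language-model GGUF file name depend on your llama.cpp build and on which quantization you downloaded; both are assumptions here, not fixed by this repo.

```sh
# Minimal sketch: run the llava example from a llama.cpp checkout.
# The model file name (ggml-model-q4_k.gguf) is illustrative; substitute the
# GGUF you actually downloaded. mmproj-model-f16.gguf is the projector file
# from this repo.
./llava-cli \
  -m ggml-model-q4_k.gguf \
  --mmproj mmproj-model-f16.gguf \
  --image your_image.jpg \
  -p "Describe this image in detail."
```

Newer llama.cpp versions may ship the binary under a different name, so check the examples in your checkout if `llava-cli` is not present.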