This is an imatrix GGUF conversion of xtuner/llava-llama-3-8b-v1_1-transformers.

Mainly intended for use as the text encoder for Hunyuan Video, but it can also be used for vision tasks together with the mmproj file from the xtuner GGUF repository.

The imatrix dataset was Bartowski's calibration_datav3.txt, applied to all quants below Q6_K. It was tested against both wikitext and a no-imatrix baseline and outperformed both.

Note that the vocab_size differs between the transformers repository (128,320) and the hf repository (128,256). This conversion uses the former, as that is what the official Hunyuan Video code uses.

IQ quants will be slow in ComfyUI, as they fall back to a numpy-based dequantization path.
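To sanity-check a downloaded quant without pulling in any GGUF library, the fixed GGUF file header (magic, version, tensor count, metadata KV count) can be parsed with a few `struct` calls. This is only an illustrative sketch; the byte string below is a synthetic stand-in, not data from the actual model file.

```python
import struct

def parse_gguf_header(buf: bytes) -> dict:
    """Parse the fixed GGUF header: magic, version, tensor count, KV count."""
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", buf, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "tensors": n_tensors, "metadata_kv": n_kv}

# Synthetic header (not from the real model file): GGUF v3, 2 tensors, 5 KV pairs
demo = struct.pack("<4sIQQ", b"GGUF", 3, 2, 5)
print(parse_gguf_header(demo))
```

In practice you would pass the first 24 bytes of the `.gguf` file; the metadata KV section that follows the header holds keys such as `tokenizer.ggml.tokens`, which is where the vocab size difference above shows up.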

Model size: 8.03B params · Architecture: llama

Quants available: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit

