This model is a quantized version of PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct, converted to the OpenVINO format. It was obtained via the nncf-quantization space with optimum-intel.

First, make sure you have optimum-intel with OpenVINO support installed:

pip install optimum[openvino]

To load the model:

from optimum.intel import OVModelForCausalLM

model_id = "elucidator8918/Llama-3-Patronus-Lynx-8B-Instruct-openvino-4bit"
model = OVModelForCausalLM.from_pretrained(model_id)
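
For a quick end-to-end check, the sketch below runs text generation with the loaded model. It assumes the repository ships the usual tokenizer files alongside the OpenVINO weights; the prompt is only illustrative.

from transformers import AutoTokenizer
from optimum.intel import OVModelForCausalLM

model_id = "elucidator8918/Llama-3-Patronus-Lynx-8B-Instruct-openvino-4bit"

# Assumes the tokenizer is available in the same repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)

# Tokenize an example prompt and generate with the OpenVINO runtime
inputs = tokenizer("What is hallucination detection?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))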