This model is an INT4 weight-quantized version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0), exported to the OpenVINO format using optimum-intel via the nncf-quantization space.
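For reference, a conversion like this can be reproduced locally with optimum-intel's Python API once the dependencies shown below are installed. The sketch assumes default INT4 weight-only compression settings; the exact options used by the nncf-quantization space may differ:

```python
from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly;
# bits=4 requests INT4 weight-only compression via NNCF (assumed settings).
model = OVModelForCausalLM.from_pretrained(
    base_id,
    export=True,
    quantization_config=OVWeightQuantizationConfig(bits=4),
)
model.save_pretrained("TinyLlama-1.1B-Chat-v1.0-openvino-int4")
```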
First, make sure you have optimum-intel installed with OpenVINO support:

```bash
pip install optimum[openvino]
```
You can then load the model as follows:

```python
from optimum.intel import OVModelForCausalLM

model_id = "NikolayL/TinyLlama-1.1B-Chat-v1.0-openvino-int4"
model = OVModelForCausalLM.from_pretrained(model_id)
```
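A short usage sketch follows; the prompt and generation settings are illustrative, not part of this model card:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Build a chat-formatted prompt and generate a reply on CPU.
messages = [{"role": "user", "content": "What is OpenVINO?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```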