Model description

  • Developed by: teleboas
  • License: apache-2.0
  • Finetuned from model: unsloth/mistral-7b-v0.2-bnb-4bit

This is the GGUF version of the model, converted for use with the llama.cpp inference engine.
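
As a quick start, the sketch below shows one way to load a GGUF file from this repository with llama-cpp-python. It is a minimal sketch, not a definitive recipe: the exact GGUF filename and the Alpaca-style prompt format are assumptions and should be checked against the repository's file list and the original fine-tune.

```python
# Minimal sketch, assuming llama-cpp-python and huggingface_hub are installed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one GGUF file from this repository.
# NOTE: the filename below is hypothetical; check the repo's "Files" tab.
model_path = hf_hub_download(
    repo_id="teleboas/alpaca_mistral-7b-v0.2-GGUF",
    filename="alpaca_mistral-7b-v0.2.Q8_0.gguf",  # assumed filename
)

# Load the model with llama.cpp via its Python bindings.
llm = Llama(model_path=model_path, n_ctx=4096)

# Alpaca-style prompt (assumed from the model name; adjust if needed).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what GGUF is in one sentence.\n\n"
    "### Response:\n"
)
output = llm(prompt, max_tokens=128)
print(output["choices"][0]["text"])
```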

If you are looking for the transformers/fp16 model, it is available here: https://huggingface.co/teleboas/alpaca_mistral-7b-v0.2
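
If you go with the transformers/fp16 checkpoint, the sketch below shows one possible way to load it. It assumes a standard causal-LM setup, an installed accelerate package for device placement, and enough GPU memory for fp16 weights.

```python
# Minimal sketch for the transformers/fp16 version linked above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "teleboas/alpaca_mistral-7b-v0.2"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # fp16 weights
    device_map="auto",          # requires accelerate
)

# Alpaca-style prompt (assumed from the model name; adjust if needed).
inputs = tokenizer(
    "### Instruction:\nSay hello.\n\n### Response:\n",
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```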

Model details

  • Format: GGUF
  • Model size: 7.24B params
  • Architecture: llama
  • Quantization: 8-bit
