Mistral 7B Instruct fine-tuned to produce scripts for arXflix.
Load the model directly with `mistral_inference`:

```python
from mistral_inference.model import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

tokenizer = MistralTokenizer.from_file("tokenizer.model.v3")  # change to extracted tokenizer file
model = Transformer.from_folder("mistral-7B-Instruct-v0.3")  # change to extracted model dir

# Apply the arXflix LoRA adapter on top of the base weights
model.load_lora("lora.safetensors")
```
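Continuing from the loading snippet above, generation follows the standard `mistral_inference` chat-completion flow. This is a sketch: the prompt text, `max_tokens` budget, and `temperature` are illustrative assumptions, not settings from the model authors.

```python
# Build a chat request from a user message (prompt content is a placeholder)
completion_request = ChatCompletionRequest(
    messages=[UserMessage(content="<paper abstract or arXiv text here>")]
)
tokens = tokenizer.encode_chat_completion(completion_request).tokens

# Sample a script from the fine-tuned model; parameter values are assumptions
out_tokens, _ = generate(
    [tokens],
    model,
    max_tokens=1024,
    temperature=0.7,
    eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id,
)
print(tokenizer.decode(out_tokens[0]))
```

Running this requires the extracted tokenizer, base weights, and LoRA adapter from the steps above to be present on disk.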
Model tree for flopsy1/mistral-7B-arXflix:
- Base model: mistralai/Mistral-7B-v0.3
- Fine-tuned from: mistralai/Mistral-7B-Instruct-v0.3