This repository is a fork of the original almatkai/ingredientExtractor-Mistral-7b, with custom GGUF quantizations tailored to NeurochainAI's inference network. The models provided here are a core component of NeurochainAI's AI inference solutions.

NeurochainAI leverages these models to optimize and run inference across distributed networks, enabling efficient and robust language model processing across various platforms and devices.

Additionally, this repository includes custom LoRA adapters developed specifically for the Darkfrontiers and ImaginaryOnes game chatbots, enhancing AI interactions within those gaming environments.
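The GGUF quantizations (and optional LoRA adapters) can be run with any llama.cpp-compatible runtime. Below is a minimal sketch using llama-cpp-python; the model file name, adapter path, and the Mistral-instruct prompt template are assumptions for illustration, not file names taken from this repository.

```python
def build_prompt(recipe_text: str) -> str:
    """Wrap recipe text in a Mistral-instruct style prompt (assumed template)."""
    return f"[INST] Extract the ingredients from this recipe:\n{recipe_text} [/INST]"

if __name__ == "__main__":
    # Requires: pip install llama-cpp-python
    from llama_cpp import Llama

    llm = Llama(
        # Hypothetical file name -- substitute the GGUF file you downloaded.
        model_path="ingredient-extractor-mistral-7b-instruct-v0.1.Q8_0.gguf",
        lora_path=None,  # set to a game-chatbot LoRA adapter file to apply it
        n_ctx=2048,
    )
    result = llm(build_prompt("Mix 2 cups flour with 1 tsp salt."), max_tokens=128)
    print(result["choices"][0]["text"])
```

The same `Llama` constructor accepts a `lora_path` argument, which is how an adapter such as the game-chatbot LoRAs mentioned above would be layered on top of the base quantized model.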

Model details:

- Downloads last month: 27
- Format: GGUF (8-bit quantization)
- Model size: 7.24B params
- Architecture: llama
This model is not currently available via any of the supported Inference Providers, and it cannot be deployed to the HF Inference API because it has no library tag.
Model tree for neurochainai/ingredient-extractor-mistral-7b-instruct-v0.1: this model is one of 18 quantized variants of the base model.