Model Card for dragon-yi-answer-tool

dragon-yi-answer-tool is a quantized version of DRAGON Yi 6B, with 4-bit (Q4_K_M) GGUF quantization, providing a fast, small inference implementation for use on CPUs.
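As a rough, back-of-envelope illustration of why the quantized model is CPU-friendly (assuming ~4.5 bits per weight on average for the 4-bit K-quant mix, a common approximation rather than an official figure):

```python
# Back-of-envelope GGUF size estimate -- an illustration, not an official figure.
# Assumes ~4.5 bits per weight on average for Q4_K_M quantization.
params = 6.06e9          # parameter count reported for this model
bits_per_weight = 4.5    # assumed average for the 4-bit K-quant mix

size_gb = params * bits_per_weight / 8 / 1e9
print(f"~{size_gb:.1f} GB quantized, vs ~{params * 2 / 1e9:.1f} GB at fp16")
```

Roughly a 3-4x reduction over fp16 weights, which is what makes loading and running the model on commodity CPU hardware practical.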

dragon-yi-6b is a fact-based question-answering model, optimized for complex business documents.

To pull the model via API:

from huggingface_hub import snapshot_download
snapshot_download("llmware/dragon-yi-answer-tool", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)
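A quick sanity check after the download: the snippet below defines a small hypothetical helper (find_gguf is not part of huggingface_hub or llmware) that confirms the quantized weights actually landed in the target directory.

```python
from pathlib import Path

def find_gguf(local_dir):
    """Return any .gguf files under local_dir (hypothetical helper, not a library API)."""
    return sorted(Path(local_dir).glob("*.gguf"))

# After snapshot_download completes, the quantized weights should be present, e.g.:
# gguf_files = find_gguf("/path/on/your/machine/")
# assert gguf_files, "download did not produce a .gguf file"
```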

Load in your favorite GGUF inference engine, or try with llmware as follows:

from llmware.models import ModelCatalog

# load the quantized model from the llmware model catalog
model = ModelCatalog().load_model("dragon-yi-answer-tool")

# query is your question; text_sample is the source passage to answer from
response = model.inference(query, add_context=text_sample)

Note: please review config.json in the repository for prompt wrapping information, model details, and the full test set.

Model Description

  • Developed by: llmware
  • Model type: GGUF
  • Language(s) (NLP): English
  • License: Yi Community License
  • Quantized from model: llmware/dragon-yi

Model Card Contact

Darren Oberst & llmware team

  • Model size: 6.06B params
  • Architecture: llama
