DanteLLM

DanteLLM is a Large Language Model developed at Sapienza University of Rome. In October 2023 we submitted a paper titled "DanteLLM: Let's Push Italian LLM Research Forward! 🤌 🇮🇹".

The paper was accepted with review scores of 5, 4, and 4 (out of 5).

How to run the model (Ollama)

This repo contains the model in GGUF format. You can run DanteLLM on Ollama following these steps:

Make sure you have Ollama correctly installed and ready to use.

Then, you can download DanteLLM's weights using:

```shell
huggingface-cli download rstless-research/DanteLLM-7B-Instruct-Italian-v0.1-GGUF \
  dantellm-merged-hf.q8_0.gguf Modelfile \
  --local-dir . --local-dir-use-symlinks False
```

Load the model using:

```shell
ollama create dante -f Modelfile
```
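The Modelfile downloaded above tells Ollama where the GGUF weights live and how to run them. For orientation, a minimal Modelfile for a GGUF checkpoint looks roughly like this; the Modelfile shipped in this repo may set additional templates or parameters:

```text
# Point Ollama at the local GGUF weights file
FROM ./dantellm-merged-hf.q8_0.gguf

# Optional generation parameter (illustrative value)
PARAMETER temperature 0.7
```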

Finally, to run the model, use:

```shell
ollama run dante
```
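Besides the interactive `ollama run` session, a locally running Ollama server also exposes a REST API. The sketch below queries the `dante` model through that API using only the Python standard library; it assumes Ollama's default endpoint `http://localhost:11434` and the model name created above, and the prompt is just an example:

```python
# Minimal sketch of querying a local Ollama server over its REST API.
# Assumes Ollama is running at the default http://localhost:11434 and that
# the "dante" model was created with `ollama create dante -f Modelfile`.
import json
import urllib.request


def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )


def ask_dante(prompt: str) -> str:
    """Send the prompt and return the model's reply (needs a running server)."""
    req = build_generate_request("dante", prompt)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example usage (requires Ollama running locally):
# print(ask_dante("Chi era Dante Alighieri?"))
```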

Authors

  • Andrea Bacciu* (work done prior to joining Amazon)
  • Cesare Campagnano*
  • Giovanni Trappolini
  • Prof. Fabrizio Silvestri

* Equal contribution

Model details

  • Format: GGUF
  • Model size: 7.24B params
  • Architecture: llama
  • Quantization: 8-bit

