GGUF Naming Convention
#3
opened by stevugnin
Could you please rename the files in the model repo to follow this convention? DeepHermes-3-Llama-3-8B.{quantization}
Ollama requires the GGUF files in a repository to follow this convention in order to download them.
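For illustration, here is a rough sketch of how the quantization suffix in a filename maps to a tag, using huggingface_hub's list_repo_files (the repo ID is only an example taken from the link later in this thread; nothing here is the actual Ollama resolution logic):

```python
# Rough sketch: list the GGUF files in a repo and extract the quantization
# suffix, so a tag such as Q4_K_M can be mapped to a concrete file.
# Requires huggingface_hub; REPO_ID is an example, swap in your own repo.
import re
from huggingface_hub import list_repo_files

REPO_ID = "bartowski/Llama-3.2-1B-Instruct-GGUF"  # example repo

# Accepts "<base>.<quant>.gguf" (the convention requested above) as well as
# the "<base>-<quant>.gguf" form used by repos such as bartowski's.
PATTERN = re.compile(r"^(?P<base>.+)[.-](?P<quant>[A-Za-z0-9_]+)\.gguf$")

for name in list_repo_files(REPO_ID):
    if not name.endswith(".gguf"):
        continue
    m = PATTERN.match(name)
    if m:
        print(f"{name} -> quantization tag {m.group('quant')}")
    else:
        print(f"{name} -> no recognizable quantization suffix")
```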
I need a reference
teknium changed discussion status to closed
I'm able to download, but it seems choosing a quantization is not possible right now.
Maybe there's something in bartowski's Llama 3.2 repo you can use: https://huggingface.co/bartowski/Llama-3.2-1B-Instruct-GGUF/tree/main. It is possible to choose the tag there.
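If picking a quantization through the UI isn't working, a single quantized file can also be fetched directly with huggingface_hub's hf_hub_download (sketch below; the filename is an assumed example following bartowski's naming, so check the repo's file list for the exact name):

```python
# Rough sketch: download one specific quantized GGUF file from the repo.
# The filename is an assumption based on bartowski's naming scheme.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="bartowski/Llama-3.2-1B-Instruct-GGUF",
    filename="Llama-3.2-1B-Instruct-Q4_K_M.gguf",  # adjust to the actual file
)
print(local_path)  # path to the cached GGUF file on disk
```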