Can you upload your model to Ollama? I hope to use your model to run RAGflow. However, if I need to run it locally, it must be compatible with Ollama.
#19 by shaddock - opened
Hello @shaddock, thank you for your interest in using our models! Currently, our models are not compatible with Ollama. Here's why:
According to the Ollama Modelfile documentation, there are three primary ways to define a model:
- Building from an existing model
  - Our models are not derived from any of the existing models supported by Ollama.
- Building from a Safetensors model of a supported architecture
  - Ollama currently supports Safetensors models for the following architectures:
    - Llama (including Llama 2, Llama 3, Llama 3.1, and Llama 3.2)
    - Mistral (including Mistral 1, Mistral 2, and Mixtral)
    - Gemma (including Gemma 1 and Gemma 2)
    - Phi3
  - Our models do not conform to any of these supported architectures.
- Building from a GGUF file
  - Ollama allows defining models using GGUF files, but we have not yet provided our models in this format (see the sketch after this list).
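
To make the GGUF path concrete, the sketch below shows roughly what a Modelfile for that approach would look like. It is purely illustrative: `minimax-model.gguf` is a hypothetical placeholder (no such GGUF export is published today), and the parameter and template values are generic Modelfile examples rather than settings tuned for our models.

```
# Hypothetical Modelfile for the GGUF path.
# "minimax-model.gguf" is a placeholder file name; no such file is published today.
FROM ./minimax-model.gguf

# Illustrative generation parameters (not tuned for any specific model).
PARAMETER temperature 0.7
PARAMETER num_ctx 4096

# A chat template matching the model's expected prompt format would also be needed.
TEMPLATE """{{ .System }}{{ .Prompt }}"""
```

If such a file existed, it could be registered with `ollama create my-model -f Modelfile` and then run locally with `ollama run my-model`.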
At the moment, we have no concrete plans to support Ollama. If you have ideas or specific requirements that would improve the compatibility or functionality of our models with Ollama, please feel free to share them with us.
MiniMax-AI changed discussion status to closed