Requesting Quants for Best Medical Model Under 15B - Baichuan-M1-14B-Instruct

#707
by grande-gram - opened

The BaichuanM1ForCausalLM architecture used by this model is unfortunately not currently supported by llama.cpp, so neither we nor anyone else can provide GGUF quants for it until support for this new architecture is implemented.
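For context, llama.cpp's HF-to-GGUF converter dispatches on the `architectures` field in a model's `config.json`; an unrecognized name means conversion is rejected outright. A minimal sketch of that check, where the set of supported names is an illustrative subset and not llama.cpp's actual registry:

```python
import json

def is_convertible(config_text: str, supported: set[str]) -> bool:
    """Return True if any architecture listed in config.json is supported."""
    config = json.loads(config_text)
    return any(arch in supported for arch in config.get("architectures", []))

# Illustrative subset of architecture names a converter might know about.
supported = {"LlamaForCausalLM", "BaichuanForCausalLM", "Qwen2ForCausalLM"}

# Baichuan-M1 advertises "BaichuanM1ForCausalLM", a new name absent from
# the registry, so the conversion step fails before quantization can start.
m1_config = '{"architectures": ["BaichuanM1ForCausalLM"]}'
print(is_convertible(m1_config, supported))  # → False
```

Once llama.cpp merges support for the new architecture name (and its attention/layer layout), the same conversion path would start accepting the model.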

mradermacher changed discussion status to closed
