
LoRA version of ConicCat/Mistral-Small-3.2-AntiRep-24B

You can use the GGUF by passing `--lora Mistral-Small-3.2-AntiRep-24B-F16-LoRA.gguf` to llama.cpp, or by setting it as the text LoRA in KoboldCpp.
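A minimal llama.cpp invocation might look like the following. The base model filename here is a placeholder for whichever MS3.2 GGUF you are using; `--lora` applies the adapter on top of it at load time.

```shell
# Placeholder base model path; substitute your own MS3.2 GGUF quant.
./llama-cli \
  -m Mistral-Small-3.2-24B-Instruct.Q4_K_M.gguf \
  --lora Mistral-Small-3.2-AntiRep-24B-F16-LoRA.gguf \
  -p "Write a short story."
```

In KoboldCpp the equivalent is selecting the adapter file in the "Text LoRA" field of the model loader.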

It should also work on other MS3.2-based finetunes, though with no guarantees.

GGUF: 185M params, llama architecture, 16-bit (F16).
