Tags: Text Generation, GGUF, English, mixture of experts, Mixture of Experts, 8x3B, Llama 3.2 MOE, 128k context, creative, creative writing, fiction writing, plot generation, sub-plot generation, story generation, scene continue, storytelling, fiction story, science fiction, romance, all genres, story, writing, vivid prosing, vivid writing, fiction, roleplaying, bfloat16, swearing, rp, horror, mergekit, Inference Endpoints, conversational
Story writing
#2 opened by Noose1
Can we fine-tune it with mlx_lm.lora?
It is giving this error, and the model repo doesn't seem to have a config file.
Hi
You need the source files, here:
https://huggingface.co/DavidAU/Llama-3.2-8X3B-MOE-Dark-Champion-Instruct-uncensored-abliterated-18.4B
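For anyone hitting the same error: the GGUF repo has no config.json, so mlx_lm.lora has to be pointed at the full-weight source repo above. A minimal sketch of what that looks like (the data path is a placeholder, flag names assume a recent mlx-lm release, and this assumes mlx-lm can load this MoE architecture at all):

```python
# Sketch: LoRA fine-tuning against the full-weight source repo (not the GGUF).
# Placeholder data directory; mlx_lm.lora expects train.jsonl / valid.jsonl inside it.
import subprocess

subprocess.run(
    [
        "python", "-m", "mlx_lm.lora",
        "--model", "DavidAU/Llama-3.2-8X3B-MOE-Dark-Champion-Instruct-uncensored-abliterated-18.4B",
        "--train",
        "--data", "path/to/data",   # placeholder folder with train.jsonl / valid.jsonl
        "--batch-size", "1",        # small batch to keep memory use down
        "--iters", "200",
    ],
    check=True,
)
```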
Thanks, David.
Can you fine-tune it on custom data? Even with 64 GB of RAM, it is not starting.
Sorry, I cannot help here - please contact the software/app provider about fine-tuning.
I don't tune MOEs; just the models that go into them (on a case-by-case basis - otherwise I use merge/"dna" swap techniques).
Tuning MOEs is a bit more involved than tuning a single model.
I'm new to this. Is there anyone providing API access for this?