Quantized using 200 samples of 8192 tokens from an RP-oriented PIPPA dataset.

Branches:

  • main -- measurement.json
  • 2.25b6h -- 2.25bpw, 6bit lm_head
  • 3.7b6h -- 3.7bpw, 6bit lm_head
  • 6b6h -- 6bpw, 6bit lm_head
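To pick a branch for your VRAM budget, a rough file-size estimate is just parameters × bits-per-weight ÷ 8. This is a sketch only: the ~46.7B total parameter count for Mixtral 8x7B is an assumption, and it ignores the separately-quantized lm_head and container overhead.

```python
# Rough quant-size estimate: params * bpw / 8 bytes.
# PARAMS (~46.7B for Mixtral 8x7B) is an assumption; real files will differ
# slightly due to the 6-bit lm_head and format overhead.
PARAMS = 46.7e9

def est_gb(bpw: float, params: float = PARAMS) -> float:
    """Estimated weight size in decimal gigabytes for a given bpw."""
    return params * bpw / 8 / 1e9

for bpw in (2.25, 3.7, 6.0):
    print(f"{bpw} bpw ≈ {est_gb(bpw):.1f} GB")
```

This is before KV cache and activation memory, so leave headroom when choosing.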

Requires ExLlamaV2 version 0.0.12 or later.

Original model link: Sao10K/Solstice-Mixtral-v1

Original model README below.



GGUF: https://huggingface.co/Sao10K/Solstice-Mixtral-v1-GGUF

Solstice-11B-v1 but on Mixtral. More info there.

Experimental. May or may not be good; Mixtral training is... difficult to work with.

Trained with the Vicuna / ShareGPT format, but Alpaca Instruct should work fine too.
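For reference, minimal builders for the two prompt formats mentioned above. The exact system-prompt wording and spacing are assumptions; match whatever your frontend (e.g. SillyTavern) emits by default.

```python
# Sketches of the two prompt formats named above. The system-prompt text and
# whitespace conventions here are assumptions, not taken from the model card.
def vicuna_prompt(user_msg: str, system: str = "A chat.") -> str:
    """Vicuna / ShareGPT style: USER / ASSISTANT turns after a system line."""
    return f"{system}\n\nUSER: {user_msg}\nASSISTANT:"

def alpaca_prompt(instruction: str) -> str:
    """Alpaca Instruct style: ### Instruction / ### Response headers."""
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

print(vicuna_prompt("Describe the weather."))
print(alpaca_prompt("Describe the weather."))
```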


As per usual, it handles NSFW scenarios fine; after all, it was trained on lewd outputs. One odd behaviour: it can be reluctant in zero-shot settings, but in actual roleplays / usage? It's fine.

Pretty nice. Vicuna gave slightly better outputs than Alpaca, though the difference may be minor.

I like that it stays in character.

I like using Universal-Light preset in SillyTavern.


I really appreciate your feedback / supportive comments. They keep me going.


Support me here :)
