---
tags:
- not-for-all-audiences
---
# daybreak-kunoichi-2dpo-7b - EXL2 8bpw
This is an 8bpw EXL2 quant of crestf411/daybreak-kunoichi-2dpo-7b.

This quant was made using exllamav2-0.0.21 with the default calibration dataset.
The usable context limit seems to be 8k (the webui reports 16k by default, but output becomes incoherent past 8k; use alpha_value in the webui to scale to 16k).

I briefly tested this quant in a few random RPs (including one past 8k using alpha_value in the webui), and it seems to work fine.
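If you want to load the quant outside the webui, here is a minimal sketch using the exllamav2 Python API (around version 0.0.21, following the examples in the exllamav2 repo); the model path, sampler values, and the alpha scaling line are placeholders/assumptions, not settings confirmed for this model:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/models/daybreak-kunoichi-2dpo-7b-exl2-8bpw"  # placeholder path
config.prepare()
config.max_seq_len = 8192          # stays coherent up to 8k without scaling
# config.scale_alpha_value = 2.5   # assumption: NTK alpha scaling (webui alpha_value) for ~16k

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

# Alpaca-style prompt (see the Prompt Templates section below)
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short scene introduction.\n\n"
    "### Response:\n"
)
print(generator.generate_simple(prompt, settings, 200))
```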
## Prompt Templates
This model seems to use the Alpaca prompt format.
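For reference, the common single-turn Alpaca template looks like this (the exact wording is not confirmed by the original card, and many frontends omit the system line):

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```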
## Original readme below
Experimental model: a DPO training run on top of Kunoichi-DPO-v2-7b, i.e. double DPO.
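(Quanter's note: purely illustrative sketch of what a second DPO pass over Kunoichi-DPO-v2-7b could look like with Hugging Face TRL; the dataset, hyperparameters, and tooling below are assumptions, not the author's published recipe.)

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer  # TRL ~0.9-0.11 API

base = "SanjiWatsuki/Kunoichi-DPO-v2-7B"  # base model repo id (assumed)
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Preference pairs with "prompt", "chosen", and "rejected" columns (placeholder file)
dataset = load_dataset("json", data_files="preference_pairs.jsonl", split="train")

args = DPOConfig(
    output_dir="daybreak-kunoichi-2dpo-7b",
    beta=0.1,                        # DPO temperature; assumed value
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    learning_rate=5e-6,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,                  # TRL clones the policy as the frozen reference
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```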
Not suitable for any audience.