Decensored using a custom training script guided by activations. It's similar in spirit to ablation/"abliteration" scripts, but not exactly the same approach.
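
For background, abliteration-style scripts typically start by estimating a "refusal direction" from hidden-state activations, contrasting prompts the model refuses with prompts it answers. The sketch below shows that general idea with PyTorch and 🤗 Transformers; the model id and prompt lists are placeholders, and this is not the exact DeLMAT procedure (see the repo link below).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # stand-in model; not necessarily Reverb's base
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

refused_prompts = ["example prompt the model refuses"]    # placeholder data
harmless_prompts = ["example prompt the model answers"]   # placeholder data

@torch.no_grad()
def mean_last_token_state(prompts, layer=-1):
    """Average hidden state of the final prompt token at one layer."""
    states = []
    for p in prompts:
        ids = tok(p, return_tensors="pt").to(model.device)
        out = model(**ids, output_hidden_states=True)
        states.append(out.hidden_states[layer][0, -1].float())
    return torch.stack(states).mean(dim=0)

# The difference of means points roughly along the "refusal" behaviour.
# Ablation scripts remove this direction from the weights directly; an
# activation-guided training script can instead use it as a loss signal.
refusal_dir = mean_last_token_state(refused_prompts) - mean_last_token_state(harmless_prompts)
refusal_dir = refusal_dir / refusal_dir.norm()
```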

The training script is released under the MIT license: https://github.com/nkpz/DeLMAT

This was kind of tough to tune, but the process led to some useful features in the training script, such as an additional loss computed from "ground truth" answers that helps the model retain its original capabilities.
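
Concretely, that retention term can be thought of as a plain language-modeling loss on known-good answers mixed into the decensoring objective. Below is a minimal sketch under that assumption; the function names and the `alpha` weighting are illustrative, not DeLMAT's actual implementation.

```python
def retention_loss(model, batch):
    """Standard next-token cross-entropy on "ground truth" answers,
    included only so the model keeps its original capabilities.
    (Simplified: padding tokens would normally be masked to -100.)"""
    out = model(input_ids=batch["input_ids"],
                attention_mask=batch["attention_mask"],
                labels=batch["input_ids"])
    return out.loss

def total_loss(decensor_loss, model, ground_truth_batch, alpha=0.5):
    """Illustrative mix of a decensoring objective (computed elsewhere,
    e.g. from activations) and the retention term; `alpha` is a guess,
    not the weighting the actual script uses."""
    return alpha * decensor_loss + (1.0 - alpha) * retention_loss(model, ground_truth_batch)
```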

It may occasionally have formatting issues in its responses, but it usually works well enough. Not perfect, but this is as far as I got before I felt like moving on 🤷‍♂️

Reverb doesn't claim to be a reasoning model, but in my experience it takes an introspective and analytical approach to answering prompts. Impressive for a new 7B model!
