A simple unalignment fine-tune on ~900k tokens aiming to make the model more compliant and willing to handle user requests.

This is the same unalignment training seen in concedo/Beepo-22B, so big thanks to concedo for the dataset.

The chat template is the same as the original model's: ChatML.
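For reference, a minimal sketch of building a ChatML prompt by hand (the `chatml_prompt` helper is hypothetical; in practice the tokenizer's built-in chat template does this for you):

```python
def chatml_prompt(messages):
    # Build a ChatML-formatted prompt from a list of
    # {"role": ..., "content": ...} message dicts.
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Leave the assistant turn open so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

Frontends that already support ChatML (or that read the chat template from `tokenizer_config.json`) need no extra configuration.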


Base model: Qwen/Qwen2.5-14B