---
base_model:
- summykai/gemma3-27b-abliterated-dpo
base_model_relation: quantized
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: gemma
language:
- en
datasets:
- mlabonne/orpo-dpo-mix-40k
---
Quantized with exllamav3 (0.0.3) using its default quantization process.
- Original model: https://huggingface.co/summykai/gemma3-27b-abliterated-dpo
- exllamav3: https://github.com/turboderp-org/exllamav3
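
For reference, a minimal sketch of pulling the EXL3 weights from the Hugging Face Hub with `huggingface_hub`. The repository id below is a placeholder for this quantized upload, not a confirmed name, and running the model afterwards requires an exllamav3-compatible backend (see the exllamav3 repository linked above).

```python
# Minimal sketch: download the quantized weights from the Hugging Face Hub.
# The repo_id is a placeholder; substitute the actual repository name of this quant.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="your-username/gemma3-27b-abliterated-dpo-exl3",  # placeholder repo id
    local_dir="gemma3-27b-abliterated-dpo-exl3",
)
print(f"EXL3 quant downloaded to: {local_path}")
```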
# Uploaded finetuned model
- Developed by: Summykai
- License: apache-2.0
- Finetuned from model: mlabonne/gemma-3-27b-it-abliterated
This Gemma 3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.