Pre-trained model fine-tuned with Reinforcement Learning on the DIALOCONAN dataset, using facebook/roberta-hate-speech-dynabench-r4-target as the reward model.
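As a rough sketch of the idea, the reward model's classifier output can be collapsed into a scalar reward for PPO-style fine-tuning. The label order below (index 0 = "nothate", index 1 = "hate") is an assumption based on the common convention for facebook/roberta-hate-speech-dynabench-r4-target; check the model's `config.json` before relying on it.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def reward_from_logits(logits, nothate_index=0):
    """Scalar reward in [0, 1]: the classifier's probability
    of the non-hateful class for the generated text."""
    return softmax(logits)[nothate_index]
```

In an RL loop, each generated continuation would be scored by the classifier and this reward fed into the policy-gradient update.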

Toxicity results on the allenai/real-toxicity-prompts dataset using custom prompts (see πŸ₯žRewardLM for details):

| RedPajama-INCITE-Chat-3B | Toxicity Level |
|---|---|
| Pre-Trained | 0.217 |
| Fine-Tuned | 0.129 |
| RL (this model) | 0.160 |
