- GuardReasoner: Towards Reasoning-based LLM Safeguards. arXiv:2501.18492, January 2025.
- Safeguard Fine-Tuned LLMs Through Pre- and Post-Tuning Model Merging. arXiv:2412.19512, December 2024.
- Course-Correction: Safety Alignment Using Synthetic Preferences. arXiv:2407.16637, July 2024.
- Refusal in Language Models Is Mediated by a Single Direction. arXiv:2406.11717, June 2024.