---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- llama
- dpo
- preference-optimization
- peft
- instruction-tuning
pipeline_tag: text-generation
---
# DPO Fine-Tuned Adapter - LLM Judge Dataset
## 🧠 Model
- Base: `meta-llama/Llama-3.2-1B-Instruct`
- Fine-tuned as a LoRA adapter (via PEFT) using TRL's `DPOTrainer` on the LLM Judge preference dataset (50 preference pairs)
## ⚙️ Training Parameters
| Parameter | Value |
|-----------------------|---------------|
| Learning Rate | 5e-5 |
| Batch Size | 4 |
| Epochs | 3 |
| Beta (DPO KL coefficient)| 0.1 |
| Max Input Length | 1024 tokens |
| Max Prompt Length | 512 tokens |
| Padding Token | `eos_token` |
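
The sketch below shows how these parameters map onto TRL's `DPOConfig` and `DPOTrainer`. It is a reconstruction rather than the exact training script: the LoRA settings (`r`, `lora_alpha`) are assumptions the card does not specify, and some argument names vary across TRL versions (e.g. `processing_class` was called `tokenizer` in older releases).

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "meta-llama/Llama-3.2-1B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # padding token = eos_token

model = AutoModelForCausalLM.from_pretrained(base)

# LoRA hyperparameters are assumptions; the card does not state rank/alpha/targets.
peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")

args = DPOConfig(
    output_dir="dpo-llmjudge-lora-adapter",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    num_train_epochs=3,
    beta=0.1,               # DPO KL coefficient
    max_length=1024,        # max input length (prompt + completion)
    max_prompt_length=512,  # max prompt length
)

train_dataset = load_dataset("csv", data_files="llm_judge_preferences.csv")["train"]

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```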
## 📦 Dataset
- Source: `llm_judge_preferences.csv`
- Size: 50 human-labeled preference pairs, each with `prompt`, `chosen`, and `rejected` columns (see the schema check below)
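
A quick way to sanity-check the expected schema, assuming the CSV is in the working directory:

```python
from datasets import load_dataset

# Each row is one preference pair: a prompt plus the judge-preferred
# ("chosen") and judge-dispreferred ("rejected") responses.
ds = load_dataset("csv", data_files="llm_judge_preferences.csv")["train"]
print(ds.column_names)  # expected: ['prompt', 'chosen', 'rejected']
print(len(ds))          # expected: 50
```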
## 📂 Output
- LoRA adapter weights saved and pushed to the Hugging Face Hub as `Likhith003/dpo-llmjudge-lora-adapter`
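## 🚀 Usage
A minimal inference sketch using standard `transformers` + `peft` loading; the prompt below is only an example:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-3.2-1B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Attach the DPO-trained LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(model, "Likhith003/dpo-llmjudge-lora-adapter")

inputs = tokenizer("Explain preference optimization in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```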