Resources for EMNLP 2024 Paper: Calibrating LLMs with Preference Optimization on Thought Trees for Generating Rationale in Science Question Scoring
J Li (jiazhengli)

AI & ML interests: None yet
Recent Activity
- Upvoted an article (3 days ago): "Preference Optimization Techniques for Large Models: DPO and Its Variants" (大模型偏好优化技术:DPO及其变种)
- Upvoted a paper (about 1 month ago): "Sigma: Differential Rescaling of Query, Key and Value for Efficient Language Models"
- Liked a model (4 months ago): tencent/Tencent-Hunyuan-Large
Organizations: None yet
Collections (3)
Resources for EMNLP 2024 Paper: Eliminating Biased Length Reliance of Direct Preference Optimization via Down-Sampled KL Divergence
- jiazhengli/Pythia-2.8B-HH-RLHF-Iterative-SamPO • Text Generation • Updated • 60
- jiazhengli/Pythia-2.8B-TLDR-Iterative-SamPO • Text Generation • Updated • 65
- Junrulu/Llama-3-8B-Instruct-Iterative-SamPO • Text Generation • Updated • 9 • 1
- Eliminating Biased Length Reliance of Direct Preference Optimization via Down-Sampled KL Divergence • Paper • 2406.10957 • Published • 1
Models (8)
- jiazhengli/long-t5-tglobal-large-AERA • Text2Text Generation • Updated • 120
- jiazhengli/Mixtral-8x7B-Instruct-v0.1-QLoRA-Assessment-Rationale-dpo • Updated • 5
- jiazhengli/Mixtral-8x7B-Instruct-v0.1-QLoRA-Assessment-Rationale-sft • Updated • 1
- jiazhengli/Meta-Llama-3-8B-QLoRA-Assessment-Rationale-sft • Updated • 3
- jiazhengli/Meta-Llama-3-8B-QLoRA-Assessment-Rationale-dpo • Updated • 2 • 1
- jiazhengli/deberta-v3-large-Rationale-to-Score • Text Classification • Updated • 111 • 1
- jiazhengli/Pythia-2.8B-TLDR-Iterative-SamPO • Text Generation • Updated • 65
- jiazhengli/Pythia-2.8B-HH-RLHF-Iterative-SamPO • Text Generation • Updated • 60
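
The pipeline tags above indicate how each checkpoint is intended to be served through the Hugging Face transformers library. Below is a minimal sketch, assuming the checkpoints follow the standard pipeline interfaces for their tags; the example inputs and the interpretation of the outputs are illustrative assumptions, not taken from the model cards.

```python
from transformers import pipeline

# Rationale-to-score classifier (tagged "Text Classification" above).
# Assumption: it takes a generated assessment rationale as plain text and
# returns a score label; the exact label set is defined by the model card.
scorer = pipeline(
    "text-classification",
    model="jiazhengli/deberta-v3-large-Rationale-to-Score",
)
print(scorer("The response identifies two of the three key elements, so it earns 2 points."))

# Rationale generator (tagged "Text2Text Generation" above).
# Assumption: the prompt concatenates question, key answer elements, and the
# student answer; the real prompt template is specified by the model card.
generator = pipeline(
    "text2text-generation",
    model="jiazhengli/long-t5-tglobal-large-AERA",
)
print(generator("Question: ... Key elements: ... Student answer: ...", max_new_tokens=128))
```

The Mixtral and Llama-3 QLoRA repositories appear to be adapter weights rather than full checkpoints, so they would be loaded on top of their base models (for example via peft.AutoPeftModelForCausalLM) instead of through a plain pipeline call.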