tpo-alignment/Instruct-Llama-3-8B-TPO-y2
Tags: Safetensors · princeton-nlp/llama3-ultrafeedback-armorm · llama · alignment-handbook · Generated from Trainer
arxiv: 2405.16681
License: mit
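The tags indicate a Safetensors checkpoint uploaded as a LlamaForCausalLM (see the "Upload LlamaForCausalLM" commit below). A minimal loading sketch with the transformers library follows, assuming the repo ID shown above and a standard chat template; this is an illustrative example, not an official usage snippet from the repository.

```python
# Minimal sketch: load the checkpoint with Hugging Face transformers.
# Assumes the repo ID from this page and a standard LlamaForCausalLM layout;
# device_map="auto" additionally requires the accelerate package.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tpo-alignment/Instruct-Llama-3-8B-TPO-y2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Build a chat-formatted prompt and generate a short reply.
messages = [{"role": "user", "content": "Explain preference optimization in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```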
Commit History (main)
Update README.md · 300bbf3 · verified · sahsaeedi committed 5 days ago
Update config.json · 9a21b68 · verified · sahsaeedi committed on Jan 23
Upload LlamaForCausalLM · cf5804b · verified · sahsaeedi committed on Jan 23
initial commit · 949c25f · verified · sahsaeedi committed on Jan 23