tpo-alignment/Instruct-Llama-3-8B-TPO-y3
Tags: Safetensors · llama · alignment-handbook · Generated from Trainer
Dataset: princeton-nlp/llama3-ultrafeedback-armorm
arXiv: 2405.16681
License: mit
Commit History
Update README.md · a0b57e1 (verified) · sahsaeedi, committed 5 days ago
Upload tokenizer · 7343c58 (verified) · sahsaeedi, committed Jan 23
Update config.json · c5beb43 (verified) · sahsaeedi, committed Jan 23
Upload LlamaForCausalLM · 3b76385 (verified) · sahsaeedi, committed Jan 23
initial commit · 5500a54 (verified) · sahsaeedi, committed Jan 23