tpo-alignment / Mistral-Instruct-7B-TPO-y3
Safetensors
princeton-nlp/mistral-instruct-ultrafeedback
mistral
alignment-handbook
Generated from Trainer
arxiv:
2405.16681
License:
mit
main / Mistral-Instruct-7B-TPO-y3 / tokenizer.json
sahsaeedi · Upload tokenizer · b375482 (verified) · about 1 month ago
Safe · 3.51 MB
File too large to display; check the raw version instead.
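
Since the repository ships a tokenizer.json, the tokenizer can be loaded straight from the Hub with the transformers library. A minimal sketch, assuming network access to the Hub and that the repo id tpo-alignment/Mistral-Instruct-7B-TPO-y3 (taken from this page) is publicly accessible:

```python
from transformers import AutoTokenizer

# AutoTokenizer picks up tokenizer.json from the repo automatically.
tokenizer = AutoTokenizer.from_pretrained("tpo-alignment/Mistral-Instruct-7B-TPO-y3")

# Quick round-trip check: encode a prompt, then decode it back.
ids = tokenizer("Hello, world!")["input_ids"]
print(tokenizer.decode(ids))
```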