Quantization made by Richard Erkhov.
Storm-7B - GGUF
- Model creator: https://huggingface.co/jieliu/
- Original model: https://huggingface.co/jieliu/Storm-7B/
| Name | Quant method | Size |
|---|---|---|
| Storm-7B.Q2_K.gguf | Q2_K | 2.53GB |
| Storm-7B.IQ3_XS.gguf | IQ3_XS | 2.81GB |
| Storm-7B.IQ3_S.gguf | IQ3_S | 2.96GB |
| Storm-7B.Q3_K_S.gguf | Q3_K_S | 2.95GB |
| Storm-7B.IQ3_M.gguf | IQ3_M | 3.06GB |
| Storm-7B.Q3_K.gguf | Q3_K | 3.28GB |
| Storm-7B.Q3_K_M.gguf | Q3_K_M | 3.28GB |
| Storm-7B.Q3_K_L.gguf | Q3_K_L | 3.56GB |
| Storm-7B.IQ4_XS.gguf | IQ4_XS | 3.67GB |
| Storm-7B.Q4_0.gguf | Q4_0 | 3.83GB |
| Storm-7B.IQ4_NL.gguf | IQ4_NL | 3.87GB |
| Storm-7B.Q4_K_S.gguf | Q4_K_S | 3.86GB |
| Storm-7B.Q4_K.gguf | Q4_K | 4.07GB |
| Storm-7B.Q4_K_M.gguf | Q4_K_M | 4.07GB |
| Storm-7B.Q4_1.gguf | Q4_1 | 4.24GB |
| Storm-7B.Q5_0.gguf | Q5_0 | 4.65GB |
| Storm-7B.Q5_K_S.gguf | Q5_K_S | 4.65GB |
| Storm-7B.Q5_K.gguf | Q5_K | 4.78GB |
| Storm-7B.Q5_K_M.gguf | Q5_K_M | 4.78GB |
| Storm-7B.Q5_1.gguf | Q5_1 | 5.07GB |
| Storm-7B.Q6_K.gguf | Q6_K | 5.53GB |
| Storm-7B.Q8_0.gguf | Q8_0 | 7.17GB |
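Any of the files above can be downloaded and run locally. The snippet below is a minimal sketch using huggingface_hub and llama-cpp-python (both assumed to be installed); the repo_id is a placeholder for the repository hosting these GGUF files, so substitute the actual repository name.

```python
# Minimal sketch: download one quantized file and run it with llama-cpp-python.
# NOTE: the repo_id below is a placeholder, not a real repository name.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="<gguf-repo-id>",          # placeholder: the repo hosting these GGUF files
    filename="Storm-7B.Q4_K_M.gguf",   # pick any quant from the table above
)

llm = Llama(model_path=model_path, n_ctx=2048)

# Storm-7B uses the OpenChat-3.5-0106 chat template (see "Uses" below).
prompt = "GPT4 Correct User: How does a telescope work?<|end_of_turn|>GPT4 Correct Assistant:"
output = llm(prompt, max_tokens=512, temperature=1.0)
print(output["choices"][0]["text"])
```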
Original model description:
- License: apache-2.0
- Library: transformers
- Tags: storm, mistral, openchat, RLAIF, reward model
- Language: en
- Base model: openchat/openchat-3.5-0106
- Datasets: berkeley-nest/Nectar
Storm-7B
- Developed by: Jie Liu, Zhanhui Zhou, Jiaheng Liu, Xingyuan Bu, Chao Yang, Han-Sen Zhong, Wanli Ouyang
- MMLab, The Chinese University of Hong Kong and Shanghai AI Laboratory
- Paper: Iterative Length-Regularized Direct Preference Optimization: A Case Study on Improving 7B Language Models to GPT-4 Level
- Finetuned from the model: openchat-3.5-0106
- Dataset: berkeley-nest/Nectar
- Reward Model: Starling-RM-34B
Please see our paper for more details.
Introduction
We released Storm-7B, the first open-source language model comparable to the GPT-4 series on the AlpacaEval 2.0 leaderboard.
Recent studies show that DPO benefits from iterative training with online preferences labeled by a trained reward model. In this work, we identify a pitfall of vanilla iterative DPO - improved response quality can lead to increased verbosity. To address this, we introduce iterative length-regularized DPO (iLR-DPO) to penalize response length. Our empirical results show that iLR-DPO can enhance a 7B model to perform on par with GPT-4 without increasing verbosity.
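As a rough illustration of the idea (a sketch, not the authors' released training code; see the paper for the exact objective), a length penalty can be folded into the DPO preference margin along these lines:

```python
import torch.nn.functional as F

def lr_dpo_loss(policy_logp_chosen, policy_logp_rejected,
                ref_logp_chosen, ref_logp_rejected,
                len_chosen, len_rejected,
                beta=0.1, alpha=0.01):
    """Illustrative length-regularized DPO loss (sketch only).

    Standard DPO rewards the policy for ranking the chosen response above the
    rejected one; the extra term subtracts a margin proportional to the length
    difference, so that longer responses are not preferred merely for being longer.
    The log-prob arguments are summed token log-probabilities (tensors).
    """
    chosen_reward = beta * (policy_logp_chosen - ref_logp_chosen)
    rejected_reward = beta * (policy_logp_rejected - ref_logp_rejected)
    length_margin = alpha * (len_chosen - len_rejected)
    return -F.logsigmoid(chosen_reward - rejected_reward - length_margin).mean()
```

In the iterative setup, the policy generates responses that are labeled by the reward model, and this length-regularized objective is then optimized on the resulting preference pairs.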
Performance
Our 7B model achieves a 50.5% length-controlled win rate against GPT-4 Preview on AlpacaEval 2.0.
In addition to regular decoding, we also test beam search and best-of-n sampling on top of our trained model. Beam search over our trained model shows a 5% improvement over regular decoding, and best-of-n sampling with Starling-RM-34B achieves a 61.6% LC win rate, outperforming GPT-4 Omni.
We observe no significant degradation in traditional NLP tasks from the Huggingface Open LLM Leaderboard.
Uses
Our model uses the same chat template as Openchat-3.5-0106. A sample code snippet for inference using our model is provided below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"

# Load the model and tokenizer, move the model to GPU, and disable gradients for inference.
model = AutoModelForCausalLM.from_pretrained("jieliu/Storm-7B").to(device)
tokenizer = AutoTokenizer.from_pretrained("jieliu/Storm-7B")
model.eval().requires_grad_(False)

def generate_response(prompt):
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
    outputs = model.generate(
        input_ids,
        max_length=2048,
        do_sample=True,
        temperature=1.0,
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )
    response_ids = outputs[0]
    response_text = tokenizer.decode(response_ids, skip_special_tokens=True)
    return response_text

# Prompts follow the OpenChat-3.5-0106 chat template.
prompt = "How does a telescope work?"
input_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(input_prompt)
print("Response:", response_text)
```
Scripts
You can reproduce our results on AlpacaEval 2.0 using the script provided below.
```bash
git clone https://github.com/tatsu-lab/alpaca_eval.git
cd alpaca_eval
pip install -e .
export OPENAI_API_KEY=<your_api_key>
alpaca_eval evaluate_from_model --model_configs 'Storm-7B'
```
Limitations
Our work has several limitations: (1) We focus on aligning with human preferences but only use GPT-4 as a proxy for human judgment to evaluate language models. (2) We reduce verbosity with a length penalty, though verbosity and length are not necessarily correlated. Future work could train a specific reward model to directly penalize verbosity, replacing the length margin with a verbosity margin, following the standard MODPO pipeline.
Citation
```bibtex
@article{liu2024iterative,
  title   = {Iterative Length-Regularized Direct Preference Optimization: A Case Study on Improving 7B Language Models to GPT-4 Level},
  author  = {Liu, Jie and Zhou, Zhanhui and Liu, Jiaheng and Bu, Xingyuan and Yang, Chao and Zhong, Han-Sen and Ouyang, Wanli},
  journal = {arXiv preprint arXiv:2406.11817},
  year    = {2024}
}

@article{zhou2023beyond,
  title   = {Beyond one-preference-for-all: Multi-objective direct preference optimization},
  author  = {Zhou, Zhanhui and Liu, Jie and Yang, Chao and Shao, Jing and Liu, Yu and Yue, Xiangyu and Ouyang, Wanli and Qiao, Yu},
  journal = {arXiv preprint arXiv:2310.03708},
  year    = {2023}
}
```