---
base_model:
  - teknium/OpenHermes-2.5-Mistral-7B
  - Intel/neural-chat-7b-v3-3
  - meta-math/MetaMath-Mistral-7B
  - openchat/openchat-3.5-1210
license: apache-2.0
---

This model is intended to be a strong base suitable for downstream fine-tuning on a variety of tasks. Based on our internal evaluations, we believe it's one of the strongest models for most downstream tasks. You can read more about our development and evaluation process [here](https://openpipe.ai/blog/mistral-7b-fine-tune-optimized).

It is a hierarchical SLERP merge of teknium/OpenHermes-2.5-Mistral-7B, Intel/neural-chat-7b-v3-3, meta-math/MetaMath-Mistral-7B, and openchat/openchat-3.5-1210. berkeley-nest/Starling-LM-7B-alpha was omitted from this version of the model.
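For illustration, a SLERP merge of two of the listed models can be expressed as a mergekit configuration like the sketch below. This is an assumption-laden example, not the actual recipe: the true merge hierarchy, layer ranges, and interpolation weights used for this model are not published here, and the `t: 0.5` value is a placeholder.

```yaml
# Illustrative mergekit SLERP config (hypothetical parameters; the real
# hierarchy and weights behind this model are not specified in this card).
merge_method: slerp
base_model: teknium/OpenHermes-2.5-Mistral-7B
slices:
  - sources:
      - model: teknium/OpenHermes-2.5-Mistral-7B
        layer_range: [0, 32]
      - model: Intel/neural-chat-7b-v3-3
        layer_range: [0, 32]
parameters:
  t: 0.5  # interpolation factor; 0.5 blends the two models equally
dtype: bfloat16
```

A hierarchical merge repeats this step: pairs of models are SLERP-merged first, and the resulting intermediate checkpoints are then merged with each other (or with the remaining models) in further stages.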