# NeuralPipe-7B-slerp
This is a merge of pre-trained language models created using LazyMergekit, combining OpenPipe's optimized Mistral fine-tune and NeuralHermes through SLERP fusion.
## About Me
I'm David Soeiro-Vuong, a third-year Computer Science student working as an apprentice at TW3 Partners, a company specializing in Generative AI. Passionate about artificial intelligence and language model optimization, I focus on creating efficient model merges that balance performance and capabilities.
Connect with me on LinkedIn
## Merge Details

### Merge Method
This model uses SLERP (Spherical Linear Interpolation) with interpolation factors tuned per layer type (a sketch of the underlying operation follows this list):
- Attention layers follow the interpolation schedule `[0, 0.5, 0.3, 0.7, 1]` across depth, shifting from the base model toward NeuralHermes
- MLP layers use the mirrored schedule `[1, 0.5, 0.7, 0.3, 0]`, balancing the transition in the opposite direction
- All remaining tensors are blended with a uniform factor of 0.5
- All 32 layers of both parents are used (`layer_range: [0, 32]`) for maximum capability retention
- Weights are stored in bfloat16 for efficient memory usage
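For intuition, here is a minimal sketch of spherical linear interpolation between two weight tensors, assuming PyTorch; the function name and the epsilon fallback are illustrative, not mergekit's exact implementation:

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """slerp(t) = sin((1-t)*theta)/sin(theta) * v0 + sin(t*theta)/sin(theta) * v1,
    where theta is the angle between the flattened tensors."""
    v0_flat, v1_flat = v0.flatten(), v1.flatten()
    # Cosine of the angle between the two parameter vectors
    dot = torch.clamp(
        torch.dot(v0_flat, v1_flat) / (v0_flat.norm() * v1_flat.norm() + eps),
        -1.0, 1.0,
    )
    theta = torch.acos(dot)
    if theta.abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation
        return (1 - t) * v0 + t * v1
    sin_theta = torch.sin(theta)
    return (torch.sin((1 - t) * theta) / sin_theta) * v0 + (torch.sin(t * theta) / sin_theta) * v1
```

With t = 0 the result is exactly v0 and with t = 1 exactly v1, so under mergekit's convention that t = 0 selects the base model, the attention schedule above moves from OpenPipe's weights toward NeuralHermes as the values rise.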
### Models Merged

The following models were included in the merge:
- [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
- [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: OpenPipe/mistral-ft-optimized-1218
        layer_range: [0, 32]
      - model: mlabonne/NeuralHermes-2.5-Mistral-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
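This file can be passed to mergekit's `mergekit-yaml` command to reproduce the merge. Once merged, the model loads like any other causal LM; below is a minimal usage sketch with 🤗 Transformers, assuming the merge is published under the repo id Davidsv/SUONG-3 shown on this page (substitute the actual id if it differs):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Davidsv/SUONG-3"  # assumed repo id; substitute the actual one if it differs

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

prompt = "Explain spherical linear interpolation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```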