🚀 Phi-Hybrid-1.5B: Merging Efficiency & Power
📌 Overview
Phi-Hybrid-1.5B is an experimental hybrid language model that merges Microsoft's Phi-1 and Phi-1.5. Built with MergeKit, it aims to combine Phi-1's code-oriented strengths with Phi-1.5's broader language and reasoning ability while retaining the small footprint of its parents, making it an efficient tool for text generation.
🔗 Created by: Matteo Khan
🎓 Affiliation: Apprentice at TW3 Partners (Generative AI Research)
📍 License: MIT
🔗 Connect with me on LinkedIn
🔍 Model on Hugging Face
🧠 Model Details
- Model Type: Hybrid Language Model (Merged)
- Parent Models:
  - microsoft/phi-1
  - microsoft/phi-1_5
- Merging Technique: Linear merge via MergeKit (see the sketch below)
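For intuition, a linear merge takes a normalized weighted average of corresponding parameter tensors from the parent models. A minimal illustrative sketch (a real merge covers every tensor in both checkpoints, which MergeKit handles automatically):

```python
import torch

# Stand-ins for one pair of matching weight tensors from the parents.
w_phi1 = torch.randn(4, 4)   # from microsoft/phi-1
w_phi15 = torch.randn(4, 4)  # from microsoft/phi-1_5

# Per-model weights from the merge configuration below; with
# `normalize: true`, MergeKit rescales them to sum to 1.
weights = torch.tensor([0.6, 0.4])
weights = weights / weights.sum()

merged = weights[0] * w_phi1 + weights[1] * w_phi15
```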
🎯 Intended Use
This model is intended for research and experimentation in hybrid model optimization. Potential use cases include:
- ✅ Text Generation
- ✅ Conversational AI
- ✅ Code Assistance
- ✅ Creative Writing
- ✅ Exploration of Model Merging Effects
⚠️ Limitations & Considerations
While merging can improve certain capabilities, Phi-Hybrid-1.5B also inherits the limitations of its parent models:
- ❌ May generate inaccurate or misleading information
- ⚠️ Potential for biased, offensive, or harmful content
- 🔄 Merging may introduce unpredictable behaviors
- 📉 Performance may vary across different tasks
🔬 Merging Process & Configuration
This is not a newly trained model, but rather a merge of existing models using the following configuration:
```yaml
merge_method: linear
dtype: float16
models:
  - model: "microsoft/phi-1"
    parameters:
      t: 1.0
      weight: 0.6
  - model: "microsoft/phi-1_5"
    parameters:
      t: 1.0
      weight: 0.4
parameters:
  normalize: true
  int8_mask: false
layers:
  - pattern: "model.*"
```
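To reproduce the merge, MergeKit exposes a small Python API. A minimal sketch based on MergeKit's documented usage; the config file name and output path here are placeholders:

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above (path is a placeholder).
with open("merge_config.yaml", "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Write the merged checkpoint; set cuda=True if a GPU is available.
run_merge(
    config,
    out_path="./phi-hybrid-1.5b",
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```

The equivalent command-line invocation is `mergekit-yaml merge_config.yaml ./phi-hybrid-1.5b`.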
📊 No formal evaluation has been conducted yet. Users are encouraged to benchmark and share feedback!
🌍 Environmental Impact
By utilizing model merging rather than training from scratch, Phi-Hybrid-1.5B significantly reduces computational and environmental costs.
🚀 How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "MatteoKhan/phi-1-1.5-merged"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
prompt = "Explain the theory of relativity in simple terms."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
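Since the merge was performed in float16, you can also load the weights in half precision to roughly halve memory use. A brief sketch (`device_map="auto"` additionally requires the `accelerate` package):

```python
import torch
from transformers import AutoModelForCausalLM

# Half-precision load; automatic device placement needs `accelerate`.
model = AutoModelForCausalLM.from_pretrained(
    "MatteoKhan/phi-1-1.5-merged",
    torch_dtype=torch.float16,
    device_map="auto",
)
```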
📜 Citation & References
If you use Phi-Hybrid-1.5B in your research, please cite the parent models:
📝 Phi-1
```bibtex
@misc{phione,
  title={Textbooks Are All You Need},
  author={Microsoft Research},
  year={2023},
  url={https://huggingface.co/microsoft/phi-1}
}
```
📝 Phi-1.5
```bibtex
@misc{phionefive,
  title={Textbooks Are All You Need II: phi-1.5 technical report},
  author={Microsoft Research},
  year={2023},
  url={https://huggingface.co/microsoft/phi-1_5}
}
```
📩 Feedback & Contact: Reach out via Hugging Face.
🎉 Happy Experimenting! 🚀