#!/usr/bin/env python3
"""
Script to upload Multi-Model Orchestrator to Hugging Face Hub
"""

import os
import shutil
import subprocess
import sys
from pathlib import Path

from huggingface_hub import HfApi, create_repo, upload_folder

def build_package():
    """Build the Python package"""
    print("🔨 Building package...")
    
    # Clean previous builds
    if os.path.exists("build"):
        shutil.rmtree("build")
    if os.path.exists("dist"):
        shutil.rmtree("dist")
    if os.path.exists("*.egg-info"):
        for egg_info in Path(".").glob("*.egg-info"):
            shutil.rmtree(egg_info)
    
    # Build package
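    # Note: building an sdist and wheel this way requires setuptools and
    # wheel to be installed in the active environment.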
    subprocess.run([sys.executable, "setup.py", "sdist", "bdist_wheel"], check=True)
    print("✅ Package built successfully!")

def upload_to_huggingface():
    """Upload the package to Hugging Face Hub"""
    print("🚀 Uploading to Hugging Face Hub...")
    
    # Initialize Hugging Face API
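    # Assumes the user is already authenticated with the Hub, e.g. via
    # `huggingface-cli login` or the HF_TOKEN environment variable.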
    api = HfApi()
    
    # Repository name
    repo_name = "kunaliitkgp09/multi-model-orchestrator"
    
    try:
        # Create repository if it doesn't exist
        create_repo(repo_name, exist_ok=True)
        print(f"✅ Repository {repo_name} ready")
        
        # Upload all files
        upload_folder(
            folder_path=".",
            repo_id=repo_name,
            ignore_patterns=[
                "*.pyc",
                "__pycache__",
                "*.egg-info",
                "build",
                "dist",
                ".git",
                ".gitignore",
                "multi_model_env",
                "*.png",
                "*.jpg",
                "*.jpeg",
                "demo_task_history.json",
                "task_history.json",
                "generated_image_*"
            ]
        )
        
        print(f"🎉 Successfully uploaded to https://huggingface.co/{repo_name}")
        
    except Exception as e:
        print(f"❌ Error uploading to Hugging Face: {e}")
        return False
    
    return True

def create_model_card():
    """Create a model card for the repository"""
    model_card_content = """---
language:
- en
license: mit
library_name: multi-model-orchestrator
tags:
- ai
- machine-learning
- multimodal
- image-captioning
- text-to-image
- orchestration
- transformers
- pytorch
---

# Multi-Model Orchestrator

A multi-model orchestration system that manages parent-child model relationships, integrating a CLIP-GPT2 image captioner and a Flickr30k-fine-tuned text-to-image model as child models.

## 🚀 Features

### **Parent Orchestrator**
- **Intelligent Task Routing**: Automatically routes tasks to appropriate child models
- **Model Management**: Handles loading, caching, and lifecycle of child models
- **Error Handling**: Robust error handling and recovery mechanisms
- **Task History**: Comprehensive logging and monitoring of all operations (see the sketch after this list)
- **Async Support**: Both synchronous and asynchronous processing modes
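
The task history is written to a JSON file; `task_history.json` is the file name used in this repository. A minimal sketch for inspecting it (the exact schema is not documented here):

```python
import json

# Pretty-print the orchestrator's task log. The file name matches the one
# referenced elsewhere in this repository; the schema is not guaranteed.
with open("task_history.json") as f:
    print(json.dumps(json.load(f), indent=2))
```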

### **Child Models**
- **CLIP-GPT2 Image Captioner**: Converts images to descriptive text captions
- **Flickr30k Text-to-Image**: Generates images from text descriptions
- **Extensible Architecture**: Easy to add new child models

## 📦 Installation

```bash
pip install git+https://huggingface.co/kunaliitkgp09/multi-model-orchestrator
```

## 🎯 Quick Start

```python
from multi_model_orchestrator import SimpleMultiModelOrchestrator

# Initialize orchestrator
orchestrator = SimpleMultiModelOrchestrator()
orchestrator.initialize_models()

# Generate caption from image
caption = orchestrator.generate_caption("sample_image.jpg")
print(f"Caption: {caption}")

# Generate image from text
image_path = orchestrator.generate_image("A beautiful sunset over mountains")
print(f"Generated image: {image_path}")
```

## 🔗 Model Integration

### **Child Model 1: CLIP-GPT2 Image Captioner**
- **Model**: `kunaliitkgp09/clip-gpt2-image-captioner`
- **Task**: Image-to-text captioning
- **Performance**: ~40% accuracy on test samples

### **Child Model 2: Flickr30k Text-to-Image**
- **Model**: `kunaliitkgp09/flickr30k-text-to-image`
- **Task**: Text-to-image generation
- **Performance**: Fine-tuned on Flickr30k dataset
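
The orchestrator wraps these checkpoints, but the text-to-image model can in principle also be loaded directly. The snippet below is a sketch under the assumption that the checkpoint follows the standard `diffusers` pipeline layout (the Acknowledgments describe it as a Stable Diffusion fine-tune); otherwise, use the orchestrator API from the Quick Start.

```python
from diffusers import DiffusionPipeline

# Assumption: the checkpoint is published in the standard diffusers layout.
pipe = DiffusionPipeline.from_pretrained("kunaliitkgp09/flickr30k-text-to-image")
image = pipe("A beautiful sunset over mountains").images[0]
image.save("direct_pipeline_output.png")
```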

## 📊 Usage Examples

### **Multimodal Processing**
```python
# Process both image and text together
results = orchestrator.process_multimodal_task(
    image_path="sample_image.jpg",
    text_prompt="A serene landscape with mountains"
)

print("Caption:", results["caption"])
print("Generated Image:", results["generated_image"])
```

### **Async Processing**
```python
from multi_model_orchestrator import AsyncMultiModelOrchestrator
import asyncio

async def async_example():
    orchestrator = AsyncMultiModelOrchestrator()
    orchestrator.initialize_models()
    
    results = await orchestrator.process_multimodal_async(
        image_path="sample_image.jpg",
        text_prompt="A futuristic cityscape"
    )
    return results

asyncio.run(async_example())
```

## 🎯 Use Cases

- **Content Creation**: Generate captions and images for social media
- **Research and Development**: Model performance comparison and prototyping
- **Production Systems**: Automated content generation pipelines
- **Educational Applications**: AI model demonstration and learning

## 📈 Performance Metrics

- **Processing Time**: Optimized for real-time applications
- **Memory Usage**: Efficient GPU/CPU memory management
- **Success Rate**: Robust error handling and recovery
- **Extensibility**: Easy integration of new child models

## 🤝 Contributing

Contributions are welcome! Please feel free to submit pull requests or open issues for:
- New child model integrations
- Performance improvements
- Bug fixes
- Documentation enhancements

## 📄 License

This project is licensed under the MIT License.

## 🙏 Acknowledgments

- **CLIP-GPT2 Model**: [kunaliitkgp09/clip-gpt2-image-captioner](https://huggingface.co/kunaliitkgp09/clip-gpt2-image-captioner)
- **Stable Diffusion Model**: [kunaliitkgp09/flickr30k-text-to-image](https://huggingface.co/kunaliitkgp09/flickr30k-text-to-image)
- **Hugging Face**: For providing the model hosting platform
- **PyTorch**: For the deep learning framework
- **Transformers**: For the model loading and processing utilities

---

**Happy Orchestrating! 🚀**
"""
    
    with open("README.md", "w") as f:
        f.write(model_card_content)
    
    print("✅ Model card created!")

def main():
    """Main upload process"""
    print("🚀 Starting upload process to Hugging Face Hub...")
    
    # Create model card
    create_model_card()
    
    # Build package
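    # Note: the archives produced in dist/ are not themselves pushed;
    # upload_folder() uploads the source tree and excludes build/ and dist/
    # via ignore_patterns.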
    try:
        build_package()
    except Exception as e:
        print(f"❌ Error building package: {e}")
        return
    
    # Upload to Hugging Face
    success = upload_to_huggingface()
    
    if success:
        print("\n🎉 Upload completed successfully!")
        print("🔗 View your repository at: https://huggingface.co/kunaliitkgp09/multi-model-orchestrator")
        print("\n📦 Install with:")
        print("pip install git+https://huggingface.co/kunaliitkgp09/multi-model-orchestrator")
    else:
        print("\n❌ Upload failed. Please check the error messages above.")

if __name__ == "__main__":
    main()