---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen-Image-Edit
pipeline_tag: image-to-image
tags:
- lora
- qwen
- qwen-image
- qwen-image-edit
- image-editing
- inscene
- spatial-understanding
- scene-coherence
- computer-vision
- InScene
---

# Qwen Image Edit Inscene LoRA

An open-source LoRA (Low-Rank Adaptation) model for Qwen-Image-Edit, built by [FlyMy.AI](https://flymy.ai), that specializes in in-scene image editing.

## 🌟 About FlyMy.AI

FlyMy.AI provides agentic infrastructure for GenAI: a B2B platform for building and running GenAI media agents.

**🔗 Useful Links:**

- 🌐 [Official Website](https://flymy.ai)
- 📚 [Documentation](https://docs.flymy.ai/intro)
- 💬 [Discord Community](https://discord.com/invite/t6hPBpSebw)
- 🤗 [LoRA Training Repository](https://github.com/FlyMyAI/flymyai-lora-trainer)
- 🐦 [X (Twitter)](https://x.com/flymyai)
- 💼 [LinkedIn](https://linkedin.com/company/flymyai)
- 📺 [YouTube](https://youtube.com/@flymyai)
- 📸 [Instagram](https://www.instagram.com/flymy_ai)

---

## 🚀 Features

- LoRA-based fine-tuning for efficient in-scene image editing
- Specialized for the Qwen-Image-Edit model
- Enhanced control over scene composition and object positioning
- Optimized for maintaining scene coherence during edits
- Compatible with Hugging Face `diffusers`
- Control-based image editing with improved spatial understanding

---

## 📦 Installation

1. Install the core dependencies:

```bash
pip install torch torchvision transformers accelerate
```

2. Install the latest `diffusers` from GitHub, since Qwen-Image-Edit support requires a recent build:

```bash
pip install git+https://github.com/huggingface/diffusers
```
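
After installing, you can quickly verify that your `diffusers` build exposes the Qwen-Image-Edit pipeline (a minimal check; the import fails on versions that predate Qwen-Image-Edit support):

```python
# Verify that the installed diffusers is new enough for Qwen-Image-Edit
import diffusers
print(diffusers.__version__)

from diffusers import QwenImageEditPipeline  # ImportError => upgrade diffusers
```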

---

## 🧪 Usage

### 🔧 Qwen-Image-Edit Initialization

```python
import torch
from PIL import Image
from diffusers import QwenImageEditPipeline

# Load the base pipeline in bfloat16 and move it to the GPU
pipeline = QwenImageEditPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16
)
pipeline.to("cuda")
```

### 🔌 Load LoRA Weights

```python
# Load the trained in-scene editing LoRA (path to the downloaded .safetensors file)
pipeline.load_lora_weights("./flymy_qwen_image_edit_inscene_lora.safetensors")
```
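
Optionally, if your `diffusers` version supports it for this pipeline, you can fuse the LoRA into the base weights to remove the small adapter overhead at inference time (a sketch using diffusers' standard LoRA helpers; fusing is reversible):

```python
# Optional: fuse the LoRA into the base weights for slightly faster inference
pipeline.fuse_lora()

# ...and undo it later if you want the clean base model back
# pipeline.unfuse_lora()
```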

### 🎨 Edit Image with Qwen-Image-Edit Inscene LoRA

```python
# Load the input image
image = Image.open("./assets/qie2_input.jpg").convert("RGB")

# Define the in-scene editing prompt
prompt = "Make a shot in the same scene of the left hand securing the edge of the cutting board while the right hand tilts it, causing the chopped tomatoes to slide off into the pan, camera angle shifts slightly to the left to center more on the pan."

# Generate the edited image with enhanced scene understanding
inputs = {
    "image": image,
    "prompt": prompt,
    "generator": torch.manual_seed(0),
    "true_cfg_scale": 4.0,  # true classifier-free guidance, paired with the negative prompt
    "negative_prompt": " ",
    "num_inference_steps": 50,
}

with torch.inference_mode():
    output = pipeline(**inputs)
    output_image = output.images[0]
    output_image.save("edited_image.png")
```
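
Results vary with the seed; a simple loop like the following (an illustrative sketch reusing the `inputs` dict above) generates a few candidates so you can keep the best one:

```python
# Optional: try a few seeds and save each candidate for comparison
for seed in (0, 1, 2):
    inputs["generator"] = torch.manual_seed(seed)
    with torch.inference_mode():
        candidate = pipeline(**inputs).images[0]
    candidate.save(f"edited_image_seed{seed}.png")
```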

### 🖼️ Sample Output - Qwen-Image-Edit Inscene

**Input Image:**

![Input Image](./assets/qie2_input.jpg)

**Prompt:**

"Make a shot in the same scene of the left hand securing the edge of the cutting board while the right hand tilts it, causing the chopped tomatoes to slide off into the pan, camera angle shifts slightly to the left to center more on the pan."

**Output without LoRA:**

![Output without LoRA](./assets/qie2_output_withoutlora.jpg)

**Output with Inscene LoRA:**

![Output with Inscene LoRA](./assets/qie2_output.jpg)

---

### Workflow Features

- ✅ Pre-configured for Qwen-Image-Edit + Inscene LoRA inference
- ✅ Optimized settings for in-scene editing quality
- ✅ Enhanced spatial understanding and scene coherence
- ✅ Easy prompt and parameter adjustment
- ✅ Compatible with various input image types

---

## 🎯 What is Inscene LoRA?

This LoRA is specifically trained to enhance Qwen-Image-Edit's ability to perform **in-scene image editing**. It focuses on:

- **Scene Coherence**: Maintaining logical spatial relationships within the scene
- **Object Positioning**: Better understanding of object placement and movement
- **Camera Perspective**: Improved handling of viewpoint changes and camera movements
- **Action Sequences**: Enhanced ability to depict sequential actions within the same scene
- **Contextual Editing**: Preserving scene context while making targeted modifications

Prompts phrased as shot directions within the current scene play to these strengths; see the illustrative examples below.
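
These prompt patterns are hypothetical examples (not drawn from the training set) that follow the same "Make a shot in the same scene ..." phrasing used in the usage example above:

```python
# Illustrative in-scene prompt patterns (hypothetical, for reference only)
inscene_prompts = [
    "Make a shot in the same scene of the right hand lifting the pan off the stove, camera angle unchanged",
    "Make a shot in the same scene from a slightly higher angle, centering on the cutting board",
    "Make a shot in the same scene of both hands plating the tomatoes, camera pulled back slightly",
]
```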

---

## 🔧 Training Information

This LoRA model was trained using the [FlyMy.AI LoRA Trainer](https://github.com/FlyMyAI/flymyai-lora-trainer) with:

- **Base Model**: Qwen/Qwen-Image-Edit
- **Training Focus**: In-scene image editing and spatial understanding
- **Dataset**: Curated collection of scene-based editing examples (InScene dataset)
- **Optimization**: Low-rank adaptation for efficient fine-tuning

---

## 📊 Model Specifications

- **Model Type**: LoRA (Low-Rank Adaptation)
- **Base Model**: Qwen/Qwen-Image-Edit
- **File Format**: SafeTensors (.safetensors)
- **Specialization**: In-scene image editing
- **Training Framework**: Diffusers + Accelerate
- **Memory Efficient**: Optimized for consumer GPUs; see the offloading sketch below if VRAM is tight
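
On GPUs with limited VRAM, diffusers' standard offloading hooks should apply here as with other pipelines (a sketch, not verified against this specific checkpoint; offloading trades speed for memory):

```python
# Optional VRAM saver: keep only the active sub-model on the GPU.
# Call this INSTEAD of pipeline.to("cuda"), after loading the pipeline and LoRA.
pipeline.enable_model_cpu_offload()
```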

---

## 🤝 Support

If you have questions or suggestions, join our community:

- 🌐 [FlyMy.AI](https://flymy.ai)
- 💬 [Discord Community](https://discord.com/invite/t6hPBpSebw)
- 🐦 [Follow us on X](https://x.com/flymyai)
- 💼 [Connect on LinkedIn](https://linkedin.com/company/flymyai)
- 📧 [Support](mailto:[email protected])

**⭐ Don't forget to star the repository if you like it!**

---

## 📄 License

This project is licensed under the Apache 2.0 License; see the LICENSE file for details.