Update README.md
README.md CHANGED
@@ -22,6 +22,7 @@ This repository provides a ControlNet that supports mask-based image inpainting
 - This ControlNet consists of 6 double blocks copied from the pretrained transformer layers.
 - We train the model from scratch for 65K steps using a dataset of 10M high-quality general and human images.
 - We train at 1328x1328 resolution in BFloat16, batch size=128, learning rate=4e-5. We set the text drop ratio to 0.10.
+- This model supports object replacement, text modification, background replacement, and outpainting.
 
 
 # Showcases
@@ -83,7 +84,7 @@ image.save(f"qwenimage_cn_inpaint_result.png")
 ```
 
 # ComfyUI Support
-[ComfyUI](https://www.comfy.org/) offers native support for Qwen-Image-ControlNet-Inpainting.
+[ComfyUI](https://www.comfy.org/) offers native support for Qwen-Image-ControlNet-Inpainting. The official workflow can be found [here](https://raw.githubusercontent.com/Comfy-Org/workflow_templates/refs/heads/main/templates/image_qwen_image_instantx_inpainting_controlnet.json). Make sure your ComfyUI version is >=0.3.59.
 
 # Community Support
 [Liblib AI](https://www.liblib.art/) offers native support for Qwen-Image-ControlNet-Inpainting. [Visit](https://www.liblib.art) for online WebUI or ComfyUI inference.
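The diff above notes that the ControlNet consists of 6 double blocks copied from the pretrained transformer layers. As a rough illustration of that initialization idea only (the `transformer_blocks` attribute and the helper name are assumptions about a generic DiT-style backbone, not the repository's actual code), a PyTorch sketch could look like this:

```python
# Illustrative sketch: initialize a ControlNet branch by deep-copying the first
# N "double blocks" of a pretrained DiT-style transformer. The attribute name
# `transformer_blocks` is an assumption, not the actual Qwen-Image class layout.
import copy

import torch.nn as nn


def init_controlnet_blocks(backbone: nn.Module, num_blocks: int = 6) -> nn.ModuleList:
    """Deep-copy the first `num_blocks` transformer blocks of a pretrained backbone."""
    blocks = backbone.transformer_blocks[:num_blocks]  # assumed attribute on the backbone
    return nn.ModuleList(copy.deepcopy(block) for block in blocks)
```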
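The updated ComfyUI section links the official workflow JSON directly. As a convenience, a minimal Python sketch for fetching it with the standard library might look like the following; the local filename is an illustrative choice, and the downloaded JSON can then be loaded into ComfyUI (for example by dragging it onto the canvas):

```python
# Minimal sketch: download the official ComfyUI workflow JSON referenced above
# and verify that it parses. The local filename is arbitrary.
import json
import urllib.request

WORKFLOW_URL = (
    "https://raw.githubusercontent.com/Comfy-Org/workflow_templates/"
    "refs/heads/main/templates/image_qwen_image_instantx_inpainting_controlnet.json"
)
local_path = "image_qwen_image_instantx_inpainting_controlnet.json"

urllib.request.urlretrieve(WORKFLOW_URL, local_path)

# Sanity-check that the file is valid JSON before loading it in ComfyUI.
with open(local_path, "r", encoding="utf-8") as f:
    workflow = json.load(f)
print(f"Downloaded and parsed workflow: {local_path}")
```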