---
language:
  - en
base_model:
  - black-forest-labs/FLUX.1-Kontext-dev
pipeline_tag: image-to-image
library_name: diffusers
tags:
  - Style
  - Clay Toy
  - FluxKontext
  - Image-to-Image
---

# Clay Toy Style LoRA for FLUX.1 Kontext Model

This repository provides the Clay Toy style LoRA adapter for the FLUX.1 Kontext model. It is part of a collection of 20+ style LoRAs trained on high-quality paired data generated by GPT-4o from the OmniConsistency dataset.

*(Comparison images: Comparison01, Comparison02.)*

**Contributors:** Tian YE & Song FEI, HKUST (Guangzhou).

## Style Showcase

Here are some examples of images generated using this style LoRA:

*(Seven example images generated in the Clay Toy style.)*

## Inference Example

```python
from huggingface_hub import hf_hub_download
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image
import torch

# Define the style and model details
STYLE_NAME = "Clay_Toy"
LORA_FILENAME = "Clay_Toy_lora_weights.safetensors"
REPO_ID = "Kontext-Style/Clay_Toy_lora"

# Download the LoRA weights into a local 'LoRAs' folder
# (the folder is created automatically if it does not exist)
hf_hub_download(repo_id=REPO_ID, filename=LORA_FILENAME, local_dir="./LoRAs")

# Load an input image
image = load_image("https://huggingface.co/datasets/black-forest-labs/kontext-bench/resolve/main/test/images/0003.jpg").resize((1024, 1024))

# Load the pipeline
pipeline = FluxKontextPipeline.from_pretrained("black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16).to('cuda')

# Load and set the LoRA adapter
pipeline.load_lora_weights(f"./LoRAs/{LORA_FILENAME}", adapter_name="lora")
pipeline.set_adapters(["lora"], adapter_weights=[1])

# Run inference
prompt = f"Turn this image into the {STYLE_NAME.replace('_', ' ')} style."
result_image = pipeline(image=image, prompt=prompt, height=1024, width=1024, num_inference_steps=24).images[0]
result_image.save(f"{STYLE_NAME}.png")

print(f"Image saved as {STYLE_NAME}.png")
```

Feel free to open an issue or contact us for feedback or collaboration!