---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- abstract
- style
- ai
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: opus_ascii
widget:
- text: 'ascii art, opus_ascii, will smith eating spaghetti'
output:
url: >-
31654319.jpeg
- text: 'ascii art, opus_ascii, will smith eating spaghetti'
output:
url: >-
31654332.jpeg
- text: ' '
output:
url: >-
31654335.jpeg
- text: ' '
output:
url: >-
31654340.jpeg
---
# Opus ASCII FLUX
Trained on the Claude 3 Opus ASCII art output latent space discovered by @dyot_meet_mat on Twitter.
## Trigger words

You should use `opus_ascii` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/veryVANYA/opus-ascii-flux/tree/main) them in the Files & versions tab.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the FLUX.1-dev base model and attach the Opus ASCII LoRA weights.
pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16
).to(device)
pipeline.load_lora_weights('veryVANYA/opus-ascii-flux', weight_name='flux_opus_ascii.safetensors')

# Include the trigger word `opus_ascii` in the prompt.
image = pipeline('ascii art, opus_ascii, will smith eating spaghetti').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
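
If you want to control how strongly the ASCII style is applied, the snippet below is a minimal sketch that passes a LoRA scale at inference time via `joint_attention_kwargs`, as supported by the FLUX pipeline in recent diffusers versions. The 0.8 scale, the prompt, and the output filename are illustrative assumptions, not settings recommended by the model author.

```py
from diffusers import AutoPipelineForText2Image
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16
).to(device)
pipeline.load_lora_weights('veryVANYA/opus-ascii-flux', weight_name='flux_opus_ascii.safetensors')

# Scale 0.8 is an illustrative assumption; 1.0 applies the LoRA at full strength.
image = pipeline(
    'ascii art, opus_ascii, a cat sitting on a keyboard',  # hypothetical prompt
    joint_attention_kwargs={"scale": 0.8},
).images[0]
image.save("opus_ascii_cat.png")  # hypothetical output filename
```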