End of training
- README.md +76 -0
- logs/dreambooth-flux-dev-lora-advanced/1738777564.476361/events.out.tfevents.1738777564.150-136-220-21.2154775.1 +3 -0
- logs/dreambooth-flux-dev-lora-advanced/1738777564.478267/hparams.yml +80 -0
- logs/dreambooth-flux-dev-lora-advanced/1738782567.1524553/events.out.tfevents.1738782567.150-136-220-21.2156130.1 +3 -0
- logs/dreambooth-flux-dev-lora-advanced/1738782567.1544058/hparams.yml +80 -0
- logs/dreambooth-flux-dev-lora-advanced/events.out.tfevents.1738777564.150-136-220-21.2154775.0 +3 -0
- logs/dreambooth-flux-dev-lora-advanced/events.out.tfevents.1738782567.150-136-220-21.2156130.0 +3 -0
- pytorch_lora_weights.safetensors +3 -0
README.md
ADDED
@@ -0,0 +1,76 @@
---
base_model: black-forest-labs/FLUX.1-dev
library_name: diffusers
license: other
instance_prompt: a puppy, yarn art style
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# Flux DreamBooth LoRA - linoyts/yarn_flux_700_all_attn_layers

<Gallery />

## Model description

These are linoyts/yarn_flux_700_all_attn_layers DreamBooth LoRA weights for black-forest-labs/FLUX.1-dev.

The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Flux diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md).

LoRA for the text encoder was enabled: False.

Pivotal tuning was enabled: False.

## Trigger words

You should use `a puppy, yarn art style` to trigger the image generation.

## Download model

[Download the *.safetensors LoRA](https://huggingface.co/linoyts/yarn_flux_700_all_attn_layers/tree/main) in the Files & versions tab.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base model in bfloat16 and move it to the GPU.
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
# Load the LoRA weights from this repository on top of the base model.
pipeline.load_lora_weights('linoyts/yarn_flux_700_all_attn_layers', weight_name='pytorch_lora_weights.safetensors')

# Generate an image using the trigger phrase.
image = pipeline('a puppy, yarn art style').images[0]
```

For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
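To control how strongly the yarn-art style is applied, the LoRA can also be fused into the base weights at a reduced scale. A minimal sketch, assuming the `pipeline` object from the snippet above and the standard `fuse_lora`/`unfuse_lora` methods in diffusers; the scale value is illustrative, not a tuned recommendation:

```py
# Fuse the LoRA into the base weights at a reduced strength.
# A lora_scale below 1.0 weakens the style; 1.0 keeps it at full strength.
pipeline.fuse_lora(lora_scale=0.8)

image = pipeline('a puppy, yarn art style').images[0]

# Undo the fusion to return to the plain base model.
pipeline.unfuse_lora()
```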
## License

Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```
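Until the snippet above is completed, here is a minimal sketch of a full generation call. It assumes the repository and weight file names from this repo and uses standard Flux pipeline arguments; the step count and seed are illustrative values, while the guidance scale mirrors the `guidance_scale: 3.5` recorded in the training hparams below.

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipeline.load_lora_weights(
    "linoyts/yarn_flux_700_all_attn_layers",
    weight_name="pytorch_lora_weights.safetensors",
)

# Fix the seed so results are reproducible across runs.
generator = torch.Generator(device="cuda").manual_seed(0)

image = pipeline(
    "a puppy, yarn art style",   # trigger phrase
    num_inference_steps=28,      # illustrative; trade quality for speed
    guidance_scale=3.5,          # same value logged during training
    generator=generator,
).images[0]
image.save("yarn_puppy.png")
```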
#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
logs/dreambooth-flux-dev-lora-advanced/1738777564.476361/events.out.tfevents.1738777564.150-136-220-21.2154775.1
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8ddadca1d712e8ef37c01044a5bf0e9031aafe1dfb8027316301289a90c05d5c
size 3534
logs/dreambooth-flux-dev-lora-advanced/1738777564.478267/hparams.yml
ADDED
@@ -0,0 +1,80 @@
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1.0e-08
adam_weight_decay: 0.0001
adam_weight_decay_text_encoder: 0.001
allow_tf32: false
cache_dir: null
cache_latents: false
caption_column: null
center_crop: false
checkpointing_steps: 500
checkpoints_total_limit: null
class_data_dir: null
class_prompt: null
dataloader_num_workers: 0
dataset_config_name: null
dataset_name: Norod78/Yarn-art-style
enable_t5_ti: false
gradient_accumulation_steps: 1
gradient_checkpointing: false
guidance_scale: 3.5
hub_model_id: null
hub_token: null
image_column: image
initializer_concept: null
instance_data_dir: null
instance_prompt: a puppy, yarn art style
learning_rate: 0.0001
local_rank: -1
logging_dir: logs
logit_mean: 0.0
logit_std: 1.0
lora_layers: null
lr_num_cycles: 1
lr_power: 1.0
lr_scheduler: constant
lr_warmup_steps: 500
max_grad_norm: 1.0
max_sequence_length: 512
max_train_steps: 5
mixed_precision: bf16
mode_scale: 1.29
num_class_images: 100
num_new_tokens_per_abstraction: null
num_train_epochs: 1
num_validation_images: 4
optimizer: AdamW
output_dir: yarn_flux_700_all_attn_layers
pretrained_model_name_or_path: black-forest-labs/FLUX.1-dev
prior_generation_precision: null
prior_loss_weight: 1.0
prodigy_beta3: null
prodigy_decouple: true
prodigy_safeguard_warmup: true
prodigy_use_bias_correction: true
push_to_hub: false
random_flip: false
rank: 4
repeats: 1
report_to: tensorboard
resolution: 512
resume_from_checkpoint: null
revision: null
sample_batch_size: 4
scale_lr: false
seed: null
text_encoder_lr: 5.0e-06
token_abstraction: yarn art style
train_batch_size: 4
train_text_encoder: false
train_text_encoder_frac: 1.0
train_text_encoder_ti: false
train_text_encoder_ti_frac: 0.5
train_transformer_frac: 1.0
use_8bit_adam: false
validation_epochs: 50
validation_prompt: null
variant: null
weighting_scheme: none
with_prior_preservation: false
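The hparams.yml above captures the full run configuration next to the TensorBoard event files, so it can be inspected programmatically. A minimal sketch, assuming PyYAML is installed and the timestamped log directory from this commit:

```py
import yaml  # PyYAML, assumed to be installed

# Path taken from this commit's log layout; adjust the timestamped folder to match your run.
path = "logs/dreambooth-flux-dev-lora-advanced/1738777564.478267/hparams.yml"

with open(path) as f:
    hparams = yaml.safe_load(f)

# A few of the settings that define this run.
print(hparams["pretrained_model_name_or_path"])  # black-forest-labs/FLUX.1-dev
print(hparams["dataset_name"])                   # Norod78/Yarn-art-style
print(hparams["instance_prompt"], hparams["rank"], hparams["learning_rate"])
```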
logs/dreambooth-flux-dev-lora-advanced/1738782567.1524553/events.out.tfevents.1738782567.150-136-220-21.2156130.1
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7d603a0a7c671aba37becd19dbb61478fb285be011121a56473a3e7daa0b9e97
size 3534
logs/dreambooth-flux-dev-lora-advanced/1738782567.1544058/hparams.yml
ADDED
@@ -0,0 +1,80 @@
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1.0e-08
adam_weight_decay: 0.0001
adam_weight_decay_text_encoder: 0.001
allow_tf32: false
cache_dir: null
cache_latents: false
caption_column: null
center_crop: false
checkpointing_steps: 500
checkpoints_total_limit: null
class_data_dir: null
class_prompt: null
dataloader_num_workers: 0
dataset_config_name: null
dataset_name: Norod78/Yarn-art-style
enable_t5_ti: false
gradient_accumulation_steps: 1
gradient_checkpointing: false
guidance_scale: 3.5
hub_model_id: null
hub_token: null
image_column: image
initializer_concept: null
instance_data_dir: null
instance_prompt: a puppy, yarn art style
learning_rate: 0.0001
local_rank: -1
logging_dir: logs
logit_mean: 0.0
logit_std: 1.0
lora_layers: null
lr_num_cycles: 1
lr_power: 1.0
lr_scheduler: constant
lr_warmup_steps: 500
max_grad_norm: 1.0
max_sequence_length: 512
max_train_steps: 5
mixed_precision: bf16
mode_scale: 1.29
num_class_images: 100
num_new_tokens_per_abstraction: null
num_train_epochs: 1
num_validation_images: 4
optimizer: AdamW
output_dir: yarn_flux_700_all_attn_layers
pretrained_model_name_or_path: black-forest-labs/FLUX.1-dev
prior_generation_precision: null
prior_loss_weight: 1.0
prodigy_beta3: null
prodigy_decouple: true
prodigy_safeguard_warmup: true
prodigy_use_bias_correction: true
push_to_hub: false
random_flip: false
rank: 4
repeats: 1
report_to: tensorboard
resolution: 512
resume_from_checkpoint: null
revision: null
sample_batch_size: 4
scale_lr: false
seed: null
text_encoder_lr: 5.0e-06
token_abstraction: yarn art style
train_batch_size: 4
train_text_encoder: false
train_text_encoder_frac: 1.0
train_text_encoder_ti: false
train_text_encoder_ti_frac: 0.5
train_transformer_frac: 1.0
use_8bit_adam: false
validation_epochs: 50
validation_prompt: null
variant: null
weighting_scheme: none
with_prior_preservation: false
logs/dreambooth-flux-dev-lora-advanced/events.out.tfevents.1738777564.150-136-220-21.2154775.0
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1f9d05e898102237e7da28abddeac8934b1881a4874af9d63f4c9508c196e686
size 88
logs/dreambooth-flux-dev-lora-advanced/events.out.tfevents.1738782567.150-136-220-21.2156130.0
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4ceeab5b14b187f4c300f1be78a966589224dd225a7f04a0ee9936388ef05180
size 88
pytorch_lora_weights.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:21cae03f1a1ebf692d45d999501ac2c652721b4c93272175b06e56eca03ca65e
size 22504080