---
dataset_info:
  features:
  - name: __key__
    dtype: string
  - name: image_tokens
    sequence: int64
  - name: text_tokens
    sequence: int64
  - name: text
    dtype: string
  - name: data
    dtype: string
  splits:
  - name: train
    num_bytes: 2727128395
    num_examples: 2905954
  - name: validation
    num_bytes: 12618157
    num_examples: 13443
  download_size: 964606495
  dataset_size: 2739746552
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
---

# Experiments in training autoregressive models for text-to-image generation
This dataset is derived from [Conceptual Captions](https://huggingface.co/datasets/pixparse/cc3m-wds) (CC3M), which contains roughly 3.3M image-caption pairs. Images are tokenized with the [1d-tokenizer](https://github.com/bytedance/1d-tokenizer) by [ByteDance](https://www.bytedance.com/en/), which compresses a 256×256 image into 32 tokens while still achieving state-of-the-art reconstruction fidelity. For text, we train a BPE tokenizer on the image captions with a vocabulary size of 30K: 4,096 tokens represent image codes, 9 are special tokens, and the remaining 25,895 are text tokens.
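As a minimal sketch of how an example can be inspected, assuming the dataset has been published on the Hub (the repo id below is a placeholder), the columns listed in the schema above can be loaded with the `datasets` library:

```python
from datasets import load_dataset

# Placeholder repo id; replace with the actual dataset path on the Hub.
ds = load_dataset("username/cc3m-ar-text2image-tokens", split="validation")

example = ds[0]
print(example["text"])               # original caption
print(len(example["text_tokens"]))   # BPE token ids for the caption
print(len(example["image_tokens"]))  # 32 token ids produced by the 1d-tokenizer
```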

# Visualization
<table>
  <tr>
    <td><img src="vis_1.png" alt="example 1" width="200"/></td>
    <td><img src="vis_2.png" alt="example 2" width="200"/></td>
    <td><img src="vis_3.png" alt="example 3" width="200"/></td>
    <td><img src="vis_4.png" alt="example 4" width="200"/></td>
  </tr>
</table>

# Inference
To generate images, download and save the `image_tokenizer` and `checkpoint-20000` in the root directory of this repo, then run `infer.py` with your prompt.
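As a rough, non-authoritative sketch of what the generation step in `infer.py` may look like (the `load_model`, `load_image_tokenizer`, and `load_text_tokenizer` helpers and the sampling details are assumptions, not the repo's actual API):

```python
import torch

# Hypothetical helpers; the actual loading logic lives in infer.py.
model = load_model("checkpoint-20000")                       # autoregressive transformer
image_tokenizer = load_image_tokenizer("image_tokenizer")    # 1d-tokenizer decoder
text_tokenizer = load_text_tokenizer()                       # the 30K BPE tokenizer

prompt = "bird on a tree"
ids = text_tokenizer.encode(prompt) + [text_tokenizer.token_to_id("<|startofimage|>")]
ids = torch.tensor([ids])

# Autoregressively sample exactly 32 image tokens after <|startofimage|>.
for _ in range(32):
    logits = model(ids)[:, -1, :]                # assumed output shape [batch, seq, vocab]
    next_id = torch.multinomial(torch.softmax(logits, dim=-1), 1)
    ids = torch.cat([ids, next_id], dim=1)

image = image_tokenizer.decode(ids[0, -32:])     # reconstruct the 256x256 image
```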


# Training Procedure
For training, we prompt the model to generate an image conditioned on its caption. A training sequence looks like: "a river has burst it 's banks and has spread out onto arable farmland alongside<|startofimage|><|image:2931|><|image:560|><|image:763|><|image:1539|><|image:3161|><|image:1997|><|image:3376|><|image:510|><|image:3036|><|image:1585|><|image:1853|><|image:1970|><|image:2687|><|image:1436|><|image:2213|><|image:3968|><|image:3999|><|image:877|><|image:725|><|image:3013|><|image:438|><|image:3159|><|image:2936|><|image:3003|><|image:2261|><|image:2137|><|image:3821|><|image:1513|><|image:3536|><|image:311|><|image:494|><|image:413|><|endofimage|>". We use the standard cross-entropy loss over these sequences, with the logits masked to the image-token vocabulary at image positions; this masking previously showed performance improvements for speech-to-text tasks, so we adopt it here. A sketch of the masking step follows below.
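A minimal sketch of that logit-masking step, assuming the 4,096 image tokens occupy a contiguous id range (the actual id layout depends on the trained BPE tokenizer):

```python
import torch

VOCAB_SIZE = 30000
IMAGE_TOKEN_START = 0     # assumed: image tokens occupy a contiguous id range
IMAGE_TOKEN_END = 4096    # exclusive; 4,096 image codes in total

def mask_to_image_vocab(logits: torch.Tensor) -> torch.Tensor:
    """Set logits of all non-image tokens to -inf so that only the
    4,096 image codes can be predicted at image positions."""
    mask = torch.full_like(logits, float("-inf"))
    mask[..., IMAGE_TOKEN_START:IMAGE_TOKEN_END] = 0.0
    return logits + mask

# Usage: at positions between <|startofimage|> and <|endofimage|>,
# apply the mask before computing the cross-entropy loss.
logits = torch.randn(2, 32, VOCAB_SIZE)                  # [batch, image positions, vocab]
targets = torch.randint(IMAGE_TOKEN_START, IMAGE_TOKEN_END, (2, 32))
loss = torch.nn.functional.cross_entropy(
    mask_to_image_vocab(logits).reshape(-1, VOCAB_SIZE),
    targets.reshape(-1),
)
```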


| Train Iter | hard rock artist performing music | football player during a match | concept vector illustration showing a flag | police officer and soldiers arrest military combatant | bird on a tree |
| ---- | ---- | ---- | ---- | ---- | ---- | 
| 5000 | ![](./assets/1.png) | ![](./assets/2.png) | ![](./assets/3.png) | ![](./assets/4.png) | ![](./assets/5.png) |
| 6000 | ![](./assets/6.png) | ![](./assets/7.png) | ![](./assets/8.png) | ![](./assets/9.png) | ![](./assets/10.png) |
| 7000 | ![](./assets/7_0.png) | ![](./assets/7_1.png) | ![](./assets/7_2.png) | ![](./assets/7_3.png) | ![](./assets/7_4.png) |
| 8000 | ![](./assets/8_0.png) | ![](./assets/8_1.png) | ![](./assets/8_2.png) | ![](./assets/8_3.png) | ![](./assets/8_4.png) |
| 9000 | ![](./assets/9_0.png) | ![](./assets/9_1.png) | ![](./assets/9_2.png) | ![](./assets/9_3.png) | ![](./assets/9_4.png) |
| 10000 | ![](./assets/10_0.png) | ![](./assets/10_1.png) | ![](./assets/10_2.png) | ![](./assets/10_3.png) | ![](./assets/10_4.png) |
| 11000 | ![](./assets/11_0.png) | ![](./assets/11_1.png) | ![](./assets/11_2.png) | ![](./assets/11_3.png) | ![](./assets/11_4.png) |