Update README.md
path: data/validation-*
---

## Experiments for training autoregressive models for text-to-image generation

This dataset is derived from [Conceptual Captions](https://huggingface.co/datasets/pixparse/cc3m-wds) (CC3M), which contains roughly 3.3M image-caption pairs.

For images we use the [1d-tokenizer](https://github.com/bytedance/1d-tokenizer) by [ByteDance](https://www.bytedance.com/en/), which tokenizes a 256 × 256 image into just 32 tokens while still achieving state-of-the-art reconstruction fidelity.
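To put those 32 tokens in perspective, here is a quick back-of-the-envelope comparison; the 16× spatial downsampling assumed for a conventional 2D VQ tokenizer is an illustrative baseline, not something stated in this README:

```python
# A conventional 2D VQ tokenizer with 16x spatial downsampling (assumed
# baseline) turns a 256 x 256 image into a 16 x 16 grid of tokens.
grid_tokens = (256 // 16) ** 2      # tokens per image for the 2D baseline
one_d_tokens = 32                   # the 1d-tokenizer's sequence length

print(grid_tokens, grid_tokens // one_d_tokens)  # 256 tokens vs an 8x shorter sequence
```

Shorter image sequences directly cut the context length an autoregressive model has to generate per image.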

For text we train a BPE-based tokenizer on the image captions, with the vocabulary size set to 30K: 4096 IDs were reserved to represent image tokens, 9 to represent special tokens, and the remaining 25895 for text.
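The three budgets add up to the full vocabulary. A minimal sketch of the split (the ordering of the ID ranges and the roles of the special tokens are assumptions for illustration; only the counts come from this README):

```python
# Vocabulary budget from the section above. Only the counts are from the
# README; the range ordering and special-token roles are assumed.
VOCAB_SIZE = 30_000
NUM_IMAGE_TOKENS = 4_096    # one ID per 1d-tokenizer codebook entry
NUM_SPECIAL_TOKENS = 9      # e.g. begin/end-of-image, padding (assumed roles)
NUM_TEXT_TOKENS = VOCAB_SIZE - NUM_IMAGE_TOKENS - NUM_SPECIAL_TOKENS

print(NUM_TEXT_TOKENS)  # 25895, the remaining budget for text
```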

## Training Procedure

For training, we prompt the model to generate an image from a caption such as: ""

| Train Iter | hard rock artist performing music | football player during a match | concept vector illustration showing a flag | police officer and soldiers arrest military combatant | bird on a tree |
| ---- | ---- | ---- | ---- | ---- | ---- |
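Under this setup, one training example could be assembled as caption tokens followed by the 32 image tokens. This is only a sketch: the special-token names, their placement, and the marker scheme are assumptions, not taken from this README:

```python
# Hypothetical assembly of one training sequence: caption tokens, a
# begin-of-image marker, the 32 image tokens, then an end-of-image marker.
# All special-token names here are assumed for illustration.
BOI, EOI = "<boi>", "<eoi>"

def build_sequence(caption_tokens, image_tokens):
    """Concatenate caption and image tokens into one autoregressive target."""
    assert len(image_tokens) == 32  # the 1d-tokenizer emits 32 tokens per image
    return caption_tokens + [BOI] + image_tokens + [EOI]

seq = build_sequence(["bird", "on", "a", "tree"], [f"<img_{i}>" for i in range(32)])
print(len(seq))  # 4 caption tokens + 2 markers + 32 image tokens = 38
```

At inference time the model would then be fed only the caption plus the begin-of-image marker and asked to continue the sequence.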