Update README.md
      path: data/validation-*
---
Experiments on training autoregressive models for text-to-image generation.
This dataset is derived from [conceptual captions](https://huggingface.co/datasets/google-research-datasets/conceptual_captions) (CC3M), which contains roughly 3.3M image and caption pairs.
For images we use [1d-tokenizer](https://github.com/bytedance/1d-tokenizer) by ByteDance, which tokenizes a 256 × 256 image into 32 tokens while still achieving state-of-the-art reconstruction fidelity.
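As a rough illustration of the interface (not the real 1d-tokenizer API, which is a learned ViT encoder with vector quantization), the sketch below uses NumPy with random, untrained weights just to show the shape contract: a 256 × 256 image goes in, 32 discrete codebook indices come out. All names and the codebook size here are hypothetical.

```python
import numpy as np

CODEBOOK_SIZE = 4096   # hypothetical; the real size depends on the checkpoint
NUM_TOKENS = 32        # the 1d-tokenizer compresses each image to 32 tokens

rng = np.random.default_rng(0)

# Stand-in "encoder": a fixed random projection from the flattened image to
# NUM_TOKENS scalar latents. The real model is a learned transformer encoder.
proj = rng.standard_normal((NUM_TOKENS, 256 * 256 * 3)).astype(np.float32) * 1e-3
codebook = rng.standard_normal((CODEBOOK_SIZE, 1)).astype(np.float32)

def tokenize_image(image: np.ndarray) -> np.ndarray:
    """Map a (256, 256, 3) image to 32 discrete token ids.

    Toy vector quantization: project to 32 scalars, then snap each one to
    its nearest codebook entry and return that entry's index.
    """
    assert image.shape == (256, 256, 3)
    latents = proj @ image.reshape(-1).astype(np.float32)      # (32,)
    dists = (latents[:, None] - codebook[None, :, 0]) ** 2     # (32, CODEBOOK_SIZE)
    return dists.argmin(axis=1)                                # (32,) token ids

tokens = tokenize_image(rng.random((256, 256, 3)).astype(np.float32))
print(tokens.shape)  # (32,)
```

The resulting 32 ids can be concatenated after the caption's text tokens to form one training sequence for the autoregressive model.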
For text we train a BPE-based tokenizer on the captions. For training we use the standard cross-entropy loss between the logits and the targets.
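The loss mentioned above is ordinary next-token cross-entropy; a minimal pure-Python version (function names are illustrative, not from this repo) makes the computation explicit:

```python
import math

def cross_entropy(logits, target):
    """Cross-entropy for one position: -log softmax(logits)[target]."""
    m = max(logits)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - logits[target]

def sequence_loss(logit_rows, targets):
    """Mean cross-entropy over a sequence, as in next-token training."""
    return sum(cross_entropy(row, t)
               for row, t in zip(logit_rows, targets)) / len(targets)

# Uniform logits over a 4-way vocabulary give a loss of ln(4) per token.
loss = sequence_loss([[0.0, 0.0, 0.0, 0.0]] * 3, [1, 2, 3])
print(round(loss, 4))  # 1.3863
```

In practice this is what a framework call like PyTorch's `torch.nn.functional.cross_entropy` computes over the logits and target token ids.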
| Train Iter | hard rock artist performing music | football player during a match | concept vector illustration showing a flag | police officer and soldiers arrest military combatant | bird on a tree |
| ---- | ---- | ---- | ---- | ---- | ---- |
| 5000 |  |  |  |  |  |