brook-park committed
Commit b7d609d · 1 Parent(s): c03b77b

Update README.md

Files changed (1): README.md (+84 -24)

README.md CHANGED
@@ -10,12 +10,7 @@ license:
  - cc-by-4.0
 multilinguality:
  - monolingual
- pretty_name: "**COYO** is a large-scale dataset that contains **image-text pairs**\
- \ as well as many other **meta-attributes** to increase the usability to train various\
- \ models. Our dataset follows the similar strategy in previous vision-and-language\
- \ datasets, collecting many informative pairs of alt-text and its associated image\
- \ in HTML documents. We expect COYO to be used to train popular large-scale foundation\
- \ models \ncomplementary to other similar datasets."
+ pretty_name: COYO-700M
 size_categories:
  - 100M<n<1B
 source_datasets:
@@ -25,12 +20,12 @@ tags:
 task_categories:
  - text-to-image
  - image-to-text
- - zero-shot-image-classification
+ - zero-shot-classification
 task_ids:
  - image-captioning
 ---

- # Dataset Card for [Dataset Name]
+ # Dataset Card for COYO-700M

 ## Table of Contents
 - [Table of Contents](#table-of-contents)
@@ -59,63 +54,128 @@ task_ids:
 ## Dataset Description

- - **Homepage:**
- - **Repository:**
+ - **Homepage:** [COYO homepage](https://kakaobrain.com/contents/?contentId=7eca73e3-3089-43cb-b701-332e8a1743fd)
+ - **Repository:** [COYO repository](https://github.kakaocorp.com/large-scale/coyo-dataset)
 - **Paper:**
 - **Leaderboard:**
- - **Point of Contact:**
+ - **Point of Contact:** [COYO email]([email protected])

 ### Dataset Summary

- [More Information Needed]
+ **COYO-700M** is a large-scale dataset containing **747M image-text pairs**, along with many **meta-attributes** that increase its usability for training various models. Our dataset follows a strategy similar to previous vision-and-language datasets: collecting many informative pairs of alt-text and the associated image from HTML documents. We expect COYO to be used to train popular large-scale foundation models, complementary to other similar datasets. For more details on the data acquisition process, please refer to the technical paper to be released later.

 ### Supported Tasks and Leaderboards

- [More Information Needed]
+ We empirically validated the quality of the COYO dataset by re-implementing popular models such as [ALIGN](https://arxiv.org/abs/2102.05918), [unCLIP](https://arxiv.org/abs/2204.06125), and [ViT](https://arxiv.org/abs/2010.11929).
+ We trained these models on COYO-700M or subsets of it from scratch, achieving performance competitive with the numbers or generated samples reported in the original papers.
+ Our pre-trained models and training code will be released soon along with the technical paper.

 ### Languages

- [More Information Needed]
+ The texts in the COYO-700M dataset are in English.

 ## Dataset Structure

 ### Data Instances

- [More Information Needed]
+ Each instance in COYO-700M represents a single image-text pair together with its meta-attributes:
+ ```
+ {
+   'id': 841814333321,
+   'url': 'https://blog.dogsof.com/wp-content/uploads/2021/03/Image-from-iOS-5-e1614711641382.jpg',
+   'text': 'A Pomsky dog sitting and smiling in field of orange flowers',
+   'width': 1000,
+   'height': 988,
+   'image_phash': 'c9b6a7d8469c1959',
+   'text_length': 59,
+   'word_count': 11,
+   'num_tokens_bert': 13,
+   'num_tokens_gpt': 12,
+   'num_faces': 0,
+   'clip_similarity_vitb32': 0.4296875,
+   'clip_similarity_vitl14': 0.35205078125,
+   'nsfw_score_opennsfw2': 0.00031447410583496094,
+   'nsfw_score_gantman': 0.03298913687467575,
+   'watermark_score': 0.1014641746878624,
+   'aesthetic_score_laion_v2': 5.435476303100586
+ }
+ ```
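Because the dataset is distributed as parquet shards, streaming is a convenient way to inspect a few instances like the one above. A minimal sketch with the `datasets` library, assuming the dataset is hosted under the Hub id `kakaobrain/coyo-700m`:

```python
# Stream a few samples without downloading all ~747M pairs.
# The Hub id "kakaobrain/coyo-700m" is an assumption, not stated in this card.
from itertools import islice

from datasets import load_dataset

ds = load_dataset("kakaobrain/coyo-700m", split="train", streaming=True)
for sample in islice(ds, 3):
    # Each record carries the url/text pair plus the meta-attributes below.
    print(sample["id"], sample["clip_similarity_vitb32"], sample["text"])
```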
 
 ### Data Fields

- [More Information Needed]
+ | name                     | type    | description                                                                                                                                                                                |
+ |--------------------------|---------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+ | id                       | long    | Unique 64-bit integer ID generated by [monotonically_increasing_id()](https://spark.apache.org/docs/3.1.3/api/python/reference/api/pyspark.sql.functions.monotonically_increasing_id.html) |
+ | url                      | string  | The image URL extracted from the `src` attribute of the `<img>` tag                                                                                                                        |
+ | text                     | string  | The text extracted from the `alt` attribute of the `<img>` tag                                                                                                                             |
+ | width                    | integer | The width of the image                                                                                                                                                                     |
+ | height                   | integer | The height of the image                                                                                                                                                                    |
+ | image_phash              | string  | The [perceptual hash (pHash)](http://www.phash.org/) of the image                                                                                                                          |
+ | text_length              | integer | The length of the text                                                                                                                                                                     |
+ | word_count               | integer | The number of words separated by spaces                                                                                                                                                    |
+ | num_tokens_bert          | integer | The number of tokens according to [BertTokenizer](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertTokenizer)                                                      |
+ | num_tokens_gpt           | integer | The number of tokens according to [GPT2TokenizerFast](https://huggingface.co/docs/transformers/model_doc/gpt2#transformers.GPT2TokenizerFast)                                              |
+ | num_faces                | integer | The number of faces in the image, as detected by [SCRFD](https://insightface.ai/scrfd)                                                                                                     |
+ | clip_similarity_vitb32   | float   | The cosine similarity between the text and image (ViT-B/32) embeddings from [OpenAI CLIP](https://github.com/openai/CLIP)                                                                  |
+ | clip_similarity_vitl14   | float   | The cosine similarity between the text and image (ViT-L/14) embeddings from [OpenAI CLIP](https://github.com/openai/CLIP)                                                                  |
+ | nsfw_score_opennsfw2     | float   | The NSFW score of the image according to [OpenNSFW2](https://github.com/bhky/opennsfw2)                                                                                                    |
+ | nsfw_score_gantman       | float   | The NSFW score of the image according to [GantMan/NSFW](https://github.com/GantMan/nsfw_model)                                                                                             |
+ | watermark_score          | float   | The watermark probability of the image according to our internal model                                                                                                                     |
+ | aesthetic_score_laion_v2 | float   | The aesthetic score of the image according to [LAION-Aesthetics-Predictor-V2](https://github.com/christophschuhmann/improved-aesthetic-predictor)                                          |
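Several of these attributes can be recomputed for a single pair. The sketch below assumes the `bert-base-uncased` and `gpt2` checkpoints and uses the `imagehash` package as a stand-in for the authors' pHash code; the card does not state the exact implementations behind the released values.

```python
# Recomputing a few text/image meta-attributes for one sample.
# Checkpoints and the imagehash package are assumptions, not the
# authors' confirmed tooling; "sample.jpg" is a placeholder file.
import imagehash
from PIL import Image
from transformers import BertTokenizer, GPT2TokenizerFast

bert_tok = BertTokenizer.from_pretrained("bert-base-uncased")
gpt2_tok = GPT2TokenizerFast.from_pretrained("gpt2")

text = "A Pomsky dog sitting and smiling in field of orange flowers"
meta = {
    "text_length": len(text),            # 59, as in the instance above
    "word_count": len(text.split()),     # 11
    "num_tokens_bert": len(bert_tok.tokenize(text)),
    "num_tokens_gpt": len(gpt2_tok.tokenize(text)),
    "image_phash": str(imagehash.phash(Image.open("sample.jpg"))),
}
print(meta)
```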
 
 ### Data Splits

- [More Information Needed]
+ The data was not split, since evaluation was expected to be performed on more widely used downstream tasks.

 ## Dataset Creation

 ### Curation Rationale

- [More Information Needed]
+ Similar to most vision-and-language datasets, our primary goal in the data creation process was to collect many pairs of alt-text and image sources from HTML documents crawled from the web. We therefore attempted to eliminate uninformative images and texts at minimal cost, and to improve the dataset's usability by adding various meta-attributes. Users can rely on these meta-attributes to sample a subset of COYO-700M and train the desired model on it. For instance, the *num_faces* attribute could be used to build a subset like *COYO-Faces* and develop a privacy-preserving generative model.
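As a concrete illustration of such meta-attribute subsetting, here is a minimal sketch with the `datasets` library; the Hub id `kakaobrain/coyo-700m` and both thresholds are assumptions rather than the authors' settings.

```python
# Sketch: carving a face-centric, safety-filtered subset by streaming
# and filtering on meta-attributes. Thresholds are illustrative only.
from datasets import load_dataset

ds = load_dataset("kakaobrain/coyo-700m", split="train", streaming=True)

coyo_faces = ds.filter(
    lambda ex: ex["num_faces"] > 0            # keep images containing faces
    and ex["nsfw_score_opennsfw2"] < 0.5      # apply a safety cut
)
```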
 
 ### Source Data

 #### Initial Data Collection and Normalization

- [More Information Needed]
+ We collected about 10 billion pairs of alt-text and image sources from HTML documents in [CommonCrawl](https://commoncrawl.org/) from Oct. 2020 to Aug. 2021, and eliminated uninformative pairs through image- and/or text-level filtering at minimal cost. A condensed code sketch of a few of these checks follows the lists below.
+
+ **Image Level**
+ * All image formats that the Pillow library can decode are included
+ * Images smaller than 5KB are dropped
+ * Images with an aspect ratio greater than 3.0 are dropped
+ * Images with min(width, height) < 200 are dropped
+ * Images are dropped if their [OpenNSFW2](https://github.com/yahoo/open_nsfw) or [GantMan/NSFW](https://github.com/GantMan/nsfw_model) score is higher than 0.5
+ * Based on the image [pHash](http://www.phash.org/) value, we removed all images that duplicate images in external public datasets
+   * ImageNet-1K/21K, Flickr-30K, MS-COCO, CC-3M, CC-12M
+
+ **Text Level**
+ * We collected only English text, identified with [cld3](https://github.com/google/cld3)
+ * Consecutive whitespace characters are replaced with a single space, and whitespace before and after the sentence is removed
+   * e.g. `"\n \n Load image into Gallery viewer, valentine&#39;s day roses\n \n" → "Load image into Gallery viewer, valentine&#39;s day roses"`
+ * Any text with a length of 5 or less is dropped
+ * Text that does not have a noun form is dropped
+ * Text with fewer than 3 words or more than 256 words, as well as any text over 1,000 words, is dropped
+ * All texts appearing more than 10 times are dropped
+   * e.g. `“thumbnail for”, “image for”, “picture of”`
+
+ **Image-Text Level**
+ * Samples duplicated on the (image_phash, text) pair are dropped
+   * Different text may exist for the same image URL
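Under the assumption that single-machine Pillow and regex code is a fair stand-in for the authors' web-scale pipeline, a condensed sketch of a few of the checks above:

```python
# Sketch of a few image- and text-level checks from the lists above.
# NSFW scoring, noun detection, and pHash deduplication are omitted.
import os
import re

from PIL import Image

def keep_image(path: str) -> bool:
    if os.path.getsize(path) < 5 * 1024:        # smaller than 5KB: drop
        return False
    try:
        width, height = Image.open(path).size   # any format Pillow can decode
    except Exception:
        return False
    if min(width, height) < 200:                # too small on the short side
        return False
    return max(width, height) / min(width, height) <= 3.0  # aspect ratio cap

def normalize_text(text: str) -> str:
    # Collapse consecutive whitespace and trim the ends.
    return re.sub(r"\s+", " ", text).strip()

def keep_text(text: str) -> bool:
    words = text.split()
    return len(text) > 5 and 3 <= len(words) <= 256
```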
 
 #### Who are the source language producers?

- [More Information Needed]
+ [Common Crawl](https://commoncrawl.org/) is the data source for COYO-700M.

 ### Annotations

 #### Annotation process

- [More Information Needed]
+ The dataset was built in a fully automated process that did not require human annotation.

 #### Who are the annotators?

- [More Information Needed]
+ No human annotators were involved.

 ### Personal and Sensitive Information
 
@@ -129,11 +189,11 @@ task_ids:

 ### Discussion of Biases

- [More Information Needed]
+ It will be described in a paper to be released soon.

 ### Other Known Limitations

- [More Information Needed]
+ It will be described in a paper to be released soon.

 ## Additional Information