pretty_name: Danbooru 2025 Metadata
size_categories:
- 1M<n<10M
---

# Dataset Card for Danbooru 2025 Metadata
**Latest Post ID**: 8,877,698 (as of February 18, 2025)
This repository provides a comprehensive, up-to-date metadata dump for Danbooru. The metadata was freshly scraped starting January 2, 2025, and offers more extensive tag annotations for older posts, fewer errors, and fewer unlabeled AI-generated images than previous scrapes.
## Dataset Details
**Overview**
Danbooru is a well-known imageboard focused on anime-style artwork, hosting millions of user-submitted images with extensive tagging. This dataset offers metadata (in Parquet format) for every post up to the latest post ID above (February 18, 2025), including details such as tags, upload timestamps, scores, and file properties.
**Key Advantages**
- **Consolidated Metadata**: All available metadata is contained within this single dataset, eliminating the need to merge multiple partial scrapes.
- **Improved Tag Accuracy**: Historical tag renames and additions are accurately reflected, reducing the potential mismatch or redundancy often found in older metadata dumps.
- **Less AI Noise**: Compared to many legacy scrapes, the 2025 data incorporates updated annotations and filters out many unlabeled AI-generated images.
## Usage
You can load and filter this dataset using the Hugging Face `datasets` library:
```python
from datasets import load_dataset
danbooru_metadata = load_dataset("trojblue/danbooru2025-metadata", split="train")
df = danbooru_metadata.to_pandas()
```
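
The loaded split behaves like any other `datasets` object, so you can also filter rows before converting to pandas. A minimal sketch (the score threshold and meta tag are arbitrary illustrations, not recommendations):

```python
# Keep only well-scored posts carrying the highres meta tag (illustrative criteria)
subset = danbooru_metadata.filter(
    lambda post: post["score"] >= 50 and "highres" in post["tag_string_meta"].split()
)
df_subset = subset.to_pandas()
```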
This metadata can be used for research, indexing, or as a foundation for building image-based machine learning pipelines. However, please be mindful of any copyright, content, or platform-specific policies.
## Dataset Structure
The metadata schema closely follows Danbooru’s JSON structure, so it should be familiar to anyone who has used other Danbooru scrapes. Below are the main columns:
```
Index([
    'approver_id', 'bit_flags', 'created_at', 'down_score', 'fav_count',
    'file_ext', 'file_size', 'file_url', 'has_active_children', 'has_children',
    'has_large', 'has_visible_children', 'id', 'image_height', 'image_width',
    'is_banned', 'is_deleted', 'is_flagged', 'is_pending', 'large_file_url',
    'last_comment_bumped_at', 'last_commented_at', 'last_noted_at', 'md5',
    'media_asset_created_at', 'media_asset_duration', 'media_asset_file_ext',
    'media_asset_file_key', 'media_asset_file_size', 'media_asset_id',
    'media_asset_image_height', 'media_asset_image_width',
    'media_asset_is_public', 'media_asset_md5', 'media_asset_pixel_hash',
    'media_asset_status', 'media_asset_updated_at', 'media_asset_variants',
    'parent_id', 'pixiv_id', 'preview_file_url', 'rating', 'score', 'source',
    'tag_count', 'tag_count_artist', 'tag_count_character',
    'tag_count_copyright', 'tag_count_general', 'tag_count_meta', 'tag_string',
    'tag_string_artist', 'tag_string_character', 'tag_string_copyright',
    'tag_string_general', 'tag_string_meta', 'up_score', 'updated_at',
    'uploader_id'
], dtype='object')
```
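
Since the `tag_string*` columns are space-delimited, they can be split into Python lists for analysis. A small sketch computing tag frequencies (assuming `df` from the Usage section above):

```python
# Space-delimited tag strings -> one row per tag, then count occurrences
top_general = (
    df["tag_string_general"]
    .str.split()
    .explode()
    .value_counts()
    .head(20)
)
print(top_general)
```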
## Dataset Creation
**Scraping Process**
- Post IDs from 1 to the latest ID (8,877,698) were retrieved using a distributed scraping approach.
- Certain restricted tags (e.g., `loli`) are inaccessible without special permissions and are therefore absent in this dataset.
- If you require more comprehensive metadata (including hidden or restricted tags), consider merging this data with older scrapes such as Danbooru2021, as sketched below.
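
If you do combine scrapes, here is a minimal pandas sketch, assuming both dumps have been flattened to the same schema with an `id` column (the local file names are hypothetical):

```python
import pandas as pd

# Hypothetical local copies of the two metadata dumps
df_2025 = pd.read_parquet("danbooru2025-metadata.parquet")
df_2021 = pd.read_parquet("danbooru2021-metadata.parquet")

# Prefer the fresher 2025 rows whenever a post ID appears in both scrapes
merged = (
    pd.concat([df_2025, df_2021])
    .drop_duplicates(subset="id", keep="first")
    .sort_values("id")
)
```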
Below is a simplified example of how the raw JSON was converted into a flattened Parquet file:
```python
import pandas as pd
from pandarallel import pandarallel

pandarallel.initialize(nb_workers=4, progress_bar=True)

def flatten_dict(d, parent_key='', sep='_'):
    """Recursively flattens a nested dictionary."""
    items = []
    for k, v in d.items():
        new_key = f"{parent_key}{sep}{k}" if parent_key else k
        if isinstance(v, dict):
            # Recurse into nested dicts, prefixing child keys with the parent key
            items.extend(flatten_dict(v, new_key, sep=sep).items())
        else:
            items.append((new_key, v))
    return dict(items)

def extract_all_illust_info(json_content):
    """Parses and flattens Danbooru JSON into a pandas Series."""
    flattened_data = flatten_dict(json_content)
    return pd.Series(flattened_data)

def dicts_to_dataframe_parallel(dicts):
    """Converts a list of dicts to a flattened DataFrame using pandarallel."""
    df = pd.DataFrame(dicts)
    flattened_df = df.parallel_apply(lambda row: extract_all_illust_info(row.to_dict()), axis=1)
    return flattened_df
```
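
Hypothetically, given a list of post JSON objects from the scrape, the conversion and export would be invoked like this (the input data and output path are illustrative):

```python
# A tiny illustrative input; the real scrape yields one dict per post
posts = [
    {"id": 1, "score": 10, "media_asset": {"file_ext": "jpg", "file_size": 123}},
    {"id": 2, "score": 3, "media_asset": {"file_ext": "png", "file_size": 456}},
]

df = dicts_to_dataframe_parallel(posts)
df.to_parquet("danbooru2025-metadata.parquet")
```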
## Considerations & Recommendations
- **Adult/NSFW Content**: Danbooru includes adult imagery and explicit tags. Exercise caution, especially if sharing or using this data in public-facing contexts; see the rating-filter sketch after this list.
- **Licensing & Copyright**: Images referenced by this metadata may be copyrighted. Refer to Danbooru’s Terms of Service and respect artists’ rights.
- **Potential Bias**: Tags are community-curated and can reflect the inherent biases of the user base (e.g., under- or over-tagging certain categories).
- **Missing or Restricted Tags**: Some tags require special permissions on Danbooru; hence they do not appear in this dataset.
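
As a concrete example of rating-based filtering: Danbooru's `rating` column uses single-letter codes ('g' general, 's' sensitive, 'q' questionable, 'e' explicit), so a coarse safety filter might look like this sketch (reusing `df` from the Usage section; adapt the allowed set to your own policy):

```python
# Keep only posts rated general or sensitive (coarse, not a guarantee of safety)
sfw_df = df[df["rating"].isin(["g", "s"])]
```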
For further integration of historical data, consider merging with previous Danbooru scrapes.
If you use this dataset in research or production, please cite appropriately and abide by all relevant terms and conditions.
------
*Last Updated: February 18, 2025*