---
dataset_info:
  features:
  - name: file_url
    dtype: string
  - name: approver_id
    dtype: float64
  - name: bit_flags
    dtype: int64
  - name: created_at
    dtype: string
  - name: down_score
    dtype: int64
  - name: fav_count
    dtype: int64
  - name: file_ext
    dtype: string
  - name: file_size
    dtype: int64
  - name: has_active_children
    dtype: bool
  - name: has_children
    dtype: bool
  - name: has_large
    dtype: bool
  - name: has_visible_children
    dtype: bool
  - name: image_height
    dtype: int64
  - name: image_width
    dtype: int64
  - name: is_banned
    dtype: bool
  - name: is_deleted
    dtype: bool
  - name: is_flagged
    dtype: bool
  - name: is_pending
    dtype: bool
  - name: large_file_url
    dtype: string
  - name: last_comment_bumped_at
    dtype: string
  - name: last_commented_at
    dtype: string
  - name: last_noted_at
    dtype: string
  - name: md5
    dtype: string
  - name: media_asset_created_at
    dtype: string
  - name: media_asset_duration
    dtype: float64
  - name: media_asset_file_ext
    dtype: string
  - name: media_asset_file_key
    dtype: string
  - name: media_asset_file_size
    dtype: int64
  - name: media_asset_id
    dtype: int64
  - name: media_asset_image_height
    dtype: int64
  - name: media_asset_image_width
    dtype: int64
  - name: media_asset_is_public
    dtype: bool
  - name: media_asset_md5
    dtype: string
  - name: media_asset_pixel_hash
    dtype: string
  - name: media_asset_status
    dtype: string
  - name: media_asset_updated_at
    dtype: string
  - name: media_asset_variants
    dtype: string
  - name: parent_id
    dtype: float64
  - name: pixiv_id
    dtype: float64
  - name: preview_file_url
    dtype: string
  - name: rating
    dtype: string
  - name: score
    dtype: int64
  - name: source
    dtype: string
  - name: tag_count
    dtype: int64
  - name: tag_count_artist
    dtype: int64
  - name: tag_count_character
    dtype: int64
  - name: tag_count_copyright
    dtype: int64
  - name: tag_count_general
    dtype: int64
  - name: tag_count_meta
    dtype: int64
  - name: tag_string
    dtype: string
  - name: tag_string_artist
    dtype: string
  - name: tag_string_character
    dtype: string
  - name: tag_string_copyright
    dtype: string
  - name: tag_string_general
    dtype: string
  - name: tag_string_meta
    dtype: string
  - name: up_score
    dtype: int64
  - name: updated_at
    dtype: string
  - name: uploader_id
    dtype: int64
  - name: id
    dtype: int64
  splits:
  - name: train
    num_bytes: 20592115226
    num_examples: 8835689
  download_size: 7359040645
  dataset_size: 20592115226
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
task_categories:
- text-to-image
- image-classification
language:
- en
- ja
pretty_name: Danbooru 2025 Metadata
size_categories:
- 1M<n<10M
---

# Dataset Card for Danbooru 2025 Metadata

**Current max id: `8,877,698` (Feb. 18, 2025)**

This dataset repo provides comprehensive, up-to-date metadata for the Danbooru imageboard. All metadata was freshly scraped starting on January 2, 2025, so older posts carry more extensive tag annotations, there are fewer errors, and unlabelled AI-generated images appear less often than in earlier dumps.

## Dataset Details

**What is this?**  
A refreshed, Parquet-formatted metadata dump of Danbooru, current as of Feb. 18, 2025.


**Why choose this over other Danbooru scrapes?**

The dataset includes all available metadata in one place, eliminating the need to gather and merge data from multiple sources manually.

It features more annotations and fewer untagged or mislabeled AI-generated images than older scrapes. Additionally, historical tag renames and additions are accurately reflected, ensuring easier and more reliable downstream use.


## Uses

The dataset can be loaded or filtered with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the full train split, then convert to pandas for analysis
danbooru_dataset = load_dataset("trojblue/danbooru2025-metadata", split="train")
df = danbooru_dataset.to_pandas()
```
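
Filtering can also happen before the pandas conversion, continuing from the snippet above. A minimal sketch using `Dataset.filter` (the score threshold of 10 is an arbitrary illustration):

```python
# Keep only posts that are not deleted and have a positive score
filtered = danbooru_dataset.filter(
    lambda x: not x["is_deleted"] and x["score"] > 10
)
print(len(filtered))
```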


## Dataset Structure

For better compatibility, the columns are converted from the Danbooru JSON responses with minimal changes:

```
Index(['approver_id', 'bit_flags', 'created_at', 'down_score', 'fav_count',
       'file_ext', 'file_size', 'file_url', 'has_active_children',
       'has_children', 'has_large', 'has_visible_children', 'id',
       'image_height', 'image_width', 'is_banned', 'is_deleted', 'is_flagged',
       'is_pending', 'large_file_url', 'last_comment_bumped_at',
       'last_commented_at', 'last_noted_at', 'md5', 'media_asset_created_at',
       'media_asset_duration', 'media_asset_file_ext', 'media_asset_file_key',
       'media_asset_file_size', 'media_asset_id', 'media_asset_image_height',
       'media_asset_image_width', 'media_asset_is_public', 'media_asset_md5',
       'media_asset_pixel_hash', 'media_asset_status',
       'media_asset_updated_at', 'media_asset_variants', 'parent_id',
       'pixiv_id', 'preview_file_url', 'rating', 'score', 'source',
       'tag_count', 'tag_count_artist', 'tag_count_character',
       'tag_count_copyright', 'tag_count_general', 'tag_count_meta',
       'tag_string', 'tag_string_artist', 'tag_string_character',
       'tag_string_copyright', 'tag_string_general', 'tag_string_meta',
       'up_score', 'updated_at', 'uploader_id'],
      dtype='object')
```

<div>
<style scoped>
    .dataframe tbody tr th:only-of-type {
        vertical-align: middle;
    }

    .dataframe tbody tr th {
        vertical-align: top;
    }

    .dataframe thead th {
        text-align: right;
    }
</style>
<table border="1" class="dataframe">
  <thead>
    <tr style="text-align: right;">
      <th></th>
      <th>approver_id</th>
      <th>bit_flags</th>
      <th>created_at</th>
      <th>down_score</th>
      <th>fav_count</th>
      <th>file_ext</th>
      <th>file_size</th>
      <th>file_url</th>
      <th>has_active_children</th>
      <th>has_children</th>
      <th>...</th>
      <th>tag_count_meta</th>
      <th>tag_string</th>
      <th>tag_string_artist</th>
      <th>tag_string_character</th>
      <th>tag_string_copyright</th>
      <th>tag_string_general</th>
      <th>tag_string_meta</th>
      <th>up_score</th>
      <th>updated_at</th>
      <th>uploader_id</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th>0</th>
      <td>NaN</td>
      <td>0</td>
      <td>2015-08-07T23:23:45.072-04:00</td>
      <td>0</td>
      <td>66</td>
      <td>jpg</td>
      <td>4134797</td>
      <td>https://cdn.donmai.us/original/a1/b3/a1b3d0fa9...</td>
      <td>False</td>
      <td>False</td>
      <td>...</td>
      <td>3</td>
      <td>1girl absurdres ass bangle bikini black_bikini...</td>
      <td>kyouka.</td>
      <td>marie_(splatoon)</td>
      <td>splatoon_(series) splatoon_1</td>
      <td>1girl ass bangle bikini black_bikini blush bra...</td>
      <td>absurdres commentary_request highres</td>
      <td>15</td>
      <td>2024-06-25T15:32:44.291-04:00</td>
      <td>420773</td>
    </tr>
    <tr>
      <th>1</th>
      <td>NaN</td>
      <td>0</td>
      <td>2008-03-05T01:52:28.194-05:00</td>
      <td>0</td>
      <td>7</td>
      <td>jpg</td>
      <td>380323</td>
      <td>https://cdn.donmai.us/original/d6/10/d6107a13b...</td>
      <td>False</td>
      <td>False</td>
      <td>...</td>
      <td>2</td>
      <td>1girl aqua_hair bad_id bad_pixiv_id guitar hat...</td>
      <td>shimeko</td>
      <td>hatsune_miku</td>
      <td>vocaloid</td>
      <td>1girl aqua_hair guitar instrument long_hair so...</td>
      <td>bad_id bad_pixiv_id</td>
      <td>4</td>
      <td>2018-01-23T00:32:10.080-05:00</td>
      <td>1309</td>
    </tr>
    <tr>
      <th>2</th>
      <td>85307.0</td>
      <td>0</td>
      <td>2015-08-07T23:26:12.355-04:00</td>
      <td>0</td>
      <td>10</td>
      <td>jpg</td>
      <td>208409</td>
      <td>https://cdn.donmai.us/original/a1/2c/a12ce629f...</td>
      <td>False</td>
      <td>False</td>
      <td>...</td>
      <td>1</td>
      <td>1boy 1girl blush boots carrying closed_eyes co...</td>
      <td>yuuryuu_nagare</td>
      <td>jon_(pixiv_fantasia_iii) race_(pixiv_fantasia)</td>
      <td>pixiv_fantasia pixiv_fantasia_3</td>
      <td>1boy 1girl blush boots carrying closed_eyes da...</td>
      <td>commentary_request</td>
      <td>3</td>
      <td>2022-05-25T02:26:06.588-04:00</td>
      <td>95963</td>
    </tr>
  </tbody>
</table>
</div>
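
As the sample rows above show, tag columns such as `tag_string` hold space-separated tags. A quick sketch of splitting them into lists, assuming the `df` from the loading example above:

```python
# tag_string holds space-separated tags; split into Python lists
df["tags"] = df["tag_string"].str.split()

# e.g. count how many posts carry the "1girl" tag
print(df["tags"].apply(lambda tags: "1girl" in tags).sum())
```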


## Dataset Creation

We scraped all post IDs on Danbooru, from 1 up to the latest. Some restricted tags (e.g. `loli`) are hidden by the site and require a Gold account to access, so posts carrying them are not present.  
For a more complete (but older) metadata reference, you may wish to combine this with Danbooru2021 or similar previous scrapes.

The scraping process used a pool of roughly 400 IPs and completed in about six hours, so tag definitions are consistent across the whole snapshot. Below is a simplified example of the process used to flatten the metadata before writing it to Parquet:

```python
import pandas as pd
from pandarallel import pandarallel

# Initialize pandarallel
pandarallel.initialize(nb_workers=4, progress_bar=True)

def flatten_dict(d, parent_key='', sep='_'):
    """
    Flattens a nested dictionary,
    e.g. {"media_asset": {"id": 1}} -> {"media_asset_id": 1}.
    """
    items = []
    for k, v in d.items():
        new_key = f"{parent_key}{sep}{k}" if parent_key else k
        if isinstance(v, dict):
            # Recurse into nested dicts, prefixing keys with the parent name
            items.extend(flatten_dict(v, new_key, sep=sep).items())
        elif isinstance(v, list):
            # Serialize lists (e.g. media asset variants) as comma-separated strings
            items.append((new_key, ', '.join(map(str, v))))
        else:
            items.append((new_key, v))
    return dict(items)

def extract_all_illust_info(json_content):
    """
    Parses and flattens Danbooru JSON into a pandas Series.
    """
    flattened_data = flatten_dict(json_content)
    return pd.Series(flattened_data)

def dicts_to_dataframe_parallel(dicts):
    """
    Converts a list of dicts to a flattened DataFrame using pandarallel.
    """
    df = pd.DataFrame(dicts)
    flattened_df = df.parallel_apply(lambda row: extract_all_illust_info(row.to_dict()), axis=1)
    return flattened_df
```
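
The end of the pipeline would then write the flattened frame to Parquet. A hypothetical sketch (the placeholder post and output path are illustrative, not the actual scraper output):

```python
# `posts` stands in for the list of raw Danbooru JSON dicts collected by the scraper
posts = [{"id": 1, "media_asset": {"id": 1, "variants": []}}]  # placeholder

flattened_df = dicts_to_dataframe_parallel(posts)
flattened_df.to_parquet("data/train-00000.parquet", index=False)  # requires pyarrow
```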


### Recommendations

Users should be aware of potential biases and limitations, including the presence of adult content. The `rating` column and the `is_deleted`/`is_banned` flags can be used to filter posts before downstream use; additional mitigations may be needed depending on the application.
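
For instance, a minimal sketch of excluding explicit content, assuming Danbooru's standard rating codes (`g` general, `s` sensitive, `q` questionable, `e` explicit) and the `df` from the loading example:

```python
# Keep only general- and sensitive-rated posts that are still live on the site
sfw_df = df[df["rating"].isin(["g", "s"]) & ~df["is_deleted"] & ~df["is_banned"]]
```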