Update README.md
README.md (CHANGED)

---
license: cc-by-4.0
task_categories:
- text-retrieval
- text-classification
- feature-extraction
language:
- ja
- en
pretty_name: "TakaraSpider Japanese Web Crawl Dataset"
size_categories:
- 100K<n<1M
tags:
- web-crawl
- japanese
- multilingual
- html
- text-extraction
- nlp
- cross-cultural
dataset_info:
  features:
  - name: crawl_id
    dtype: string
  - name: timestamp
    dtype: timestamp[ns, tz=UTC]
  - name: url
    dtype: string
  - name: source_url
    dtype: string
  - name: html
    dtype: string
  config_name: default
  data_files:
  - split: train
    path: "data/train-*"
  default: true
configs:
- config_name: default
  data_files:
  - split: train
    path: "data/train-*"
---

# TakaraSpider Japanese Web Crawl Dataset

## Dataset Summary

TakaraSpider is a large-scale web crawl dataset specifically designed to capture Japanese web content alongside international sources. The dataset contains **257,900 web pages** collected through systematic crawling, with a primary focus on Japanese-language content (78.5%) while maintaining substantial international representation (21.5%). This makes it ideal for Japanese-English comparative studies, cross-cultural web analysis, and multilingual NLP research.

The dataset was generated by the TakaraSpider crawler, which was engineered to capture high-quality Japanese web content while maintaining broad international coverage.

## Languages

- **Japanese (ja)**: 78.5% of content - Primary focus with rich representation
- **English (en)**: 5.3% of content - International perspective
- **Other/Unknown**: 16.2% of content - Diverse multilingual representation

![Language Distribution]

### Data Fields

- **`crawl_id`** (string): Unique identifier for each crawl session
- **`timestamp`** (timestamp): ISO 8601 formatted crawl timestamp with timezone
- **`url`** (string): Target URL that was crawled
- **`source_url`** (string): Referring/source URL (when available)
- **`html`** (string): Complete raw HTML content of the page
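
To make the schema concrete, here is a minimal sketch of inspecting one record with the `datasets` library; streaming avoids downloading the full archive, and the comments describe expected field contents rather than actual values.

```python
from datasets import load_dataset

# Stream a single record to inspect the schema without a full download
ds = load_dataset("takarajordan/takaraspider", split="train", streaming=True)
record = next(iter(ds))

print(record["crawl_id"])     # crawl-session identifier
print(record["timestamp"])    # timezone-aware crawl time
print(record["url"])          # the crawled URL
print(record["source_url"])   # referring URL, when available
print(len(record["html"]))    # size of the raw HTML in characters
```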

### Data Splits

| Split | Examples |
| ----- | -------- |
| train | 257,900  |

## Dataset Creation

The data was collected through systematic web crawling using the TakaraSpider crawler, which was designed to:

- Prioritize Japanese (.jp) domains while maintaining international diversity
- Capture complete HTML content with metadata
- Ensure broad domain coverage (10,590+ unique domains)
- Maintain crawl provenance through unique session IDs (see the sketch below)
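
Purely as an illustration of the prioritization described in the list above (this is not the actual TakaraSpider code, and every name here is hypothetical), a crawl frontier that favors .jp hosts while recording provenance might look like:

```python
import heapq
import itertools
import uuid
from urllib.parse import urlparse

def priority(url: str) -> int:
    """Rank .jp hosts ahead of everything else."""
    return 0 if urlparse(url).netloc.endswith(".jp") else 1

crawl_id = str(uuid.uuid4())   # one provenance ID per crawl session
order = itertools.count()      # tie-breaker so the heap never compares URL metadata
frontier: list = []

def enqueue(url: str, source_url=None) -> None:
    # source_url records the referring page, mirroring the dataset's schema
    heapq.heappush(frontier, (priority(url), next(order), url, source_url))

enqueue("https://example.com/")
enqueue("https://example.jp/")
_, _, url, source = heapq.heappop(frontier)
print(crawl_id, url, source)   # the .jp page is dequeued first
```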

#### Who are the source language producers?

The source content represents natural web usage across:

- **Japanese web users**: Content creators, bloggers, businesses, news organizations
- **International web users**: Global content accessible to Japanese audiences
- **Mixed demographics**: Spanning individual users to large organizations

### Social Impact of Dataset

**Positive Impacts:**

- Enables Japanese NLP research and development
- Supports cross-cultural digital humanities research
- Facilitates web technology development and benchmarking
- Promotes understanding of Japanese digital culture

**Potential Concerns:**

- May contain biased content reflecting web demographics
- Temporal snapshot may not represent evolving web trends
- Domain concentration could skew research findings

### Discussion of Biases

**Identified Biases:**

1. **Geographic Bias**: 50.9% Japanese domains may not represent global web diversity
2. **Temporal Bias**: Single-day crawl (June 13, 2025) captures a specific moment in time
3. **Domain Concentration**: Top 10 domains represent 13.4% of the dataset (improved diversity)
4. **Language Detection**: 15.9% of content lacks a confident language label
5. **Content Type Skew**: Structured webpages (64.1%) are over-represented

**Mitigation Strategies:**

- Clearly document dataset composition and limitations
- Encourage diverse evaluation across content types
- Recommend supplementary datasets for global research

### Licensing Information

This dataset is released under the **Creative Commons Attribution 4.0 International License (CC-BY-4.0)**. Users are free to:

- Share and redistribute the material
- Adapt, remix, transform, and build upon the material
- Use for any purpose, including commercial applications

### Data Quality Metrics

| Metric                    | Value | Description                                      |
| ------------------------- | ----- | ------------------------------------------------ |
| **Duplicate URLs**        | 0.0%  | No duplicate URLs detected in sample             |
| **Content Completeness**  | 99%+  | HTML content available for virtually all records |
| **Metadata Completeness** | 100%  | All required fields populated                    |
| **Average Content Size**  | 198KB | Substantial content per page                     |
| **Domain Diversity**      | 0.205 | Unique domains per page in the analyzed sample   |
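
These sample-level metrics are straightforward to recompute. The sketch below re-derives the duplicate-URL rate, completeness, average size, and domain-diversity ratio from a small streamed slice; the 1,000-record sample size is an arbitrary choice for this example (the card's numbers come from a 51,580-record subset), so expect the values to differ.

```python
from urllib.parse import urlparse

from datasets import load_dataset

ds = load_dataset("takarajordan/takaraspider", split="train", streaming=True)
sample = [record for _, record in zip(range(1000), ds)]

urls = [r["url"] for r in sample]
domains = {urlparse(u).netloc for u in urls}

dup_rate = 1 - len(set(urls)) / len(urls)                          # duplicate URLs
completeness = sum(bool(r["html"]) for r in sample) / len(sample)  # HTML present
avg_kb = sum(len(r["html"]) for r in sample) / len(sample) / 1024  # mean page size
diversity = len(domains) / len(sample)                             # domains per page

print(f"duplicates={dup_rate:.1%}  html_present={completeness:.1%}")
print(f"avg_size={avg_kb:.0f}KB  domain_diversity={diversity:.3f}")
```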

## Getting Started

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("takarajordan/takaraspider")

# Or stream for memory efficiency
dataset = load_dataset("takarajordan/takaraspider", streaming=True)

# Sample for testing (one option: slice the train split)
sample = load_dataset("takarajordan/takaraspider", split="train[:100]")
```
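
The card also includes a URL-analysis one-liner for extracting domains, `domains = [urlparse(url).netloc for url in dataset["train"]['url']]`. A self-contained version, with a `Counter`-based frequency summary added here purely as an illustration, might read:

```python
from collections import Counter
from urllib.parse import urlparse

from datasets import load_dataset

dataset = load_dataset("takarajordan/takaraspider")

# Extract the host of every crawled URL (the one-liner from the card)
domains = [urlparse(url).netloc for url in dataset["train"]["url"]]

# Summarize crawl concentration (illustrative addition)
counts = Counter(domains)
print(f"{len(counts):,} unique domains across {len(domains):,} pages")
for domain, pages in counts.most_common(10):
    print(f"{domain}: {pages:,} pages")
```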

Complete analytics and visualizations are available in the `analytics_output/` directory:

- **Domain Distribution**: Top domains by page count
- **Geographic Analysis**: TLD-based geographic distribution
- **Content Analysis**: Size distribution and content types
- **Language Breakdown**: Detailed language detection results
- **URL Structure**: Path depth and navigation patterns

---

_This dataset card was generated using comprehensive analytics based on a 51,580-sample representative subset (20% of full dataset). Last updated: June 18, 2025._