---
license: odc-by
task_categories:
- text-generation
language:
- id
- vi
- th
- ta
- tl
- ms
- my
- km
- lo
tags:
- language-modeling
size_categories:
- 100B<n<1T
---

# SEA-PILE v2

SEA-PILE v2 is a large, multilingual language modelling dataset of approximately 120 billion tokens, sourced from a diverse array of web content.

**Languages supported:** Vietnamese, Bahasa Indonesia, Tamil, Malay, Thai, Tagalog, Khmer, Lao, Burmese

## Summary Statistics

The total number of tokens in the dataset was calculated using the Gemma 3 tokenizer.

| **Language** | **ISO 639-1 Code** | **Total Number of Tokens (Billions)** | **Percentage** |
|--|:--:|:--:|:--:|
| Vietnamese | vi | 51.4 | 42.13% |
| Bahasa Indonesia | id | 41.9 | 34.34% |
| Tamil | ta | 9.3 | 7.62% |
| Malay | ms | 9.3 | 7.62% |
| Thai | th | 6.5 | 5.33% |
| Tagalog | tl | 2.2 | 1.80% |
| Khmer | km | 0.6 | 0.49% |
| Lao | lo | 0.6 | 0.49% |
| Burmese | my | 0.2 | 0.16% |
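
The percentages are each language's share of the 122.0-billion-token total implied by the counts above. A quick check, with the figures copied directly from the table:

```python
# Token counts in billions, copied from the table above.
tokens = {
    "vi": 51.4, "id": 41.9, "ta": 9.3, "ms": 9.3, "th": 6.5,
    "tl": 2.2, "km": 0.6, "lo": 0.6, "my": 0.2,
}

total = sum(tokens.values())  # ~122.0 billion tokens
shares = {lang: round(100 * n / total, 2) for lang, n in tokens.items()}
# shares["vi"] -> 42.13, matching the table.
```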
> **Note:** We are currently releasing only a portion of the dataset, with expansions planned. Future releases will primarily add more tokens for the Southeast Asian languages already covered and incorporate additional languages, including Javanese and Sundanese.

## Data Pipeline

This dataset was created by extracting text from 24 CommonCrawl snapshots, ranging from CC-MAIN-2020-45 to CC-MAIN-2024-18. To ensure uniqueness, we deduplicated within each snapshot, following the strategy outlined by CCNet. We also applied heuristic quality filters and perplexity scoring, drawing on methodologies from Sailor and RedPajama v2. These techniques were developed in collaboration with native speakers to ensure cultural nuances are accurately captured.
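
To illustrate the deduplication step, the sketch below performs CCNet-style exact paragraph deduplication. It is not the actual pipeline code; the function name and light normalisation are our own:

```python
import hashlib

def dedup_paragraphs(docs):
    """Drop paragraphs whose normalised text has already been seen."""
    seen = set()
    deduped = []
    for doc in docs:
        kept = []
        for para in doc.split("\n"):
            # Normalise lightly before hashing so trivial variants collide.
            digest = hashlib.sha1(para.strip().lower().encode("utf-8")).digest()
            if digest not in seen:
                seen.add(digest)
                kept.append(para)
        if kept:
            deduped.append("\n".join(kept))
    return deduped
```

In the real pipeline, deduplication is applied within each CommonCrawl snapshot rather than across the whole corpus.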

## Download

To load this data using Hugging Face's `datasets` library, you can use the following code:

```python
from datasets import load_dataset

seapilev2 = load_dataset("aisingapore/sea-pile-v2", "<ISO 639-1 code>")
```

For example, if you would like to download the Vietnamese data, you can specify the ISO 639-1 code (`vi`) as shown below:

```python
from datasets import load_dataset

seapilev2_vi = load_dataset("aisingapore/sea-pile-v2", "vi")
```

If you wish to download all available configurations at once, you can use the following approach:

```python
from datasets import load_dataset

languages = ["vi", "id", "ta", "ms", "th", "tl", "km", "lo", "my"]

seapilev2 = {}
for language in languages:
    seapilev2[language] = load_dataset("aisingapore/sea-pile-v2", language)
```
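
Because the larger configurations run to tens of gigabytes, streaming may be more practical than a full download. The helper below is our own sketch using the `datasets` streaming API (`streaming=True`), not part of the dataset itself:

```python
LANGUAGES = ["vi", "id", "ta", "ms", "th", "tl", "km", "lo", "my"]

def stream_language(language: str):
    """Iterate over one language config without downloading it in full."""
    if language not in LANGUAGES:
        raise ValueError(f"unknown config: {language}")
    # Imported here so the helper can be defined without `datasets` installed.
    from datasets import load_dataset
    # streaming=True yields examples lazily instead of materialising
    # the parquet shards on disk first.
    return load_dataset("aisingapore/sea-pile-v2", language, streaming=True)["train"]
```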

## Limitations

Despite our best efforts to filter out undesirable (i.e. NSFW, toxic, and biased) content and personally identifiable information (PII), some documents containing harmful, toxic, or private content may still pass through our pipeline. We are committed to continuously improving our filtering processes to minimise these occurrences.

## License

This dataset is made available under the [ODC-By 1.0](https://opendatacommons.org/licenses/by/1-0/) license; users should also abide by the [CommonCrawl ToU](https://commoncrawl.org/terms-of-use/).

## BibTeX

If you use our dataset, please cite us:

```bibtex
@misc{2504.05747,
  Title = {SEA-LION: Southeast Asian Languages in One Network},
  Author = {Raymond Ng and Thanh Ngan Nguyen and Yuli Huang and Ngee Chia Tai and Wai Yi Leong and Wei Qi Leong and Xianbin Yong and Jian Gang Ngui and Yosephine Susanto and Nicholas Cheng and Hamsawardhini Rengarajan and Peerat Limkonchotiwat and Adithya Venkatadri Hulagadri and Kok Wai Teng and Yeo Yeow Tong and Bryan Siow and Wei Yi Teo and Wayne Lau and Choon Meng Tan and Brandon Ong and Zhi Hao Ong and Jann Railey Montalan and Adwin Chan and Sajeban Antonyrex and Ren Lee and Esther Choa and David Ong Tat-Wee and Bing Jie Darius Liu and William Chandra Tjhi and Erik Cambria and Leslie Teo},
  Year = {2025},
  Eprint = {arXiv:2504.05747},
}
```

## References

```bibtex
@inproceedings{wenzek2020ccnet,
  title = {CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data},
  author = {Wenzek, Guillaume and Lachaux, Marie-Anne and Conneau, Alexis and Chaudhary, Vishrav and Guzm{\'a}n, Francisco and Joulin, Armand and Grave, {\'E}douard},
  booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},
  pages = {4003--4012},
  year = {2020}
}

@inproceedings{sailor1report,
  title = {Sailor: Open Language Models for South-{E}ast {A}sia},
  author = {Dou, Longxu and Liu, Qian and Zeng, Guangtao and Guo, Jia and Zhou, Jiahui and Mao, Xin and Jin, Ziqi and Lu, Wei and Lin, Min},
  booktitle = {Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
  year = {2024}
}

@article{weber2024redpajamaopendatasettraining,
  title = {RedPajama: an Open Dataset for Training Large Language Models},
  author = {Maurice Weber and Daniel Fu and Quentin Anthony and Yonatan Oren and Shane Adams and Anton Alexandrov and Xiaozhong Lyu and Huu Nguyen and Xiaozhe Yao and Virginia Adams and Ben Athiwaratkun and Rahul Chalamala and Kezhen Chen and Max Ryabinin and Tri Dao and Percy Liang and Christopher Ré and Irina Rish and Ce Zhang},
  year = {2024},
  eprint = {2411.12372},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL},
  url = {https://arxiv.org/abs/2411.12372}
}
```

## The Team

Chan Adwin, Cheng Nicholas, Choa Esther, Huang Yuli, Hulagadri Adithya Venkatadri, Lau Wayne, Lee Chwan Ren, Leong Wai Yi, Leong Wei Qi, Limkonchotiwat Peerat, Liu Bing Jie Darius, Montalan Jann Railey, Ng Boon Cheong Raymond, Ngui Jian Gang, Nguyen Thanh Ngan, Ong Brandon, Ong Tat-Wee David, Ong Zhi Hao, Rengarajan Hamsawardhini, Siow Bryan, Susanto Yosephine, Tai Ngee Chia, Tan Choon Meng, Teng Walter, Teo Eng Sipp Leslie, Teo Wei Yi, Tjhi William, Yeo Yeow Tong, Yong Xianbin

**Native speakers**

Our special thanks to the native speakers who helped us build the dataset:

Wai Yan Paing Andy (Burmese), David Macalintal (Tagalog), Ye Phone Myat (Burmese), Thamudaya Win Berry (Burmese), Sri Sowndarya Elango (Tamil), Sneha Ramakrishnan (Tamil), Chanrichnyneath Kim (Khmer), Nurul Ashikin (Malay), Muhammad Syazwan Bin Adzhar (Malay), Kanruethai Masuk (Lao), Mohamed Jasim (Tamil)

## Acknowledgements

AI Singapore is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore.