---
license: odc-by
task_categories:
- text-generation
language:
- id
- vi
- th
- ta
- tl
- ms
- my
- km
- lo
tags:
- language-modeling
size_categories:
- 100B<n<1T
---

# SEA-PILE v2

SEA-PILE v2 is a large, multilingual language modelling dataset of 120 billion tokens, sourced from a diverse array of web content.

**Languages supported:** Vietnamese, Bahasa Indonesia, Tamil, Malay, Thai, Tagalog, Khmer, Lao, Burmese

## Summary Statistics

The total number of tokens in the dataset was calculated using the Gemma 3 tokenizer.

| **Language** | **ISO 639-1 Code** | **Total Number of Tokens (Billions)** | **Percentage** |
|:--|:--:|:--:|:--:|
| Vietnamese | vi | 51.4 | 42.13% |
| Bahasa Indonesia | id | 41.9 | 34.34% |
| Tamil | ta | 9.3 | 7.62% |
| Malay | ms | 9.3 | 7.62% |
| Thai | th | 6.5 | 5.33% |
| Tagalog | tl | 2.2 | 1.80% |
| Khmer | km | 0.6 | 0.49% |
| Lao | lo | 0.6 | 0.49% |
| Burmese | my | 0.2 | 0.16% |
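
These figures can in principle be re-derived by tokenizing each configuration. The sketch below is illustrative only: the checkpoint name `google/gemma-3-4b-it` is our assumption (the card says only that the Gemma 3 tokenizer was used), and streaming is used to avoid a full download.
```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Assumed checkpoint; the card does not name a specific Gemma 3 model.
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-4b-it")

# Stream the Lao config (one of the smallest) rather than downloading it in full.
lo = load_dataset("aisingapore/sea-pile-v2", "lo", split="train", streaming=True)

total_tokens = sum(len(tokenizer(doc["text"])["input_ids"]) for doc in lo)
print(f"lo: {total_tokens / 1e9:.2f}B tokens")
```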

> Please note that we are currently releasing only a portion of the dataset, with plans for future expansions. These expansions will primarily involve adding more tokens for our Southeast Asian languages and incorporating additional languages, including Javanese and Sundanese.

## Data Pipeline

This dataset was created by extracting text from 24 CommonCrawl snapshots, ranging from CC-MAIN-2020-45 to CC-MAIN-2024-18. To ensure uniqueness, we deduplicated documents within each snapshot, following the strategy outlined by CCNet. We also applied heuristic quality filters and perplexity scoring, drawing on methodologies from Sailor and RedPajama-V2. These filters were developed in collaboration with native speakers to ensure cultural nuances are accurately captured.
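
The exact filtering code is not published on this card, so the sketch below only illustrates the general shape of CCNet-style per-snapshot paragraph deduplication combined with a simple heuristic filter; the normalization, hash width, and length threshold are hypothetical choices, not SEA-PILE v2's actual parameters.
```python
import hashlib

def normalize(paragraph: str) -> str:
    # CCNet-style light normalization before hashing: lowercase and drop
    # digits so near-identical boilerplate paragraphs collide.
    return "".join(ch for ch in paragraph.lower() if not ch.isdigit()).strip()

def dedup_snapshot(docs):
    # `docs` is an iterable of dicts with a "text" field. Paragraphs already
    # seen in this snapshot are dropped, then short documents are filtered out.
    seen = set()
    for doc in docs:
        kept = []
        for para in doc["text"].split("\n"):
            digest = hashlib.sha1(normalize(para).encode("utf-8")).digest()[:8]
            if digest not in seen:
                seen.add(digest)
                kept.append(para)
        text = "\n".join(kept)
        if len(text) >= 200:  # hypothetical minimum-length quality filter
            yield {**doc, "text": text}
```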

## Download

To load this data using Hugging Face's `datasets` library, you can use the following code:
```python
from datasets import load_dataset

seapilev2 = load_dataset("aisingapore/sea-pile-v2", "<ISO-639-1 code>")
```
For example, to download the Vietnamese data, specify the ISO 639-1 code (`vi`) as shown below:
```python
from datasets import load_dataset

seapilev2_vi = load_dataset("aisingapore/sea-pile-v2", "vi")
```
If you wish to download all available language configurations at once, you can use the following approach:
```python
from datasets import load_dataset

languages = ['vi', 'id', 'ta', 'ms', 'th', 'tl', 'km', 'lo', 'my']

seapilev2 = {}
for language in languages:
    seapilev2[language] = load_dataset('aisingapore/sea-pile-v2', language)
```
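
For the larger configurations, which run to tens or hundreds of gigabytes, it may be more practical to stream records than to download full splits. This is a standard `datasets` feature (`streaming=True`), not specific to this dataset; a minimal sketch:
```python
from datasets import load_dataset

# Stream the Vietnamese config: records are fetched lazily as you iterate,
# so nothing is downloaded up front.
seapilev2_vi = load_dataset("aisingapore/sea-pile-v2", "vi", streaming=True)

# Peek at the first few records; each carries `text`, `url`, `timestamp`,
# `dump`, and `warc-record-id` fields.
for doc in seapilev2_vi["train"].take(5):
    print(doc["url"], doc["text"][:80])
```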

## Limitations

Despite our best efforts to filter out undesirable content (e.g. NSFW, toxic, or biased material) and personally identifiable information (PII), some documents containing harmful, toxic, or private content may still pass through our pipeline. We are committed to continuously improving our filtering processes to minimize these occurrences.

## License

This dataset is made available under the [ODC-By 1.0](https://opendatacommons.org/licenses/by/1-0/) license; users should also abide by the [CommonCrawl Terms of Use](https://commoncrawl.org/terms-of-use/).

## BibTeX

If you use our dataset, please cite it as follows:

```bibtex
@misc{2504.05747,
  title = {SEA-LION: Southeast Asian Languages in One Network},
  author = {Raymond Ng and Thanh Ngan Nguyen and Yuli Huang and Ngee Chia Tai and Wai Yi Leong and Wei Qi Leong and Xianbin Yong and Jian Gang Ngui and Yosephine Susanto and Nicholas Cheng and Hamsawardhini Rengarajan and Peerat Limkonchotiwat and Adithya Venkatadri Hulagadri and Kok Wai Teng and Yeo Yeow Tong and Bryan Siow and Wei Yi Teo and Wayne Lau and Choon Meng Tan and Brandon Ong and Zhi Hao Ong and Jann Railey Montalan and Adwin Chan and Sajeban Antonyrex and Ren Lee and Esther Choa and David Ong Tat-Wee and Bing Jie Darius Liu and William Chandra Tjhi and Erik Cambria and Leslie Teo},
  year = {2025},
  eprint = {arXiv:2504.05747},
}
```

## References

```bibtex
@inproceedings{wenzek2020ccnet,
  title = {CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data},
  author = {Wenzek, Guillaume and Lachaux, Marie-Anne and Conneau, Alexis and Chaudhary, Vishrav and Guzm{\'a}n, Francisco and Joulin, Armand and Grave, {\'E}douard},
  booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},
  pages = {4003--4012},
  year = {2020}
}

@inproceedings{sailor1report,
  title = {Sailor: Open Language Models for South-{E}ast {A}sia},
  author = {Dou, Longxu and Liu, Qian and Zeng, Guangtao and Guo, Jia and Zhou, Jiahui and Mao, Xin and Jin, Ziqi and Lu, Wei and Lin, Min},
  booktitle = {Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
  year = {2024}
}

@article{weber2024redpajamaopendatasettraining,
  title = {RedPajama: an Open Dataset for Training Large Language Models},
  author = {Maurice Weber and Daniel Fu and Quentin Anthony and Yonatan Oren and Shane Adams and Anton Alexandrov and Xiaozhong Lyu and Huu Nguyen and Xiaozhe Yao and Virginia Adams and Ben Athiwaratkun and Rahul Chalamala and Kezhen Chen and Max Ryabinin and Tri Dao and Percy Liang and Christopher Ré and Irina Rish and Ce Zhang},
  year = {2024},
  eprint = {2411.12372},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL},
  url = {https://arxiv.org/abs/2411.12372}
}
```

## The Team

Chan Adwin, Cheng Nicholas, Choa Esther, Huang Yuli, Hulagadri Adithya Venkatadri, Lau Wayne, Lee Chwan Ren, Leong Wai Yi, Leong Wei Qi, Limkonchotiwat Peerat, Liu Bing Jie Darius, Montalan Jann Railey, Ng Boon Cheong Raymond, Ngui Jian Gang, Nguyen Thanh Ngan, Ong Brandon, Ong Tat-Wee David, Ong Zhi Hao, Rengarajan Hamsawardhini, Siow Bryan, Susanto Yosephine, Tai Ngee Chia, Tan Choon Meng, Teng Walter, Teo Eng Sipp Leslie, Teo Wei Yi, Tjhi William, Yeo Yeow Tong, Yong Xianbin

**Native speakers**

Our special thanks to the native speakers who helped us build the dataset:

Wai Yan Paing Andy (Burmese), David Macalintal (Tagalog), Ye Phone Myat (Burmese), Thamudaya Win Berry (Burmese), Sri Sowndarya Elango (Tamil), Sneha Ramakrishnan (Tamil), Chanrichnyneath Kim (Khmer), Nurul Ashikin (Malay), Muhammad Syazwan Bin Adzhar (Malay), Kanruethai Masuk (Lao), Mohamed Jasim (Tamil)

## Acknowledgements

AI Singapore is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore.