---
dataset_info:
  features:
  - name: infobox
    dtype: string
  - name: categories
    sequence: string
  - name: text
    dtype: string
  - name: id
    dtype: int64
  - name: token_count
    dtype: int64
  - name: title
    dtype: string
  - name: url
    dtype: string
  - name: revdate
    dtype: timestamp[s]
  - name: entity
    dtype: string
  splits:
  - name: train
    num_bytes: 30000378779
    num_examples: 4792380
  download_size: 16061898218
  dataset_size: 30000378779
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
language:
- en
---

# Dataset Card for LinguaCustodia/Clean-Wikipedia-English-Articles

Lingua Custodia is delighted to announce the release of the cleanest markdown extract of Wikipedia articles so far, a high-quality resource for training your LLMs!

## Dataset Summary

The Clean-Wikipedia-English-Articles dataset contains the full bodies of English Wikipedia articles, i.e. without appendices such as References, See also, or Bibliography sections.

It has been pointed out [here](https://huggingface.co/datasets/wikimedia/wikipedia/discussions/51) that the Wikimedia Wikipedia dataset was missing parts of sentences, and one of the suggested solutions was to extract articles from the [Wikipedia HTML dumps](https://dumps.wikimedia.org/other/enterprise_html/runs/) using the [mwparserfromhtml library](https://gitlab.wikimedia.org/repos/research/html-dumps/-/tree/main/src/mwparserfromhtml?ref_type=heads), as shown in this [notebook](https://public-paws.wmcloud.org/55703823/HTML-dumps/wikipedia-plaintext-extraction.ipynb). This was a major improvement, but other parts were still missing, such as titles, lists, tables, LaTeX formulas, superscripts and subscripts. Some modules of the mwparserfromhtml library were modified and new code was written to remedy this issue. In addition, the HTML dump was read backwards to remove duplicates, keeping only the most recent revision of each article. Only article bodies containing at least 2 sections and 50 tokens were kept, in order to exclude empty redirection pages and short drafts.

The dataset was built from the `enwiki-NS0-20250220-ENTERPRISE-HTML.json.tar.gz` HTML dump released on 20 February 2025. It contains a single train split and consists of **7B tokens**.

## Dataset Structure

### Data Instances

Here is an example:

```plaintext
{
  'infobox': '',
  'categories': ["Articles with short description", "Short description is different from Wikidata", ...],
  'text': '# Open-source artificial intelligence\n\n**Open-source artificial intelligence** is an AI system...',
  'id': 74399351,
  'token_count': 5278,
  'url': 'https://en.wikipedia.org/wiki/Open-source_artificial_intelligence',
  'title': 'Open-source artificial intelligence',
  'revdate': 2025-02-19T15:38:11,
  'entity': 'Q120785614'
}
```

### Data Fields

Each sample in the dataset includes the following fields:

- **`infobox (str)`**: Markdown-formatted text content of the infobox (empty string if the article has no infobox).
- **`categories (list(str))`**: List of categories linked to the article.
- **`text (str)`**: Markdown-formatted text content of the article without appendices.
- **`id (int)`**: ID of the article.
- **`token_count (int)`**: Number of tokens contained in the text field, computed using the Qwen/Qwen2.5-7B-Instruct tokenizer (see the loading sketch after this list).
- **`title (str)`**: Title of the article.
- **`url (str)`**: URL of the article.
- **`revdate (datetime)`**: Revision date of the article.
- **`entity (str)`**: Wikidata QID linked to the article.
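
The snippet below is a minimal sketch of how the dataset can be loaded and how the `token_count` field can be cross-checked, assuming the standard `datasets` and `transformers` libraries. The exact tokenization options used to compute `token_count` (e.g. whether special tokens were added) are an assumption here, so a recount may differ slightly.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Stream the train split so the full ~16 GB download is not required up front.
ds = load_dataset(
    "LinguaCustodia/Clean-Wikipedia-English-Articles",
    split="train",
    streaming=True,
)

# token_count was computed with the Qwen/Qwen2.5-7B-Instruct tokenizer;
# re-tokenizing the text field should give a comparable figure.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

sample = next(iter(ds))
recount = len(tokenizer(sample["text"], add_special_tokens=False)["input_ids"])
print(sample["title"], sample["token_count"], recount)
```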
## License

Copyright licensing information: https://dumps.wikimedia.org/legal.html

All original textual content is licensed under the GNU Free Documentation License (GFDL) and the Creative Commons Attribution-Share-Alike 3.0 License. Some text may be available only under the Creative Commons license; see their Terms of Use for details. Text written by some authors may be released under additional licenses or into the public domain.

## Citation

If you use this dataset in your research or projects, please cite it appropriately.

```
@misc{Clean-Wikipedia-English-Articles,
  title={Clean-Wikipedia-English-Articles},
  author={Foly, Sabine and Liu, Jingshu and Barthelemy, Jean-Gabriel and Caillaut, Gaëtan and Qader, Raheel and Nakhle, Mariam and Sadoune, Arezki},
  url={https://huggingface.co/datasets/LinguaCustodia/Clean-Wikipedia-English-Articles},
  year={2025}
}
```