---
license: cc-by-sa-4.0
task_categories:
  - text-generation
language:
  - en
---

# Marin Markdownified Wikipedia

Marin Markdownified Wikipedia is a large-scale, pre-processed version of the English Wikipedia Enterprise HTML dump. The corpus totals 8.59B tokens and has been converted to clean, section-aware Markdown for language-model training.

|                | Value |
|----------------|-------|
| Tokens         | 8,587,224,558 |
| Primary source | https://dumps.wikimedia.org/other/enterprise_html/runs/20241201/enwiki-NS0-20241201-ENTERPRISE-HTML.json.tar.gz |
| File format    | JSONL |
| License        | CC-BY-SA 4.0 (mirrors upstream Wikipedia licenses) |

## Processing and Cleaning Pipeline

Our conversion pipeline combines several sophisticated techniques to transform raw Wikipedia HTML into high-quality Markdown:

1. HTML Preprocessing: We start with the Enterprise HTML dump in Extended DOLMA format, which provides HTML representations of Wikipedia articles together with their metadata.

2. Structural Cleanup:
   - Mathematical equations are converted from MathML to LaTeX notation with appropriate delimiters
   - Infoboxes are relocated to a dedicated section at the end of each article
   - Reference sections and citations are removed to reduce noise and keep the focus on informative content

3. DOM Simplification: We employ a custom-enhanced version of Resiliparse that preserves semantic HTML structure. Rather than flattening to plain text, we retain important elements such as headings, paragraphs, and lists, while removing scripts, tracking code, and boilerplate.

4. Markdown Conversion: Our custom Markdownify implementation transforms the simplified DOM into clean Markdown with these characteristics:
   - Consistent heading format using the ATX style (`# Heading`)
   - Removal of Wikipedia-specific navigation elements and edit buttons
   - Preservation of tables in GitHub-flavored Markdown
   - Standardized list formatting

   The final output stores each article as a JSON object containing the Markdown text and essential metadata (ID, title, URL, creation date, and optional abstract). A minimal sketch of the structural cleanup and Markdown conversion appears after this list.

5. Quality Filtering: Articles are discarded when they match any of these criteria:
   - More than 50% digits
   - Fewer than 70 words
   - More than 50% special characters

   These filters were applied to remove statistical tables, list-only pages, and navigation stubs; a small sketch of them also follows below.
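As a rough illustration of the structural cleanup and Markdown conversion (steps 2 and 4), the sketch below uses the off-the-shelf `beautifulsoup4` and `markdownify` packages in place of our customized Resiliparse/Markdownify stack. The CSS selectors, the reliance on the MathML `alttext` attribute, and the output field names are illustrative assumptions, not the exact behavior of the production pipeline.

```python
import json
from bs4 import BeautifulSoup
from markdownify import markdownify as md

def article_to_markdown(html: str, meta: dict) -> str:
    """Convert one article's HTML into a Markdown JSON record (illustrative sketch)."""
    soup = BeautifulSoup(html, "html.parser")

    # Step 2: MathML -> LaTeX. Wikipedia math elements typically carry the
    # LaTeX source in their `alttext` attribute (assumption in this sketch).
    for math in soup.find_all("math"):
        latex = (math.get("alttext") or "").strip()
        math.replace_with(f"${latex}$" if latex else "")

    # Step 2: pull infoboxes out of the body so they can be re-appended
    # in a dedicated section at the end of the article.
    infoboxes = [box.extract() for box in soup.select("table.infobox")]

    # Step 2: drop citation markers and reference lists (selectors are illustrative).
    for tag in soup.select("sup.reference, ol.references, div.reflist"):
        tag.decompose()

    # Step 4: convert the simplified DOM to Markdown with ATX-style headings.
    body_md = md(str(soup), heading_style="ATX").strip()
    infobox_md = "\n\n".join(md(str(b), heading_style="ATX").strip() for b in infoboxes)
    text = body_md + (f"\n\n## Infobox\n\n{infobox_md}" if infobox_md else "")

    # One JSON object per article; the field names here are illustrative.
    return json.dumps({
        "id": meta.get("id"),
        "title": meta.get("title"),
        "url": meta.get("url"),
        "created": meta.get("created"),
        "abstract": meta.get("abstract"),
        "text": text,
    })
```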

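The quality filters from step 5 can be approximated in a few lines; the exact word tokenization and definition of "special characters" used in the production pipeline may differ from this sketch.

```python
def keep_article(text: str) -> bool:
    """Return True if the Markdown text passes the step-5 quality filters (sketch)."""
    if not text:
        return False
    n = len(text)
    digit_ratio = sum(c.isdigit() for c in text) / n
    special_ratio = sum(not (c.isalnum() or c.isspace()) for c in text) / n
    word_count = len(text.split())
    return digit_ratio <= 0.5 and word_count >= 70 and special_ratio <= 0.5
```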
## Usage Example

```python
from datasets import load_dataset

# Stream the corpus instead of downloading all shards up front.
ds = load_dataset(
    "marin-community/wikipedia-markdown",
    split="train",
    streaming=True,
)

# Print the Markdown text of the first three articles.
for article in ds.take(3):
    print(article["text"])
```

## Citation

If you use this dataset in your research, please cite both the original Wikipedia contributors and our work:

```bibtex
@misc{markdownified_wiki_2024,
  title        = {Marin Markdownified Wikipedia},
  author       = {The Marin Community},
  year         = {2024},
  url          = {https://huggingface.co/datasets/marin-community/wikipedia-markdown}
}
```

## License

All content inherits Wikipedia's licensing: CC-BY-SA 4.0. Our conversion tools and pipeline are released under Apache 2.0.

## Acknowledgement

We extend our gratitude to: