---
task_categories:
  - text-ranking
pretty_name: HPLT2-JQL-Education
size_categories:
  - n>1T
language:
  - sq
  - bg
  - ca
  - cs
  - da
  - de
  - es
  - et
  - el
  - eu
  - fi
  - fr
  - gl
  - ga
  - hr
  - hu
  - hy
  - is
  - it
  - lv
  - lt
  - mk
  - nl
  - pl
  - pt
  - ro
  - sl
  - sk
  - sr
  - tr
  - sv
  - nb
  - nn
  - uk
configs:
  - config_name: als_Latn
    data_files:
      - split: train
        path: als_Latn/*
  - config_name: bul_Cyrl
    data_files:
      - split: train
        path: bul_Cyrl/*
  - config_name: cat_Latn
    data_files:
      - split: train
        path: cat_Latn/*
  - config_name: ces_Latn
    data_files:
      - split: train
        path: ces_Latn/*
  - config_name: dan_Latn
    data_files:
      - split: train
        path: dan_Latn/*
  - config_name: deu_Latn
    data_files:
      - split: train
        path: deu_Latn/*
  - config_name: est_Latn
    data_files:
      - split: train
        path: est_Latn/*
  - config_name: ell_Grek
    data_files:
      - split: train
        path: ell_Grek/*
  - config_name: eus_Latn
    data_files:
      - split: train
        path: eus_Latn/*
  - config_name: fin_Latn
    data_files:
      - split: train
        path: fin_Latn/*
  - config_name: fra_Latn
    data_files:
      - split: train
        path: fra_Latn/*
  - config_name: gle_Latn
    data_files:
      - split: train
        path: gle_Latn/*
  - config_name: glg_Latn
    data_files:
      - split: train
        path: glg_Latn/*
  - config_name: hrv_Latn
    data_files:
      - split: train
        path: hrv_Latn/*
  - config_name: hun_Latn
    data_files:
      - split: train
        path: hun_Latn/*
  - config_name: hye_Armn
    data_files:
      - split: train
        path: hye_Armn/*
  - config_name: isl_Latn
    data_files:
      - split: train
        path: isl_Latn/*
  - config_name: ita_Latn
    data_files:
      - split: train
        path: ita_Latn/*
  - config_name: lit_Latn
    data_files:
      - split: train
        path: lit_Latn/*
  - config_name: lvs_Latn
    data_files:
      - split: train
        path: lvs_Latn/*
  - config_name: mkd_Cyrl
    data_files:
      - split: train
        path: mkd_Cyrl/*
  - config_name: nld_Latn
    data_files:
      - split: train
        path: nld_Latn/*
  - config_name: nno_Latn
    data_files:
      - split: train
        path: nno_Latn/*
  - config_name: nob_Latn
    data_files:
      - split: train
        path: nob_Latn/*
  - config_name: pol_Latn
    data_files:
      - split: train
        path: pol_Latn/*
  - config_name: por_Latn
    data_files:
      - split: train
        path: por_Latn/*
  - config_name: ron_Latn
    data_files:
      - split: train
        path: ron_Latn/*
  - config_name: slk_Latn
    data_files:
      - split: train
        path: slk_Latn/*
  - config_name: slv_Latn
    data_files:
      - split: train
        path: slv_Latn/*
  - config_name: spa_Latn
    data_files:
      - split: train
        path: spa_Latn/*
  - config_name: srp_Cyrl
    data_files:
      - split: train
        path: srp_Cyrl/*
  - config_name: swe_Latn
    data_files:
      - split: train
        path: swe_Latn/*
  - config_name: tur_Latn
    data_files:
      - split: train
        path: tur_Latn/*
  - config_name: ukr_Cyrl
    data_files:
      - split: train
        path: ukr_Cyrl/*
---

# HPLT2-Edu-scores

## Dataset summary

HPLT2-JQL-Education is a model-annotated language subset of HPLT2 spanning 35 languages. Our model annotations enable a filtering that achieves higher-quality training outcomes without overly aggressive data reduction. The original FW2 heuristic filtering method serves as our baseline, providing reference points for both the volume of retained tokens and downstream model performance. In the Spanish case, for example, applying a 0.6 score threshold retains over 9% more tokens than the FW2 train set while still surpassing its quality.
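Such threshold-based filtering can be sketched as follows. The score field names match this dataset's schema; the sample records and the choice of the Gemma-based score are purely illustrative, and the 0.6 threshold follows the Spanish example above:

```python
# Illustrative sketch: keep only records whose classifier score meets a
# chosen quality threshold. The records below are made-up sample data.

def passes_threshold(record, field="score_Gemma_Snowflake", threshold=0.6):
    """Keep a record if its classifier score is at or above the threshold."""
    return record[field] >= threshold

records = [
    {"document_id": "a", "score_Gemma_Snowflake": 0.83},
    {"document_id": "b", "score_Gemma_Snowflake": 0.41},
    {"document_id": "c", "score_Gemma_Snowflake": 0.60},
]

kept = [r for r in records if passes_threshold(r)]
print([r["document_id"] for r in kept])  # → ['a', 'c']
```

In practice, the same predicate could be passed to a `.filter(...)` call when iterating over the released score files, trading retained token volume against quality by moving the threshold.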

HPLT2-Edu-scores was created based on scores assigned by deep learning classifiers trained to identify educational samples using Snowflake's Arctic-embed-m-v2.0 embeddings.

For all training ablations, we used dense decoder-only models with 2 billion parameters, following the LLaMA architecture. For more details, see our paper https://arxiv.org/abs/2505.22232.

The approach as described in the paper is easy to extend to other languages as well, and we might consider adding new languages to an upcoming version of the present dataset.

We also separately release the computed general-purpose embedding vectors for the full sets of the original HPLT2 dataset, in the respective languages, as they can be useful for applications beyond quality filtering: HPLT2-embeddings.

## Dataset Structure

### Data Fields

Each data entry includes:

- `id`: Row identifier.
- `document_id`: Identifier of the original HPLT2 document.
- `file_path`: Path to the embeddings file from which the scores were computed.
- `source_filename`: Source filename of the original file.
- `score_Gemma_Snowflake`: Quality score obtained by the Gemma-based Snowflake classifier.
- `score_Llama_Snowflake`: Quality score obtained by the Llama-based Snowflake classifier.
- `score_Mistral_Snowflake`: Quality score obtained by the Mistral-based Snowflake classifier.

### Data Instance

```json
{
  "id": "0",
  "file_path": "/leonardo_scratch/large/userexternal/mfromm00/data/raw_data/HPLT2/output/embeddings/als_Latn/als_Latn/000_000_00000.jsonl.h5",
  "document_id": "29d82196d55803ab9c792e45b59919bf_0",
  "source_filename": "als_Latn/als_Latn/000_000_00000.jsonl.h5",
  "score_Gemma_Snowflake": 0.330078125,
  "score_Llama_Snowflake": -0.34765625,
  "score_Mistral_Snowflake": -0.390625
}
```
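As a quick illustration of working with records of this shape, one might parse a JSONL line and compare the three classifier scores. The line below reproduces the instance shown above (with the path field omitted for brevity); aggregating by taking the maximum score is an illustrative choice, not part of the dataset's methodology:

```python
import json

# One record from the dataset, as a JSONL-style line (abridged).
line = (
    '{"id": "0", "document_id": "29d82196d55803ab9c792e45b59919bf_0", '
    '"score_Gemma_Snowflake": 0.330078125, '
    '"score_Llama_Snowflake": -0.34765625, '
    '"score_Mistral_Snowflake": -0.390625}'
)

record = json.loads(line)

# Collect the per-classifier scores and pick the highest one.
scores = {k: v for k, v in record.items() if k.startswith("score_")}
best_classifier = max(scores, key=scores.get)
print(best_classifier, scores[best_classifier])
# → score_Gemma_Snowflake 0.330078125
```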

## Origin of the Dataset

This dataset, derived from HPLT2, includes web content collected from 2012 to 2023. As HPLT2 is sourced from the broader internet, it may contain some personally identifiable information (PII), despite efforts to anonymize email addresses and public IP addresses during processing.

## Considerations for Data Usage

For information on social impact, potential biases, and known limitations, please refer to the HPLT2 documentation.

## Citation information

If you use this dataset in your research or applications, please use the following citation:

```bibtex
@article{ali2025judging,
  title   = {Judging Quality Across Languages: A Multilingual Approach to Pretraining Data Filtering with Language Models},
  author  = {Mehdi Ali and Manuel Brack and Max Lübbering and Elias Wendt and Abbas Goher Khan and Richard Rutmann and Alex Jude and Maurice Kraus and Alexander Arno Weber and Felix Stollenwerk and David Kaczér and Florian Mai and Lucie Flek and Rafet Sifa and Nicolas Flores-Herr and Joachim Köhler and Patrick Schramowski and Michael Fromm and Kristian Kersting},
  journal = {arXiv preprint arXiv:2505.22232},
  year    = {2025}
}
```