---
license: cc-by-nc-nd-4.0
task_categories:
  - question-answering
tags:
  - reasoning
  - linguistics
  - benchmark
pretty_name: L2
size_categories:
  - 1K<n<10K
source_datasets:
  - https://huggingface.co/datasets/ambean/lingOly
configs:
  - config_name: default
    data_files:
      - split: test
        path: test_small.zip
extra_gated_prompt: >-
  ### LingOly-TOO LICENSE AGREEMENT

  The LingOly-TOO dataset is distributed under a CC-BY-NC-ND 4.0 license.

  All questions in the LingOly-TOO dataset have been used with the permission of
  the original authors. The original authors and the United Kingdom Linguistics
  Olympiad may retain rights to control the use, and users of this dataset will
  assume liability if they use the dataset beyond the terms of use as indicated
  by the benchmark.

  The authors do not take responsibility for any licenses that change with time.

  In addition to this license, we ask that uses of the dataset are in line with
  the Acceptable Use policy described below.

  ### Acceptable Use Policy

  This dataset is exclusively intended as a benchmark for evaluating language 
  models subject to the terms of the license. For the integrity of the
  benchmark, users should not:
      * Re-distribute the questions or answers of the benchmark in formats 
      (such as plain text) which leak the benchmark to web-scraping.
      * Train language models directly using the content of this benchmark.
extra_gated_fields:
  By clicking Submit below I accept the terms of the license and Acceptable Use policy: checkbox
extra_gated_button_content: Submit
---

# LingOly-TOO (L2)


## Links

## Summary

LingOly-TOO (L2) is a challenging linguistics reasoning benchmark designed to counteract answering without reasoning (e.g. by guessing or memorising answers).

## Dataset format

The LingOly-TOO benchmark was created by generating up to 6 obfuscations per problem for 82 problems sourced from the original LingOly benchmark. The dataset contains over 1,200 question-answer pairs, and some answers consist of multiple parts. Each example has the following fields:

```python
{'question_n':            # The question number within the problem
 'prompt':                # The main text of the question, including the preamble, context and previous questions
 'completion':            # The correct answer
 'question':              # The question text only (without the rest of the prompt)
 'context':               # Context containing important information; prepend it to the prompt so the question is solvable
 'obfuscated':            # Whether this example was obfuscated
 'overall_question_n':    # The problem number
 'obfuscated_question_n': # Concatenation of the problem number and the obfuscation number
}
```
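
A minimal usage sketch, following the field descriptions above: load the test split and prepend the context to the prompt before querying a model. The repository id `jkhouja/LingOly-TOO` is an assumption and may need to be replaced with the actual path of this dataset on the Hugging Face Hub.

```python
# Minimal sketch: load the test split and assemble a model input.
# The repository id "jkhouja/LingOly-TOO" is an assumption; replace it with
# the actual dataset path on the Hugging Face Hub if it differs.
from datasets import load_dataset

dataset = load_dataset("jkhouja/LingOly-TOO", split="test")

example = dataset[0]

# Prepend the context to the prompt so the question is solvable on its own.
model_input = example["context"] + "\n\n" + example["prompt"]

print(model_input)
print("Expected answer:", example["completion"])
print("Obfuscated:", example["obfuscated"])
```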

## Citation

```bibtex
@article{khouja2025lingolytoodisentanglingmemorisationreasoning,
      title={LINGOLY-TOO: Disentangling Memorisation from Reasoning with Linguistic Templatisation and Orthographic Obfuscation},
      author={Jude Khouja and Karolina Korgul and Simi Hellsten and Lingyi Yang and Vlad Neacsu and Harry Mayne and Ryan Kearns and Andrew Bean and Adam Mahdi},
      year={2025},
      eprint={2503.02972},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2503.02972},
}
```