---
license: mit
language:
- en
size_categories:
- 10K<n<100K
---
# PRobELM Dataset (v1)
PRobELM (Plausibility Ranking Evaluation for Language Models) is a benchmark dataset for evaluating language models on their ability to distinguish more plausible statements from less plausible alternatives using their parametric world knowledge. It is designed to support research in plausibility estimation, world knowledge modeling, and plausibility-driven ranking tasks.
This is the first release of the PRobELM dataset. It includes raw subject-relation-object triples, automatically derived from structured Wikidata edit histories, grouped into sets of plausible and less plausible candidates. The dataset is intended to serve as the foundation for prompt-based evaluations introduced in the PRobELM paper. Prompt formats and task templates are described in detail in the accompanying publication.
## Overview
PRobELM is designed to assess the ability of language models to leverage their parametric knowledge to rank competing hypotheses by plausibility. In contrast to truthfulness benchmarks such as TruthfulQA or commonsense inference datasets like COPA, PRobELM emphasizes plausibility ranking grounded in world knowledge — bridging the gap between factual correctness and likelihood-based reasoning.
This benchmark is particularly relevant for downstream applications such as literature-based discovery, where identifying credible but unverified knowledge is more valuable than confirming known facts.
## Dataset Design and Scope
The dataset is derived from changes in Wikidata over time: cases where new knowledge was added to an entity are isolated, and contrasting, less plausible alternatives are generated for them. Each example consists of a set of subject-relation-object triples, each annotated with a binary label indicating whether it is the more plausible entry given the historical context.
All entries in this release are raw triples with no additional prompting applied. The dataset is grouped such that each set of candidates (sharing the same `id`) forms a minimal contrast set for plausibility evaluation.

The dataset is split into `train`, `dev`, and `test` sets to support model development, hyperparameter tuning, and final evaluation.
## Data Format
Each file (`train.json`, `dev.json`, and `test.json`) is a list of JSON objects with the following fields:
- `id`: A unique identifier shared by all items in a plausibility comparison set.
- `subject`: The entity or topic of the triple.
- `relation`: The property or predicate describing the relationship.
- `object`: The value or target of the relation.
- `label`: A binary indicator (`1` for more plausible, `0` for less plausible).
- `rank`: Ordinal position within the set, with `1` typically indicating the most plausible candidate.
All entries are presented as raw structured knowledge triples. Prompt formats and evaluation templates for different task types (e.g., completion, classification, QA) are provided in the accompanying paper.
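As a minimal sketch of how the raw splits can be consumed, the snippet below loads one file and groups records into comparison sets via the shared `id`. The file path and the printing logic are illustrative assumptions; only the field names come from the schema above.

```python
import json
from collections import defaultdict

# Load one split (path is an assumption; adjust to wherever the files live).
with open("dev.json", "r", encoding="utf-8") as f:
    records = json.load(f)  # a list of JSON objects with the fields listed above

# Group candidates into plausibility comparison sets via the shared `id`.
comparison_sets = defaultdict(list)
for record in records:
    comparison_sets[record["id"]].append(record)

# Inspect one set: candidates ordered by `rank`, with `label` marking
# the more plausible triple.
example_id, candidates = next(iter(comparison_sets.items()))
for cand in sorted(candidates, key=lambda r: r["rank"]):
    print(cand["rank"], cand["label"], cand["subject"], cand["relation"], cand["object"])
```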
## Intended Use
The PRobELM dataset is designed for academic research and model evaluation. It supports a range of use cases including:
- Evaluating language models’ ability to assess plausibility using world knowledge
- Comparing plausibility performance across model scales, training recency, and architectures
- Supporting prompt-based probing, ranking tasks, or discriminative training
This dataset is not intended for operational deployment or real-time decision-making. It is designed for controlled evaluation under academic research settings.
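The prompt formats and metrics used in the PRobELM paper are defined in the publication itself; purely as an illustration of prompt-based probing, the sketch below verbalizes triples with a naive template and ranks candidates by the average token log-likelihood of a small off-the-shelf causal language model. The template, the `gpt2` checkpoint, and the example candidates are assumptions for the sketch, not the benchmark's prescribed setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small checkpoint chosen only to keep the sketch runnable; any causal LM works.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def triple_score(subject: str, relation: str, obj: str) -> float:
    """Average token log-likelihood of a naively verbalized triple (higher = more plausible)."""
    text = f"{subject} {relation} {obj}."  # illustrative template, not the paper's prompt format
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    return -outputs.loss.item()

# Hypothetical candidates for illustration only; real comparison sets come from
# grouping the JSON records by `id` as shown in the Data Format section.
candidates = [
    {"subject": "Ada Lovelace", "relation": "occupation", "object": "mathematician"},
    {"subject": "Ada Lovelace", "relation": "occupation", "object": "astronaut"},
]

ranked = sorted(
    candidates,
    key=lambda r: triple_score(r["subject"], r["relation"], r["object"]),
    reverse=True,  # most plausible (highest score) first
)
print([c["object"] for c in ranked])
```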
## Citation
If you use this dataset in your work, please cite the following paper:
```bibtex
@article{yuan2024probelm,
  title={PRobELM: Plausibility ranking evaluation for language models},
  author={Yuan, Zhangdie and Chamoun, Eric and Aly, Rami and Whitehouse, Chenxi and Vlachos, Andreas},
  journal={arXiv preprint arXiv:2404.03818},
  year={2024}
}
```