---
license: mit
language:
- en
size_categories:
- 10K<n<100K
---
## Overview
This repository contains the pre-processed datasets used in our paper, covering both the Varifocal Generation and Varifocal Reranking stages. We provide them for reproducibility. Please note that the original raw data may not be included here due to anonymization and licensing constraints.
## Motivation
Fact-checking often requires asking clarifying questions to verify a claim's veracity. This project aims to generate fact-checking questions that are not only fluent and clear, but also relevant and informative with respect to the input claims. We propose a varifocal question generation approach to select the most useful questions for downstream fact-checking.
## Scope
We develop a two-stage pipeline:
(1) a question generation model that produces diverse candidate questions conditioned on the claim, and
(2) a reranking model that selects the most suitable questions based on predefined criteria such as informativeness and relevance.
The datasets provided here support these experiments and analyses.
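For illustration, the sketch below shows one way such a two-stage pipeline could be wired together with Hugging Face `transformers` and `sentence-transformers`. The model names and the cosine-similarity reranking criterion are placeholder assumptions for demonstration only, not the checkpoints or criteria used in the paper.

```python
# Illustrative two-stage pipeline: (1) generate candidate questions from a claim,
# (2) rerank them by relevance to the claim. Model names and the similarity-based
# reranking heuristic are placeholders, not the paper's exact setup.
from transformers import pipeline
from sentence_transformers import SentenceTransformer, util

generator = pipeline("text2text-generation", model="google/flan-t5-base")
encoder = SentenceTransformer("all-MiniLM-L6-v2")

claim = "The unemployment rate fell to a 50-year low in 2019."

# Stage 1: generate diverse candidate questions conditioned on the claim.
candidates = [
    out["generated_text"]
    for out in generator(
        f"Generate a question to verify this claim: {claim}",
        num_return_sequences=5,
        do_sample=True,
        top_p=0.95,
        max_new_tokens=48,
    )
]

# Stage 2: rerank the candidates; here cosine similarity to the claim stands in
# for the learned reranker described in the paper.
claim_emb = encoder.encode(claim, convert_to_tensor=True)
cand_embs = encoder.encode(candidates, convert_to_tensor=True)
scores = util.cos_sim(claim_emb, cand_embs)[0]
ranked = sorted(zip(candidates, scores.tolist()), key=lambda x: x[1], reverse=True)

for question, score in ranked:
    print(f"{score:.3f}  {question}")
```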
Splits:
- Varifocal Generation:
- Varifocal Reranking:
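The files can be loaded with the `datasets` library along the lines of the sketch below. The file names and the CSV format are assumptions made for illustration; replace them with the actual files shipped in this repository.

```python
# Minimal loading sketch with the Hugging Face `datasets` library.
# File names and the CSV format are placeholders; adjust them to match the
# files provided for Varifocal Generation and Varifocal Reranking.
from datasets import load_dataset

generation = load_dataset(
    "csv",
    data_files={"train": "generation_train.csv", "test": "generation_test.csv"},
)
reranking = load_dataset(
    "csv",
    data_files={"train": "reranking_train.csv", "test": "reranking_test.csv"},
)

print(generation)
print(reranking)
```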
## Future Vision
This work serves as a foundation for future research into building interactive fact-checking systems that can ask clarifying questions dynamically. We hope our findings inspire the development of more robust, transparent, and proactive AI systems that assist human fact-checkers more effectively.
## Ethical Considerations
While our datasets are designed to support fact-checking and counter misinformation, fact-checking systems may still propagate biases present in the training data or models. Additionally, generated questions could inadvertently reinforce incorrect assumptions if not carefully evaluated. We encourage responsible use of these resources and further research on bias mitigation in question generation and verification.
## Citation
If you use this dataset in your work, please cite the following paper:
```bibtex
@article{ousidhoum2022varifocal,
  title={Varifocal question generation for fact-checking},
  author={Ousidhoum, Nedjma and Yuan, Zhangdie and Vlachos, Andreas},
  journal={arXiv preprint arXiv:2210.12400},
  year={2022}
}
```