---
license: cc0-1.0
task_categories:
- question-answering
language:
- en
size_categories:
- 100M<n<1B
---
# ComplexTempQA Dataset
ComplexTempQA is a large-scale dataset designed for complex temporal question answering (TQA). With over 100 million question-answer pairs, it is one of the most extensive datasets available for TQA. The dataset is generated from Wikipedia and Wikidata and spans a period of 36 years (1987-2023).
Note: A smaller version of the dataset is also available, covering questions from 1987 to 2007.
## Dataset Description
ComplexTempQA categorizes questions into three main types:
- Attribute Questions
- Comparison Questions
- Counting Questions
These categories are further divided based on their relation to events, entities, or time periods.
### Question Types and Counts
| ID | Question Type | Subtype | Count |
|---|---|---|---|
| 1a | Attribute | Event | 83,798 |
| 1b | Attribute | Entity | 84,079 |
| 1c | Attribute | Time | 9,454 |
| 2a | Comparison | Event | 25,353,340 |
| 2b | Comparison | Entity | 74,678,117 |
| 2c | Comparison | Time | 54,022,952 |
| 3a | Counting | Event | 18,325 |
| 3b | Counting | Entity | 10,798 |
| 3c | Counting | Time | 12,732 |
| | Multi-Hop | | 76,933 |
| | Unnamed Event | | 8,707,123 |
| | Total | | 100,228,457 |
## Metadata
- id: A unique identifier for each question.
- question: The text of the question being asked.
- answer: The answer(s) to the question.
- type: The type of question based on the dataset’s taxonomy.
- rating: A numerical rating indicating the difficulty of the question (`0` for easy, `1` for hard).
- timeframe: The start and end dates relevant to the question.
- question_entity: List of Wikidata IDs related to the entities in the question.
- answer_entity: List of Wikidata IDs related to the entities in the answer.
- question_country: List of Wikidata IDs of the countries associated with the questioned entities or events.
- answer_country: List of Wikidata IDs of the countries associated with the answered entities or events.
- is_unnamed: A flag indicating whether the question contains an implicitly described event (`1` for yes, `0` for no).
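
For orientation, the snippet below sketches what a single record might look like as a Python dictionary. The field names follow the list above; the question, answers, Wikidata IDs, and dates are invented for illustration and do not reproduce an actual dataset entry.

```python
# Hypothetical example record; all values are illustrative, not taken from the dataset.
example_record = {
    "id": 123456,                                # unique question identifier
    "question": "Which city hosted the Summer Olympics immediately before Barcelona in 1992?",
    "answer": ["Seoul"],                         # one or more answer strings
    "type": "1a",                                # taxonomy code (here: attribute question about an event)
    "rating": 1,                                 # 0 = easy, 1 = hard
    "timeframe": ["1988-01-01", "1992-12-31"],   # start and end dates relevant to the question
    "question_entity": ["Q1492"],                # Wikidata IDs of entities in the question (Barcelona)
    "answer_entity": ["Q8684"],                  # Wikidata IDs of entities in the answer (Seoul)
    "question_country": ["Q29"],                 # associated countries for the question (Spain)
    "answer_country": ["Q884"],                  # associated countries for the answer (South Korea)
    "is_unnamed": 1,                             # 1 = event is described implicitly, 0 = named explicitly
}
```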
## Dataset Characteristics
### Size
ComplexTempQA comprises over 100 million question-answer pairs, focusing on events, entities, and time periods from 1987 to 2023.
### Complexity
Questions require advanced reasoning skills, including multi-hop question answering, temporal aggregation, and across-time comparisons.
### Taxonomy
The dataset follows a unique taxonomy categorizing questions into attributes, comparisons, and counting types, ensuring comprehensive coverage of temporal queries.
### Evaluation
The dataset has been evaluated for readability, ease of answering before and after web searches, and overall clarity. Human raters have assessed a sample of questions to ensure high quality.
## Usage
### Evaluation and Training
ComplexTempQA can be used for:
- Evaluating the temporal reasoning capabilities of large language models (LLMs)
- Fine-tuning language models for better temporal understanding
- Developing and testing retrieval-augmented generation (RAG) systems
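
As a minimal sketch of the evaluation use case, the snippet below streams the dataset with the Hugging Face `datasets` library and computes exact-match accuracy for a placeholder model. The Hub repository id, split name, field layout, and `my_model` function are assumptions; consult the dataset files in this repository for the actual identifiers.

```python
from datasets import load_dataset

# Assumed Hub id and split name; adjust to the actual repository layout.
dataset = load_dataset("DataScienceUIBK/ComplexTempQA", split="train", streaming=True)

def my_model(question: str) -> str:
    """Placeholder for the LLM being evaluated."""
    return "unknown"

correct, total = 0, 0
for record in dataset.take(1000):  # stream a small sample instead of downloading 100M+ rows
    prediction = my_model(record["question"]).strip().lower()
    answers = record["answer"]
    answers = answers if isinstance(answers, list) else [answers]
    correct += any(prediction == str(a).strip().lower() for a in answers)
    total += 1

print(f"Exact-match accuracy on {total} questions: {correct / total:.3f}")
```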
### Research Applications
The dataset supports research in:
- Temporal question answering
- Information retrieval
- Language understanding
### Adaptation and Continual Learning
ComplexTempQA's temporal metadata facilitates the development of online adaptation and continual training approaches for LLMs, aiding in the exploration of time-based learning and evaluation.
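
A minimal sketch of how the temporal metadata could drive such experiments is to group questions into chronological buckets by the start date of their `timeframe` and process the buckets in order. The Hub id and the exact layout of `timeframe` (assumed here to be a `[start, end]` pair of ISO date strings) are assumptions.

```python
from collections import defaultdict
from datasets import load_dataset

# Assumed Hub id; streaming keeps memory bounded given the dataset's size.
dataset = load_dataset("DataScienceUIBK/ComplexTempQA", split="train", streaming=True)

# Group a sample of questions into 5-year periods by the start date of their timeframe.
buckets = defaultdict(list)
for record in dataset.take(10_000):        # small sample for illustration
    start = record["timeframe"][0]         # assumed [start, end] list of ISO date strings
    period = (int(start[:4]) // 5) * 5     # e.g. 1991 -> 1990
    buckets[period].append(record)

# Visit periods chronologically, e.g. to fine-tune or evaluate a model per period.
for period in sorted(buckets):
    print(f"{period}-{period + 4}: {len(buckets[period])} questions")
```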
## Access
The dataset and code are freely available at https://github.com/DataScienceUIBK/ComplexTempQA.