---
license: apache-2.0
task_categories:
- text-retrieval
language:
- en
tags:
- information-retrieval
- reranking
- temporal-evaluation
- benchmark
size_categories:
- 1K<n<10K
pretty_name: Reranking, Retriever
---
# FutureQueryEval Dataset (EMNLP 2025) 🔍
## Dataset Description
**FutureQueryEval** is a novel Information Retrieval (IR) benchmark designed to evaluate reranker performance on temporal novelty. It comprises **148 queries** with **2,938 query-document pairs** across **7 topical categories**, specifically created to test how well reranking models generalize to truly novel queries that were unseen during LLM pretraining.
### Key Features
- **Zero Contamination**: All queries refer to events after April 2025
- **Human Annotated**: Created by 4 expert annotators with quality control
- **Diverse Domains**: Technology, Sports, Politics, Science, Health, Business, Entertainment
- **Real Events**: Based on actual news and developments, not synthetic data
- **Temporal Novelty**: First benchmark designed to test reranker generalization on post-training events
## Dataset Statistics
| Metric | Value |
|--------|-------|
| Total Queries | 148 |
| Total Documents | 2,787 |
| Query-Document Pairs | 2,938 |
| Avg. Relevant Docs per Query | 6.54 |
| Languages | English |
| License | Apache-2.0 |
## Category Distribution
| Category | Queries | Percentage |
|----------|---------|------------|
| **Technology** | 37 | 25.0% |
| **Sports** | 31 | 20.9% |
| **Science & Environment** | 20 | 13.5% |
| **Business & Finance** | 19 | 12.8% |
| **Health & Medicine** | 16 | 10.8% |
| **World News & Politics** | 14 | 9.5% |
| **Entertainment & Culture** | 11 | 7.4% |
## Dataset Structure
The dataset consists of three main files (a minimal `pandas` loading sketch follows the field descriptions below):
### Files
- **`queries.tsv`**: Contains the query information
- Columns: `query_id`, `query_text`, `category`
- **`corpus.tsv`**: Contains the document collection
- Columns: `doc_id`, `title`, `text`, `url`
- **`qrels.txt`**: Contains relevance judgments
- Format: `query_id 0 doc_id relevance_score`
### Data Fields
#### Queries
- `query_id` (string): Unique identifier for each query
- `query_text` (string): The natural language query
- `category` (string): Topical category (Technology, Sports, etc.)
#### Corpus
- `doc_id` (string): Unique identifier for each document
- `title` (string): Document title
- `text` (string): Full document content
- `url` (string): Source URL of the document
#### Relevance Judgments (qrels)
- `query_id` (string): Query identifier
- `iteration` (int): Always 0 (standard TREC format)
- `doc_id` (string): Document identifier
- `relevance` (int): Relevance score (0-3, where 3 is highly relevant)
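The field descriptions above translate directly into a few `pandas` calls. The snippet below is a minimal sketch: it assumes the files sit in the working directory and that `queries.tsv` and `corpus.tsv` are tab-separated without a header row (adjust `names=`/`header=` if your copy differs).

```python
import pandas as pd

# queries.tsv: query_id, query_text, category (tab-separated, header assumed absent)
queries = pd.read_csv("queries.tsv", sep="\t",
                      names=["query_id", "query_text", "category"])

# corpus.tsv: doc_id, title, text, url
corpus = pd.read_csv("corpus.tsv", sep="\t",
                     names=["doc_id", "title", "text", "url"])

# qrels.txt: TREC-style, space-separated
qrels = pd.read_csv("qrels.txt", sep=" ",
                    names=["query_id", "iteration", "doc_id", "relevance"])

print(queries.head())
print(f"{len(corpus)} documents, {len(qrels)} judged pairs")
```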
## Example Queries
**🌍 World News & Politics:**
> "What specific actions has Egypt taken to support injured Palestinians from Gaza, as highlighted during the visit of Presidents El-Sisi and Macron to Al-Arish General Hospital?"
**⚽ Sports:**
> "Which teams qualified for the 2025 UEFA European Championship playoffs in June 2025?"
**💻 Technology:**
> "What are the key features of Apple's new Vision Pro 2 announced at WWDC 2025?"
## Usage
### Loading the Dataset
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("abdoelsayed/FutureQueryEval")
# Access different splits
queries = dataset["queries"]
corpus = dataset["corpus"]
qrels = dataset["qrels"]
# Example: Get first query
print(f"Query: {queries[0]['query_text']}")
print(f"Category: {queries[0]['category']}")
```
### Evaluation Example
```python
import pandas as pd
# Load relevance judgments
qrels_df = pd.read_csv("qrels.txt", sep=" ",
names=["query_id", "iteration", "doc_id", "relevance"])
# Filter for a specific query
query_rels = qrels_df[qrels_df["query_id"] == "FQ001"]
print(f"Relevant documents for query FQ001: {len(query_rels)}")
```
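The benchmark reports NDCG@10 (see the results table below), which can be computed from these graded judgments with the `pytrec_eval` package. The sketch below builds on the `qrels_df` loaded above; the `run` dictionary is a placeholder standing in for your reranker's scores.

```python
import pytrec_eval

# {query_id: {doc_id: graded relevance}} built from the qrels DataFrame above
qrels_dict = {
    str(qid): {str(row.doc_id): int(row.relevance) for row in group.itertuples()}
    for qid, group in qrels_df.groupby("query_id")
}

# Placeholder run ({query_id: {doc_id: score}}); replace with your reranker's output
run = {qid: {did: 1.0 for did in docs} for qid, docs in qrels_dict.items()}

evaluator = pytrec_eval.RelevanceEvaluator(qrels_dict, {"ndcg_cut"})
results = evaluator.evaluate(run)

# Average NDCG@10 over all queries
mean_ndcg10 = sum(r["ndcg_cut_10"] for r in results.values()) / len(results)
print(f"NDCG@10: {mean_ndcg10:.4f}")
```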
## Methodology
### Data Collection Process
1. **Source Selection**: Major news outlets, official sites, sports organizations
2. **Temporal Filtering**: Events after April 2025 only
3. **Query Creation**: Manual generation by domain experts
4. **Novelty Validation**: Tested against GPT-4 knowledge cutoff
5. **Quality Control**: Multi-annotator review with senior oversight
### Annotation Guidelines
- **Highly Relevant (3)**: Document directly answers the query
- **Relevant (2)**: Document partially addresses the query
- **Marginally Relevant (1)**: Document mentions query topics but lacks detail
- **Not Relevant (0)**: Document does not address the query
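NDCG uses these graded labels directly; binary metrics such as MAP or Recall usually require collapsing them to 0/1 first. The threshold in the sketch below (relevance >= 2) is an illustrative assumption, not one fixed by this card.

```python
# Collapse graded labels (0-3) to binary judgments.
# The >= 2 cutoff is an assumption for illustration; choose whatever your metric requires.
binary_qrels = {
    qid: {did: int(rel >= 2) for did, rel in docs.items()}
    for qid, docs in qrels_dict.items()  # qrels_dict from the evaluation sketch above
}
```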
## Research Applications
This dataset is designed for:
- **Reranker Evaluation**: Testing generalization to novel content
- **Temporal IR Research**: Understanding time-sensitive retrieval challenges
- **Domain Robustness**: Evaluating cross-domain performance
- **Contamination Studies**: Clean evaluation on post-training data
## Benchmark Results
Top-performing methods on FutureQueryEval (a sketch of producing a comparable run with an off-the-shelf reranker follows the table):
| Method | Type | NDCG@10 | Runtime (s) |
|--------|------|---------|-------------|
| Zephyr-7B | Listwise | **62.65** | 1,240 |
| MonoT5-3B | Pointwise | **60.75** | 486 |
| Flan-T5-XL | Setwise | **56.57** | 892 |
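The listed systems are not reproduced here, but the sketch below shows the general shape of producing a run for this benchmark with an off-the-shelf pointwise cross-encoder from `sentence-transformers`. The model name is only an example, and for simplicity the candidates are the judged documents rather than the output of a first-stage retriever; the resulting `run` can be fed to the `pytrec_eval` sketch above.

```python
from sentence_transformers import CrossEncoder

# Example model; not one of the systems benchmarked above
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

# Simple text lookups built from the DataFrames loaded earlier
query_text = dict(zip(queries["query_id"].astype(str), queries["query_text"]))
doc_text = dict(zip(corpus["doc_id"].astype(str), corpus["text"]))

run = {}
for qid, docs in qrels_dict.items():  # candidates = judged documents, for simplicity
    pairs = [(query_text[qid], doc_text[did]) for did in docs]
    scores = model.predict(pairs)
    run[qid] = {did: float(s) for did, s in zip(docs, scores)}
```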
## Dataset Updates
FutureQueryEval will be updated every 6 months with new queries about recent events to maintain temporal novelty:
- **Version 1.1** (December 2025): +100 queries from July-September 2025
- **Version 1.2** (June 2026): +100 queries from October 2025-March 2026
## Citation
If you use FutureQueryEval in your research, please cite:
```bibtex
@misc{abdallah2025good,
  title={How Good are LLM-based Rerankers? An Empirical Analysis of State-of-the-Art Reranking Models},
  author={Abdelrahman Abdallah and Bhawna Piryani and Jamshid Mozafari and Mohammed Ali and Adam Jatowt},
  year={2025},
  eprint={2508.16757},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
## Contact
- **Authors**: Abdelrahman Abdallah, Bhawna Piryani
- **Institution**: University of Innsbruck
- **Paper**: [arXiv:2508.16757](https://arxiv.org/abs/2508.16757)
- **Code**: [GitHub Repository](https://github.com/DataScienceUIBK/llm-reranking-generalization-study)
## License
This dataset is released under the Apache-2.0 License.