---
license: apache-2.0
task_categories:
- text-retrieval
language:
- en
tags:
- information-retrieval
- reranking
- temporal-evaluation
- benchmark
size_categories:
- 1K<n<10K
---

# FutureQueryEval

FutureQueryEval is a temporal information-retrieval benchmark for evaluating rerankers on queries about events from after April 2025, i.e., beyond the knowledge cutoff of current large language models, enabling contamination-free evaluation.

## Example Queries

**🌍 Politics:**
> "What specific actions has Egypt taken to support injured Palestinians from Gaza, as highlighted during the visit of Presidents El-Sisi and Macron to Al-Arish General Hospital?"

**⚽ Sports:**
> "Which teams qualified for the 2025 UEFA European Championship playoffs in June 2025?"

**💻 Technology:**
> "What are the key features of Apple's new Vision Pro 2 announced at WWDC 2025?"

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("abdoelsayed/FutureQueryEval")

# Access different splits
queries = dataset["queries"]
corpus = dataset["corpus"]
qrels = dataset["qrels"]

# Example: Get first query
print(f"Query: {queries[0]['query_text']}")
print(f"Category: {queries[0]['category']}")
```

### Evaluation Example

```python
import pandas as pd

# Load relevance judgments (TREC format: query_id iteration doc_id relevance)
qrels_df = pd.read_csv("qrels.txt", sep=" ", names=["query_id", "iteration", "doc_id", "relevance"])

# Filter for a specific query
query_rels = qrels_df[qrels_df["query_id"] == "FQ001"]
print(f"Relevant documents for query FQ001: {len(query_rels)}")
```

For scoring a complete run with NDCG@10, the metric reported under Benchmark Results, see the sketch at the end of this card.

## Methodology

### Data Collection Process

1. **Source Selection**: Major news outlets, official sites, and sports organizations
2. **Temporal Filtering**: Events after April 2025 only
3. **Query Creation**: Manual generation by domain experts
4. **Novelty Validation**: Queries checked against the GPT-4 knowledge cutoff
5. **Quality Control**: Multi-annotator review with senior oversight

### Annotation Guidelines

- **Highly Relevant (3)**: Document directly answers the query
- **Relevant (2)**: Document partially addresses the query
- **Marginally Relevant (1)**: Document mentions query topics but lacks detail
- **Not Relevant (0)**: Document does not address the query

## Research Applications

This dataset is designed for:

- **Reranker Evaluation**: Testing generalization to novel content
- **Temporal IR Research**: Understanding time-sensitive retrieval challenges
- **Domain Robustness**: Evaluating cross-domain performance
- **Contamination Studies**: Clean evaluation on post-training data

## Benchmark Results

Top-performing methods on FutureQueryEval:

| Method | Type | NDCG@10 | Runtime (s) |
|--------|------|---------|-------------|
| Zephyr-7B | Listwise | **62.65** | 1,240 |
| MonoT5-3B | Pointwise | 60.75 | 486 |
| Flan-T5-XL | Setwise | 56.57 | 892 |

## Dataset Updates

FutureQueryEval will be updated every six months with new queries about recent events to maintain temporal novelty:

- **Version 1.1** (December 2025): +100 queries from July-September 2025
- **Version 1.2** (June 2026): +100 queries from October 2025-March 2026

## Citation

If you use FutureQueryEval in your research, please cite:

```bibtex
@misc{abdallah2025good,
      title={How Good are LLM-based Rerankers? An Empirical Analysis of State-of-the-Art Reranking Models},
      author={Abdelrahman Abdallah and Bhawna Piryani and Jamshid Mozafari and Mohammed Ali and Adam Jatowt},
      year={2025},
      eprint={2508.16757},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## Contact

- **Authors**: Abdelrahman Abdallah, Bhawna Piryani
- **Institution**: University of Innsbruck
- **Paper**: [arXiv:2508.16757](https://arxiv.org/abs/2508.16757)
- **Code**: [GitHub Repository](https://github.com/DataScienceUIBK/llm-reranking-generalization-study)

## License

This dataset is released under the Apache-2.0 License.
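
## Appendix: Computing NDCG@10

The sketch below shows one way the NDCG@10 numbers in the benchmark table could be reproduced with [`pytrec_eval`](https://github.com/cvangysel/pytrec_eval). It assumes your reranker output is saved as `run.txt` in the standard six-column TREC run format (`query_id Q0 doc_id rank score tag`); the run file name is a placeholder for your own output, not an artifact shipped with the dataset.

```python
from collections import defaultdict

import pytrec_eval  # pip install pytrec_eval

# Parse the qrels file: query_id iteration doc_id relevance (graded 0-3).
qrels = defaultdict(dict)
with open("qrels.txt") as f:
    for line in f:
        query_id, _, doc_id, relevance = line.split()
        qrels[query_id][doc_id] = int(relevance)

# Parse a TREC-format run file (hypothetical "run.txt") into
# {query_id: {doc_id: score}}.
run = defaultdict(dict)
with open("run.txt") as f:
    for line in f:
        query_id, _, doc_id, _, score, _ = line.split()
        run[query_id][doc_id] = float(score)

# Score every query with NDCG cut off at rank 10, then macro-average.
evaluator = pytrec_eval.RelevanceEvaluator(dict(qrels), {"ndcg_cut.10"})
per_query = evaluator.evaluate(dict(run))
mean_ndcg10 = sum(m["ndcg_cut_10"] for m in per_query.values()) / len(per_query)
print(f"NDCG@10: {mean_ndcg10:.4f}")
```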