Dataset viewer preview (auto-converted to Parquet). Columns: category (string, 8 classes), key (int64), question (string), ground_truths (string), misc (string). Each preview row pairs a task prompt (question) with its ground-truth JSON (ground_truths) and source/evaluation metadata (misc).

Dataset Card for LiveDRBench: Deep Research as Claim Discovery

arXiv Paper | Hugging Face Dataset | Evaluation Code

We propose a formal characterization of the deep research (DR) problem and introduce a new benchmark, LiveDRBench, to evaluate the performance of DR systems. To enable objective evaluation, we define DR using an intermediate output representation that encodes key claims uncovered during search—separating the reasoning challenge from surface-level report generation.

Dataset Details

The benchmark consists of 100 challenging DR tasks over scientific topics (e.g., dataset discovery, materials discovery, novelty search, prior art discovery) and public-interest events (e.g., the Oscars). The data was collected between May and June 2025. We plan to keep the benchmark live and release periodic updates with new tasks.

Each task consists of (a) a prompt with a short description of the task and the expected output format; and (b) a ground-truth JSON containing the claims and references that should be uncovered. We also include an evaluation script that scores DR systems using information-retrieval metrics, namely precision, recall, and F1.
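To make the metrics concrete, the snippet below is a minimal sketch of set-based precision, recall, and F1 over claims under a simplifying exact-match assumption; the released evaluation script instead uses an LLM judge (by default, gpt-4o) to decide whether a predicted claim matches a ground-truth claim.

def claim_prf1(predicted: set[str], ground_truth: set[str]) -> tuple[float, float, float]:
    # True positives: predicted claims that exactly match a ground-truth claim.
    tp = len(predicted & ground_truth)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1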

The benchmark contains eight categories: SciFacts-Geo, SciFacts-Materials, NovelDatasets identification, NovelDatasets identification and extraction, NovelDatasets peer retrieval, PriorArt search, Entities, and Flight incidents. The evaluation code for the benchmark is available on GitHub.

A detailed discussion of LiveDRBench, including how it was developed and tested, can be found in our arXiv paper.

Usage

To use LiveDRBench's questions, you can load the benchmark using the Hugging Face datasets library:

from datasets import load_dataset

livedrbench = load_dataset("microsoft/LiveDRBench", "v1-full")['test']
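Continuing from the snippet above, each row exposes the columns shown in the dataset preview (category, key, question, ground_truths, misc); the following sketch, which assumes those column names, inspects a single task.

example = livedrbench[0]
print(example["category"])   # task category, e.g., "prior-art"
print(example["key"])        # unique identifier used to match predictions to tasks
print(example["question"])   # the DR prompt, including the expected output format
# example["ground_truths"] is stored encrypted to avoid accidental test-set leakage
# (see Out-of-scope Uses below).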

To evaluate predictions on LiveDRBench, provide a predictions file with the following JSON schema:

[
  {
    "key": str,                             // Unique identifier from livedrbench.csv
    "preds": List[List[dict | str] | dict]  // Predictions in the format specified by each question in livedrbench.csv
  },
  ...
]
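As a concrete illustration, the sketch below assembles such a predictions file; answer_question is a hypothetical placeholder for your own DR system, and keys are cast to strings to match the schema above.

import json
from datasets import load_dataset

livedrbench = load_dataset("microsoft/LiveDRBench", "v1-full")["test"]

def answer_question(question: str):
    # Hypothetical placeholder: call your DR system here and return its answer
    # in the JSON format requested by the question text itself.
    raise NotImplementedError

preds = [
    {"key": str(row["key"]), "preds": answer_question(row["question"])}
    for row in livedrbench
]

with open("predictions.json", "w") as f:
    json.dump(preds, f, indent=2)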

Then, run the evaluation script in the GitHub repository. This script will compute precision, recall, and F1 scores for each benchmark category.

python src/evaluation.py \
  --openai_api_key YOUR_API_KEY \
  --preds_file path/to/your/predictions.json \
  [--openai_model_name gpt-4o] \
  [--num_threads 8] \
  [--debug]
  • --openai_api_key (required): Your OpenAI API key.
  • --preds_file (required): Path to the predictions JSON file.
  • --openai_model_name (optional): Model to use as judge (default: gpt-4o).
  • --num_threads (optional): Number of parallel threads (default: 8).
  • --debug (optional): Enable debug mode, without multithreading.

Intended Uses

The LiveDRBench benchmark is intended to be used together with the GitHub repository. The code and the benchmark are being shared with the research community to facilitate reproduction of our results and foster further research in this area. LiveDRBench is intended to be used by domain experts who are independently capable of evaluating the quality of outputs before acting on them.

Out-of-scope Uses

  • LiveDRBench is not well suited for training new Deep Research models; it provides only a test set. To avoid accidental test-set leakage, we encrypt the answers in the benchmark, following the procedure used for the BrowseComp benchmark's release.

  • The LiveDRBench dataset is not representative of all kinds of Deep Research queries, especially those that require assessing the writing quality of long reports.

  • We do not recommend using the LiveDRBench repo or dataset in commercial or real-world applications without further testing and development. They are being released for research purposes.

  • LiveDRBench should not be used in highly regulated domains where inaccurate outputs could suggest actions that lead to injury or negatively impact an individual's legal, financial, or life opportunities.

Data Creation: Problem Inversion

Creating LiveDRBench involves a problem-inversion process that makes it easy to add new instances, given a set of existing reasoning problems. The first step is to find a long-context or document reasoning problem that includes a question based on the document and its ground-truth answer. In the second step, this problem is inverted to create a new question asking for an event or entity consistent with the properties mentioned in the answer. In the third step, the question is refined (e.g., more properties are added) so that it admits a unique answer. Finally, the ground-truth set of reference documents is updated in case there are additional documents that provide the same answer.

For example, existing data from the Curie benchmark consists of scientific papers and questions that could be answered based on each paper. The data was transformed to create questions that must be answered without access to the paper, and thus require non-trivial search and reasoning. The final ground-truth answers for each question were verified by MSR researchers.

While we aim to cover a broad set of scientific fields and world events, the dataset primarily covers the fields of materials science, geospatial analysis, and computer science, along with world events including flight incidents, the Oscars, and Olympiads. We acknowledge that many scientific fields and geographic areas may not be well covered.

Note: LiveDRBench does not contain links to external data sources. LiveDRBench includes data from an existing scientific dataset, Curie. All queries are answerable using publicly available information.

Best Practices

Best performance is achieved by connecting an API key directly to the codebase. LiveDRBench should not be the only measure used to understand the performance of a DR model; additional methods specific to the model's use case should also be used to determine its overall performance.

We strongly encourage users to use LLMs that support robust Responsible AI mitigations, such as Azure OpenAI (AOAI) services. Such services continually update their safety and RAI mitigations with the latest industry standards for responsible use. For more information, refer to AOAI's best practices for employing foundation models in scripts and applications.

Users are reminded to be mindful of data privacy concerns and are encouraged to review the privacy policies associated with any models and data storage solutions interfacing with LiveDRBench.

It is the user’s responsibility to ensure that the use of LiveDRBench repo and dataset complies with relevant data protection regulations and organizational guidelines.

License

Code in the GitHub repository is licensed under the MIT License. The LiveDRBench dataset is released under the CDLA v2 license.

Contact

If you have suggestions or questions, please raise an issue on Github or contact us at [email protected].

Citing LiveDRBench

@article{livedrbench2025,
  title={Characterizing Deep Research: A Benchmark and Formal Definition},
  author={Java, Abhinav and Khandelwal, Ashmit and Midigeshi, Sukruta and Halfaker, Aaron and Deshpande, Amit and Goyal, Navin and Gupta, Ankur and Natarajan, Nagarajan and Sharma, Amit},
  journal={arXiv preprint arXiv:2508.04183},
  year={2025}
}