---
language:
  - en
license: apache-2.0
size_categories:
  - n<1K
task_categories:
  - image-text-to-text
pretty_name: LiveMCPBench
library_name: datasets
tags:
  - llm-agents
  - tool-use
  - benchmark
  - mcp
configs:
  - config_name: default
    data_files:
      - split: test
        path: tasks/tasks.json
---


# LiveMCPBench: Can Agents Navigate an Ocean of MCP Tools?

Benchmarking agents on real-world tasks across a large-scale MCP toolset.

🌐 Website   |   📄 Paper   |   💻 Code   |   🏆 Leaderboard   |   🙏 Citation

## Dataset Description

LiveMCPBench is the first comprehensive benchmark designed to evaluate LLM agents at scale across diverse Model Context Protocol (MCP) servers. It comprises 95 real-world tasks grounded in the MCP ecosystem, challenging agents to use a wide range of tools effectively in complex, tool-rich, and dynamic everyday scenarios. To support scalable and reproducible evaluation, LiveMCPBench is complemented by LiveMCPTool (a diverse collection of 70 MCP servers and 527 tools) and LiveMCPEval (an LLM-as-a-Judge framework that enables automated and adaptive evaluation). Together, these components provide a unified framework for benchmarking LLM agents in realistic MCP environments and lay a solid foundation for scalable, reproducible research on agent capabilities.

## Dataset Structure

The dataset consists of `tasks.json`, which contains the 95 real-world tasks used for benchmarking LLM agents.
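
As a quick sanity check, the task file can be inspected directly with Python's standard library. The snippet below is a minimal sketch: it assumes a local copy of the repository and that `tasks/tasks.json` is a JSON array of task objects; the field names are printed rather than assumed.

```python
import json

# Minimal sketch: inspect the raw task file from a local copy of the dataset.
# Assumes tasks/tasks.json is a JSON array of task objects (not asserted by
# the dataset card itself).
with open("tasks/tasks.json", "r", encoding="utf-8") as f:
    tasks = json.load(f)

print(f"Number of tasks: {len(tasks)}")          # expected: 95
print("Fields of the first task:", sorted(tasks[0].keys()))
```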

## Sample Usage

You can load the dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("ICIP/LiveMCPBench")

# Print the dataset structure
print(dataset)

# Access an example from the test split
print(dataset["test"][0])
```
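
If you prefer to work with the raw JSON file instead of the `datasets` loader, you can fetch it with `huggingface_hub`. The snippet below is a sketch rather than official tooling; it only relies on the `tasks/tasks.json` path listed in the dataset configuration above.

```python
import json

from huggingface_hub import hf_hub_download

# Download the raw task file from the dataset repository (sketch; assumes the
# tasks/tasks.json path shown in the dataset configuration above).
local_path = hf_hub_download(
    repo_id="ICIP/LiveMCPBench",
    filename="tasks/tasks.json",
    repo_type="dataset",
)

with open(local_path, "r", encoding="utf-8") as f:
    tasks = json.load(f)

print(f"Loaded {len(tasks)} tasks")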

## Citation

If you find this project helpful, please cite it using the following BibTeX entry:

```bibtex
@misc{mo2025livemcpbenchagentsnavigateocean,
      title={LiveMCPBench: Can Agents Navigate an Ocean of MCP Tools?},
      author={Guozhao Mo and Wenliang Zhong and Jiawei Chen and Xuanang Chen and Yaojie Lu and Hongyu Lin and Ben He and Xianpei Han and Le Sun},
      year={2025},
      eprint={2508.01780},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2508.01780},
}
```