---
language:
- en
license: apache-2.0
size_categories:
- n<1K
task_categories:
- image-text-to-text
pretty_name: LiveMCPBench
library_name: datasets
tags:
- llm-agents
- tool-use
- benchmark
- mcp
configs:
- config_name: default
  data_files:
  - split: test
    path: tasks/tasks.json
---

<a id="readme-top"></a>

<!-- PROJECT -->
<br />
<div align="center">
  <h3 align="center">LiveMCPBench: Can Agents Navigate an Ocean of MCP Tools?</h3>

  <p align="center">
    Benchmarking LLM agents on real-world tasks within a large-scale MCP toolset.
  </p>
</div>

<p align="center">
  🌐 <a href="https://icip-cas.github.io/LiveMCPBench" target="_blank">Website</a> &nbsp; | &nbsp;
  📄 <a href="https://arxiv.org/abs/2508.01780" target="_blank">Paper</a> &nbsp; | &nbsp;
  💻 <a href="https://github.com/icip-cas/LiveMCPBench" target="_blank">Code</a> &nbsp; | &nbsp;
  🏆 <a href="https://docs.google.com/spreadsheets/d/1EXpgXq1VKw5A7l7-N2E9xt3w0eLJ2YPVPT-VrRxKZBw/edit?usp=sharing" target="_blank">Leaderboard</a> 
  &nbsp; | &nbsp;
  🙏 <a href="#citation" target="_blank">Citation</a>
</p>

## Dataset Description
LiveMCPBench is the first comprehensive benchmark designed to evaluate LLM agents at scale across diverse Model Context Protocol (MCP) servers. It comprises 95 real-world tasks grounded in the MCP ecosystem, challenging agents to use a wide range of tools for everyday scenarios in complex, tool-rich, and dynamic environments. To support scalable and reproducible evaluation, LiveMCPBench is complemented by LiveMCPTool, a diverse collection of 70 MCP servers and 527 tools, and LiveMCPEval, an LLM-as-a-Judge framework for automated and adaptive evaluation. Together, these components provide a unified framework for benchmarking LLM agents in realistic MCP environments, laying a solid foundation for scalable and reproducible research on agent capabilities.

## Dataset Structure
The dataset consists of a single file, `tasks/tasks.json`, which contains the 95 real-world tasks used to benchmark LLM agents.
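
If you prefer to work with the raw file directly, a minimal sketch is shown below. It assumes `tasks/tasks.json` is a JSON array of task objects; the task field names are read from the file at runtime rather than assumed here.

```python
import json

from huggingface_hub import hf_hub_download

# Download the raw task file from the Hub (repo_type="dataset" is required
# because LiveMCPBench is a dataset repository, not a model).
path = hf_hub_download(
    repo_id="ICIP/LiveMCPBench",
    filename="tasks/tasks.json",
    repo_type="dataset",
)

# Assumption: the file is a JSON array of task objects.
with open(path, "r", encoding="utf-8") as f:
    tasks = json.load(f)

print(f"Number of tasks: {len(tasks)}")              # expected: 95
print(f"Fields of the first task: {list(tasks[0].keys())}")
```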

## Sample Usage

You can load the dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("ICIP/LiveMCPBench")

# Print the dataset structure
print(dataset)

# Access an example from the test split
print(dataset["test"][0])
```
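
To browse the tasks, a short sketch like the following should work; it makes no assumptions about the task schema and simply prints each record as a dict (the pandas conversion is optional):

```python
# Preview the first three tasks in the test split.
for i, task in enumerate(dataset["test"]):
    print(f"--- Task {i} ---")
    print(task)
    if i == 2:
        break

# Optional: convert the split to a pandas DataFrame for tabular inspection.
df = dataset["test"].to_pandas()
print(df.head())
```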

## Citation

If you find this project helpful, please cite it as follows:
```bibtex
@misc{mo2025livemcpbenchagentsnavigateocean,
      title={LiveMCPBench: Can Agents Navigate an Ocean of MCP Tools?}, 
      author={Guozhao Mo and Wenliang Zhong and Jiawei Chen and Xuanang Chen and Yaojie Lu and Hongyu Lin and Ben He and Xianpei Han and Le Sun},
      year={2025},
      eprint={2508.01780},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2508.01780}, 
}
```