hysdhlx nielsr HF Staff committed on
Commit ed79424 · verified · 1 Parent(s): 3e432a3

Improve dataset card: Add task category, library name, tags, and sample usage (#1)

- Improve dataset card: Add task category, library name, tags, and sample usage (3df68412e3d109aa4646629614cc2e6bc122ae99)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1):
README.md +32 -5
README.md CHANGED
@@ -1,15 +1,23 @@
 ---
-license: apache-2.0
 language:
 - en
-pretty_name: LiveMCPBench
 size_categories:
 - n<1K
 configs:
 - config_name: default
   data_files:
   - split: test
-    path: "tasks/tasks.json"
 ---
 
 <a id="readme-top"></a>
@@ -34,9 +42,27 @@ configs:
 </p>
 
 ## Dataset Description
-LiveMCPBench is a benchmark for evaluating the ability of agents to navigate and utilize a large-scale MCP toolset.
 
-It provides a comprehensive set of tasks that challenge agents to effectively use various tools in daily scenarios.
 
 ## Citation
 
@@ -51,3 +77,4 @@ If you find this project helpful, please use the following to cite it:
   primaryClass={cs.AI},
   url={https://arxiv.org/abs/2508.01780},
 }
 
 
 ---
 language:
 - en
+license: apache-2.0
 size_categories:
 - n<1K
+pretty_name: LiveMCPBench
+task_categories:
+- text-generation
+library_name: datasets
+tags:
+- llm-agents
+- tool-use
+- benchmark
+- mcp
 configs:
 - config_name: default
   data_files:
   - split: test
+    path: tasks/tasks.json
 ---
 
 <a id="readme-top"></a>
 
 </p>
 
 ## Dataset Description
+LiveMCPBench is the first comprehensive benchmark designed to evaluate LLM agents at scale across diverse Model Context Protocol (MCP) servers. It comprises 95 real-world tasks grounded in the MCP ecosystem, challenging agents to use a large tool set effectively in everyday scenarios. To support scalable and reproducible evaluation, LiveMCPBench is complemented by LiveMCPTool (a diverse collection of 70 MCP servers and 527 tools) and LiveMCPEval (an LLM-as-a-Judge framework for automated, adaptive evaluation). Together, these components provide a unified framework for benchmarking LLM agents in realistic, tool-rich, and dynamic MCP environments, laying a foundation for reproducible research on agent capabilities.
+
+## Dataset Structure
+The dataset consists of `tasks.json`, which contains the 95 real-world tasks used for benchmarking LLM agents.
+
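+For a quick look at the raw file without the `datasets` library, `tasks.json` can also be read directly. The sketch below is a minimal example that assumes the file is a JSON array of task objects; the actual field names come from the file itself:
+
+```python
+import json
+
+# Read the raw task list (assumes tasks.json is a JSON array of task objects)
+with open("tasks/tasks.json", "r", encoding="utf-8") as f:
+    tasks = json.load(f)
+
+print(len(tasks))       # expected to be 95, per the description above
+print(tasks[0].keys())  # the real field names are defined by the file
+```
+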
+## Sample Usage
+
+You can load the dataset using the Hugging Face `datasets` library:
+
+```python
+from datasets import load_dataset
+
+# Load the dataset
+dataset = load_dataset("ICIP/LiveMCPBench")
+
+# Print the dataset structure
+print(dataset)
+
+# Access an example from the test split
+print(dataset["test"][0])
+```
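+
+The split name (`test`) comes from the `configs` section in the YAML header above; the column names can be discovered at runtime instead of being assumed. A minimal follow-up sketch using only the standard `datasets` API:
+
+```python
+# Inspect the available splits, row count, and column names
+print(list(dataset.keys()))
+print(dataset["test"].num_rows)
+print(dataset["test"].column_names)
+
+# Iterate over the tasks; each row is a dict keyed by the columns above
+for task in dataset["test"]:
+    ...
+```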
 
 ## Citation
 
   primaryClass={cs.AI},
   url={https://arxiv.org/abs/2508.01780},
 }
+```