nielsr (HF Staff) committed · Commit e38341a · verified · 1 Parent(s): ed79424

Update task category to image-text-to-text


This PR updates the `task_categories` metadata tag from `text-generation` to `image-text-to-text`. This change better reflects the dataset's nature as a comprehensive benchmark for evaluating LLM agents that navigate and interact with a diverse set of Model Context Protocol (MCP) tools in real-world, potentially multimodal, environments, as described in the paper "LiveMCPBench: Can Agents Navigate an Ocean of MCP Tools?".
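
Once merged, the new tag should be visible programmatically. Below is a minimal sketch using `huggingface_hub` to read the card metadata; the repo id is a placeholder, since this page does not show the full repo path:

```python
# Minimal sketch: read the dataset card metadata after the merge.
# "ORG/LiveMCPBench" is a placeholder repo id, not taken from this PR.
from huggingface_hub import DatasetCard

card = DatasetCard.load("ORG/LiveMCPBench")
print(card.data.task_categories)  # expected after this PR: ['image-text-to-text']
```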

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
```diff
@@ -4,9 +4,9 @@ language:
 license: apache-2.0
 size_categories:
 - n<1K
-pretty_name: LiveMCPBench
 task_categories:
-- text-generation
+- image-text-to-text
+pretty_name: LiveMCPBench
 library_name: datasets
 tags:
 - llm-agents
@@ -42,7 +42,7 @@ configs:
 </p>
 
 ## Dataset Description
-LiveMCPBench is the first comprehensive benchmark designed to evaluate LLM agents at scale across diverse Model Context Protocol (MCP) servers. It comprises 95 real-world tasks grounded in the MCP ecosystem, challenging agents to effectively use various tools in daily scenarios within complex, tool-rich, and dynamic environments. To support scalable and reproducible evaluation, LiveMCPBench is complemented by LiveMCPTool (a diverse collection of 70 MCP servers and 527 tools) and LiveMCPEval (an LLM-as-a-Judge framework for automated and adaptive evaluation). The benchmark offers a unified framework for benchmarking LLM agents in realistic, tool-rich, and dynamic MCP environments, laying a solid foundation for scalable and reproducible research on agent capabilities.
+LiveMCPBench is the first comprehensive benchmark designed to evaluate LLM agents at scale across diverse Model Context Protocol (MCP) servers. It comprises 95 real-world tasks grounded in the MCP ecosystem, challenging agents to effectively use various tools in daily scenarios within complex, tool-rich, and dynamic environments. To support scalable and reproducible evaluation, LiveMCPBench is complemented by LiveMCPTool (a diverse collection of 70 MCP servers and 527 tools) and LiveMCPEval (an LLM-as-a-Judge framework that enables automated and adaptive evaluation). The benchmark offers a unified framework for benchmarking LLM agents in realistic, tool-rich, and dynamic MCP environments, laying a solid foundation for scalable and reproducible research on agent capabilities.
 
 ## Dataset Structure
 The dataset consists of `tasks.json`, which contains the 95 real-world tasks used for benchmarking LLM agents.
```
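
For reference, here is a minimal sketch of loading the tasks from a local copy of `tasks.json`, assuming the file parses as a JSON array (or JSON Lines) of task records; the exact fields per task are defined in the repo, not here:

```python
# Minimal sketch: load the 95 benchmark tasks from a local copy of tasks.json.
# Assumes the file is a JSON array (or JSON Lines) of task records.
from datasets import load_dataset

tasks = load_dataset("json", data_files="tasks.json", split="train")
print(tasks.num_rows)  # expected: 95
```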