# AI Spreadsheet Benchmark Dataset
The AI Spreadsheet Benchmark captures 53 realistic spreadsheet prompts spanning analysis, enrichment, visualization, and workbook-management workflows. It is designed to evaluate how spreadsheet copilots behave in situ: Do they write formulas? Do charts stay linked to data? Can the output recompute when numbers change?
- **Paper:** ["The AI Spreadsheet Benchmark: Measuring Dynamic Output in Spreadsheet Assistants"](https://huggingface.co/datasets/rowshq/aispreadsheetbenchmark/blob/main/technical_paper.pdf)
- **Dataset:** `rowshq/aispreadsheetbenchmark`
- **Metrics:** Pass@1, Pass@3, Dynamic Output Rate, Latency
## Accessing the dataset
```python
from datasets import load_dataset

benchmark = load_dataset("rowshq/aispreadsheetbenchmark")
questions = benchmark["questions"]  # prompt text, categories, scoring metadata
```
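A quick sanity check after loading, using standard `datasets` accessors; the exact field names printed depend on the release schema:
```python
# How many prompts, and what fields does each record carry?
print(len(questions))          # expected: 53 prompts
print(questions.column_names)  # actual schema of the release
print(questions[0])            # first prompt record as a plain dict
```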
## Task categories
| Category | Tasks | Examples |
| --- | --- | --- |
| Classic Data Analysis | 37 | YoY growth columns, lookups, joins, dashboards, cohort tables |
| Advanced Analysis | 5 | K-means clustering, forecasting, anomaly detection, custom visualizations |
| Creating Models | 2 | Interactive head-to-head calculators, investment simulators |
| Manage Spreadsheet Elements | 3 | Conditional formatting, sorting, chart styling, sheet setup |
| Arithmetic Operations | 6 | High-precision arithmetic sanity checks |
Each prompt record contains:
- Natural-language instructions
- Category and sub-category labels (usable for filtering, as sketched below)
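A minimal filtering sketch, assuming the category label is stored in a field named `category` with the exact strings from the table above; verify against `questions.column_names` before relying on it:
```python
# Hypothetical slice: keep only the Classic Data Analysis prompts.
# The field name "category" and the label string are assumptions, not confirmed schema.
classic = questions.filter(lambda ex: ex["category"] == "Classic Data Analysis")
print(len(classic))  # expected: 37 if the label matches the table above
```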
## Recommended evaluation protocol
1. **Reset** the workbook to the canonical dataset before each run.
2. **Issue** prompts verbatim. If the assistant asks for clarification, respond neutrally while keeping the task scope fixed.
3. **Assess success** using the published acceptance criteria (execution checks whenever possible).
4. **Assess dynamic output** by perturbing the underlying data and verifying that the response updates automatically (no pasted values or screenshots).
5. **Measure latency** from prompt submission to assistant completion.
6. **Compute metrics:** Pass@1, Pass@3 (up to three attempts per task), Dynamic Output Rate, and mean/median latency (a scoring sketch follows this list).
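A minimal scoring sketch under stated assumptions: the `TaskResult` record below is an illustrative structure you would populate yourself, not part of the dataset, and using passed tasks as the Dynamic Output Rate denominator is our reading of the protocol rather than a published definition:
```python
from dataclasses import dataclass
from statistics import mean, median

@dataclass
class TaskResult:
    attempts: list[bool]    # per-attempt pass/fail against acceptance criteria (up to 3)
    dynamic: bool = False   # True if the output recomputed after perturbing the data
    latency_s: float = 0.0  # seconds from prompt submission to assistant completion

def score(results: list[TaskResult]) -> dict[str, float]:
    n = len(results)
    pass_at_1 = sum(r.attempts[:1] == [True] for r in results) / n
    pass_at_3 = sum(any(r.attempts[:3]) for r in results) / n
    # Denominator assumption: dynamic output is judged only on tasks that passed.
    passed = [r for r in results if any(r.attempts[:3])]
    dynamic_rate = sum(r.dynamic for r in passed) / len(passed) if passed else 0.0
    latencies = [r.latency_s for r in results]
    return {
        "pass@1": pass_at_1,
        "pass@3": pass_at_3,
        "dynamic_output_rate": dynamic_rate,
        "mean_latency_s": mean(latencies),
        "median_latency_s": median(latencies),
    }
```
For example, `score([TaskResult(attempts=[False, True], dynamic=True, latency_s=120.0)])` scores a task that failed on attempt 1, passed on attempt 2, and stayed dynamic.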
## Baseline results
Initial evaluation across five assistants (Rows AI Analyst, Microsoft Excel Copilot, Google Sheets + Gemini, Shortcut, Julius):
| Assistant | Pass@1 (%) | Pass@3 (%) | Dynamic output (%) | Mean time (s) |
| --- | --- | --- | --- | --- |
| Rows AI Analyst | 89 | 92 | 74 | 220 |
| Microsoft Excel Copilot | 53 | 64 | 8 | 46 |
| Google Sheets + Gemini | 57 | 64 | 6 | 11 |
| Shortcut | 83 | 83 | 13 | 222 |
| Julius | 75 | 83 | 0 | 30 |
Detailed per-category tables and visualizations appear in the accompanying technical paper.
## Citation
```bibtex
@misc{rowshq2025benchmark,
  title = {The AI Spreadsheet Benchmark: Measuring Dynamic Output in Spreadsheet Assistants},
  author = {Samagaio, Álvaro Mendes and Cruz, Henrique and Pereira, Humberto Ayres and Schulz, Torben},
  year = {2025},
  url = {https://huggingface.co/datasets/rowshq/aispreadsheetbenchmark/blob/main/technical_paper.pdf}
}
```
## Questions & contributions
Open a discussion or issue on the Hugging Face dataset page if you:
- Find discrepancies in acceptance criteria or scoring instructions
- Want to share new assistant baselines or evaluation tooling
- Plan to extend the benchmark with additional domains or datasets
We welcome contributions that improve documentation, acceptance criteria, or reproducibility assets. Reach out via the dataset page to coordinate substantial updates.