---
datasets:
- llmsql-bench/llmsql-benchmark
tags:
- text-to-sql
- benchmark
- evaluation
license: mit
language:
- en
bibtex:
- >-
@article{pihulski2025llmsql, title={LLMSQL: Upgrading WikiSQL for the LLM
Era of Text-to-SQL}, author={Dzmitry Pihulski and Karol Charchut and
Viktoria Novogrodskaia and Jan Kocoń}, journal={arXiv preprint
arXiv:2510.02350}, year={2025}, url={https://arxiv.org/abs/2510.02350} }
task_categories:
- question-answering
- text-generation
pretty_name: LLMSQL Benchmark
size_categories:
- 10K<n<100K
repository: https://github.com/LLMSQL/llmsql-benchmark
---

# LLMSQL Benchmark
This benchmark is designed for evaluating text-to-SQL models. For usage instructions, see https://github.com/LLMSQL/llmsql-benchmark.

arXiv article: https://arxiv.org/abs/2510.02350
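The question splits are plain JSONL files, so one quick way to explore them is to read them straight from the Hub. The snippet below is a minimal sketch, assuming the files can be streamed via the `hf://` protocol of the `datasets` library; the officially supported loading code lives in the GitHub repository linked above.

```python
# Minimal sketch: load the question splits directly from the Hub.
# Assumes the JSONL files are readable via the hf:// protocol of the
# `datasets` library; see the GitHub repository for the supported workflow.
from datasets import load_dataset

splits = load_dataset(
    "json",
    data_files={
        "train": "hf://datasets/llmsql-bench/llmsql-benchmark/train_questions.jsonl",
        "validation": "hf://datasets/llmsql-bench/llmsql-benchmark/val_questions.jsonl",
        "test": "hf://datasets/llmsql-bench/llmsql-benchmark/test_questions.jsonl",
    },
)
print(splits)  # DatasetDict with train / validation / test splits
```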
## Files
- `tables.jsonl`: database table metadata
- `questions.jsonl`: all available questions
- `train_questions.jsonl`, `val_questions.jsonl`, `test_questions.jsonl`: data splits for finetuning; see https://github.com/LLMSQL/llmsql-benchmark
- `sqlite_tables.db`: SQLite database with the tables from `tables.jsonl`, created with the help of `create_db.sql`
- `create_db.sql`: SQL script that creates the database `sqlite_tables.db`
`test_output.jsonl` is not included in the dataset.
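To execute generated SQL locally, `sqlite_tables.db` can be opened with Python's built-in `sqlite3` module. The snippet below is a minimal sketch, assuming the file has been downloaded from this repository; the actual table names and schemas are defined by `create_db.sql` and described in `tables.jsonl`.

```python
# Minimal sketch: inspect the bundled SQLite database.
# Assumes sqlite_tables.db has been downloaded from this repository.
import sqlite3

conn = sqlite3.connect("sqlite_tables.db")
cur = conn.cursor()

# List a few of the tables created by create_db.sql.
cur.execute("SELECT name FROM sqlite_master WHERE type='table' LIMIT 5")
print(cur.fetchall())

conn.close()
```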
## Citation

If you use this benchmark, please cite:
```bibtex
@inproceedings{llmsql_bench,
  title={LLMSQL: Upgrading WikiSQL for the LLM Era of Text-to-SQL},
  author={Pihulski, Dzmitry and Charchut, Karol and Novogrodskaia, Viktoria and Koco{\'n}, Jan},
  booktitle={2025 IEEE International Conference on Data Mining Workshops (ICDMW)},
  year={2025},
  organization={IEEE}
}
```