---
datasets:
  - llmsql-bench/llmsql-benchmark
tags:
  - text-to-sql
  - benchmark
  - evaluation
license: mit
language:
  - en
bibtex:
  - >-
    @article{pihulski2025llmsql, title={LLMSQL: Upgrading WikiSQL for the LLM
    Era of Text-to-SQL}, author={Dzmitry Pihulski and Karol Charchut and
    Viktoria Novogrodskaia and Jan Kocoń}, journal={arXiv preprint
    arXiv:2510.02350}, year={2025}, url={https://arxiv.org/abs/2510.02350} }
task_categories:
  - question-answering
  - text-generation
pretty_name: LLMSQL Benchmark
size_categories:
  - 10K<n<100K
repository: https://github.com/LLMSQL/llmsql-benchmark
---

# LLMSQL Benchmark

This benchmark is designed to evaluate text-to-SQL models. For usage instructions, see https://github.com/LLMSQL/llmsql-benchmark; a minimal loading sketch is also shown below.

arXiv article: https://arxiv.org/abs/2510.02350
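
As a quick-start sketch, the question splits can be pulled directly with the `datasets` library. The repository ID and file names below come from this card; the assumption that the JSONL files load via the generic JSON builder (no custom configuration) is ours, so prefer the official tooling in the GitHub repo for evaluation runs.

```python
# Minimal sketch: load the question splits with the `datasets` library.
# Assumes the JSONL files load via the generic JSON builder; see the
# GitHub repo above for the official evaluation tooling.
from datasets import load_dataset

splits = load_dataset(
    "llmsql-bench/llmsql-benchmark",
    data_files={
        "train": "train_questions.jsonl",
        "validation": "val_questions.jsonl",
        "test": "test_questions.jsonl",
    },
)
print(splits)                    # DatasetDict with train/validation/test
print(splits["test"][0].keys())  # inspect the available fields
```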

## Files

- `tables.jsonl` — metadata for the database tables
- `questions.jsonl` — all available questions
- `train_questions.jsonl`, `val_questions.jsonl`, `test_questions.jsonl` — data splits for fine-tuning; see https://github.com/LLMSQL/llmsql-benchmark
- `sqlite_tables.db` — SQLite database built from the tables in `tables.jsonl` using `create_db.sql` (see the inspection sketch below)
- `create_db.sql` — SQL script that creates the database `sqlite_tables.db`

`test_output.jsonl` is not included in the dataset.
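
The sketch below shows one way to inspect `tables.jsonl` and `sqlite_tables.db` after downloading them from the Hub. File and repository names are taken from this card; no field or table names are assumed, so the snippet only lists the keys of the first metadata record and counts the tables in the SQLite database.

```python
# Minimal sketch: inspect tables.jsonl and sqlite_tables.db from the Hub.
import json
import sqlite3

from huggingface_hub import hf_hub_download

repo = "llmsql-bench/llmsql-benchmark"
tables_path = hf_hub_download(repo_id=repo, filename="tables.jsonl", repo_type="dataset")
db_path = hf_hub_download(repo_id=repo, filename="sqlite_tables.db", repo_type="dataset")

# Peek at the metadata keys of the first table entry.
with open(tables_path, encoding="utf-8") as f:
    first_table = json.loads(next(f))
print(sorted(first_table.keys()))

# Count the tables materialised in the SQLite database.
con = sqlite3.connect(db_path)
n_tables = con.execute(
    "SELECT COUNT(*) FROM sqlite_master WHERE type = 'table'"
).fetchone()[0]
print(f"{n_tables} tables in sqlite_tables.db")
con.close()
```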

## Citation

If you use this benchmark, please cite:

```bibtex
@inproceedings{llmsql_bench,
  title={LLMSQL: Upgrading WikiSQL for the LLM Era of Text-to-SQL},
  author={Pihulski, Dzmitry and Charchut, Karol and Novogrodskaia, Viktoria and Koco{\'n}, Jan},
  booktitle={2025 IEEE International Conference on Data Mining Workshops (ICDMW)},
  year={2025},
  organization={IEEE}
}
```