---
datasets:
- llmsql-bench/llmsql-benchmark
tags:
- text-to-sql
- benchmark
- evaluation
license: mit
language:
- en
---
# LLMSQL Benchmark

This benchmark is designed to evaluate text-to-SQL models. For usage instructions, see `https://github.com/LLMSQL/llmsql-benchmark`.

## Files

- `tables.jsonl` — Database table metadata
- `questions.jsonl` — All available questions
- `train_questions.jsonl`, `val_questions.jsonl`, `test_questions.jsonl` — Data splits for finetuning; see `https://github.com/LLMSQL/llmsql-benchmark`
- `create_db.sql` — SQL script that creates the database schema. This file is optional: instead of running it, you can download the ready-made `.db` file from the `llmsql-bench/llmsql-benchmark-db` repo.

`test_output.jsonl` is **not included** in the dataset.
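The `.jsonl` files above are in JSON Lines format (one JSON object per line). A minimal loader sketch using only the standard library; the record field names shown in the comment are illustrative assumptions, not the actual schema:

```python
import json


def load_jsonl(path):
    """Read a JSON Lines file into a list of dicts, skipping blank lines."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]


# Example (hypothetical field names -- inspect the files for the real schema):
# questions = load_jsonl("questions.jsonl")
# print(questions[0])
```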
## Citation

If you use this benchmark, please cite:

```
@inproceedings{llmsql_bench,
  title={LLMSQL: Upgrading WikiSQL for the LLM Era of Text-to-SQL Models},
  author={Pihulski, Dzmitry and Charchut, Karol and Novogrodskaia, Viktoria and Koco{\'n}, Jan},
  booktitle={2025 IEEE International Conference on Data Mining Workshops (ICDMW)},
  pages={...},
  year={2025},
  organization={IEEE}
}
```