Modalities: Image · Languages: English
katebor committed · Commit 4170229 · verified · 1 Parent(s): 234feaf

update citation info and link to paper

Files changed (1): README.md (+24 −8)
@@ -29,13 +29,15 @@ size_categories:
 # TableEval dataset
 
 [![GitHub](https://img.shields.io/badge/GitHub-000000?style=flat&logo=github&logoColor=white)](https://github.com/esborisova/TableEval-Study)
+[![ACL](https://img.shields.io/badge/ACL-red)](https://aclanthology.org/2025.trl-1.10/)
 [![arXiv](https://img.shields.io/badge/arXiv-darkred)](https://arxiv.org/abs/2507.00152)
 
+
 **TableEval** is developed to benchmark and compare the performance of (M)LLMs on tables from scientific vs. non-scientific sources, represented as images vs. text.
 It comprises six data subsets derived from the test sets of existing benchmarks for question answering (QA) and table-to-text (T2T) tasks, containing a total of **3017 tables** and **11312 instances**.
 The scientific subset includes tables from pre-prints and peer-reviewed scholarly publications, while the non-scientific subset contains tables from Wikipedia and financial reports.
 Each table is available as a **PNG** image and in four textual formats: **HTML**, **XML**, **LaTeX**, and **Dictionary (Dict)**.
-All task annotations are taken from the source datasets. Please refer to the [Table Understanding and (Multimodal) LLMs: A Cross-Domain Case Study on Scientific vs. Non-Scientific Data](https://arxiv.org/abs/2507.00152) paper for more details.
+All task annotations are taken from the source datasets. Please refer to the [Table Understanding and (Multimodal) LLMs: A Cross-Domain Case Study on Scientific vs. Non-Scientific Data](https://aclanthology.org/2025.trl-1.10/) paper for more details.
 
 
 ## Overview and statistics

@@ -105,16 +107,30 @@ For more details on each subset, please refer to the respective README.md files
 ## Citation
 
 ```
-@inproceedings{borisova-ekaterina-2025,
-    title = "Table Understanding and (Multimodal) LLMs: A Cross-Domain Case Study on Scientific vs. Non-Scientific Data",
-    author = "Borisova, Ekaterina and Barth, Fabio and Feldhus, Nils and
-      Ahmad, Raia Abu and Ostendorff, Malte and Ortiz Suarez, Pedro and
-      Rehm, Georg and Möller, Sebastian",
-    booktitle = "Proceedings of the 4th Workshop on Table Representation Learning (TRL)",
+@inproceedings{borisova-etal-2025-table,
+    title = "Table Understanding and (Multimodal) {LLM}s: A Cross-Domain Case Study on Scientific vs. Non-Scientific Data",
+    author = {Borisova, Ekaterina and
+      Barth, Fabio and
+      Feldhus, Nils and
+      Abu Ahmad, Raia and
+      Ostendorff, Malte and
+      Ortiz Suarez, Pedro and
+      Rehm, Georg and
+      M{\"o}ller, Sebastian},
+    editor = "Chang, Shuaichen and
+      Hulsebos, Madelon and
+      Liu, Qian and
+      Chen, Wenhu and
+      Sun, Huan",
+    booktitle = "Proceedings of the 4th Table Representation Learning Workshop",
+    month = jul,
     year = "2025",
     address = "Vienna, Austria",
     publisher = "Association for Computational Linguistics",
-    comment = "accepted"
+    url = "https://aclanthology.org/2025.trl-1.10/",
+    pages = "109--142",
+    ISBN = "979-8-89176-268-8",
+    abstract = "Tables are among the most widely used tools for representing structured data in research, business, medicine, and education. Although LLMs demonstrate strong performance in downstream tasks, their efficiency in processing tabular data remains underexplored. In this paper, we investigate the effectiveness of both text-based and multimodal LLMs on table understanding tasks through a cross-domain and cross-modality evaluation. Specifically, we compare their performance on tables from scientific vs. non-scientific contexts and examine their robustness on tables represented as images vs. text. Additionally, we conduct an interpretability analysis to measure context usage and input relevance. We also introduce the TableEval benchmark, comprising 3017 tables from scholarly publications, Wikipedia, and financial reports, where each table is provided in five different formats: Image, Dictionary, HTML, XML, and LaTeX. Our findings indicate that while LLMs maintain robustness across table modalities, they face significant challenges when processing scientific tables."
 }
 ```
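
The README excerpt in the diff describes each TableEval table as one PNG image plus four textual serializations (HTML, XML, LaTeX, Dict). A minimal sketch of how a consumer might select one representation per table when evaluating a model on a single modality at a time; the field names and sample values below are illustrative assumptions, not the dataset's actual schema:

```python
# Hypothetical layout for one TableEval record: one image rendering plus
# four textual serializations of the same table. Field names are assumed
# for illustration only -- consult the dataset's README files for the
# real schema.
sample_record = {
    "table_id": "tab_0001",
    "image": "tables/tab_0001.png",  # path to the PNG rendering
    "html": "<table><tr><td>cell</td></tr></table>",
    "xml": "<table><row><cell>cell</cell></row></table>",
    "latex": "\\begin{tabular}{c} cell \\end{tabular}",
    "dict": {"header": ["col"], "rows": [["cell"]]},
}

TEXT_FORMATS = ("html", "xml", "latex", "dict")


def get_representation(record: dict, fmt: str):
    """Return the requested table representation from a record.

    Accepts "image" or one of the four textual formats; raises
    ValueError for anything else.
    """
    if fmt == "image" or fmt in TEXT_FORMATS:
        return record[fmt]
    raise ValueError(f"unknown table format: {fmt!r}")
```

With the real dataset, the same selection step would simply run per record and per split, feeding a text-only LLM the chosen serialization and an MLLM the PNG image.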