---
language:
- en
pretty_name: 'Comics: Pick-A-Panel'
tags:
- comics
---
# Comics: Pick-A-Panel
This is the dataset for the [ICDAR 2025 Competition on Comics Understanding in the Era of Foundational Models](https://rrc.cvc.uab.es/?ch=31&com=introduction)
The dataset contains five subtasks or skills:
### Sequence Filling
![Sequence Filling](figures/seq_filling.png)
<details>
<summary>Task Description</summary>
Given a sequence of comic panels, a missing panel, and a set of option panels, the task is to select the panel that best fits the sequence.
</details>
### Character Coherence, Visual Closure, Text Closure
![Character Coherence](figures/closure.png)
<details>
<summary>Task Description</summary>
These skills require understanding the context sequence in order to pick the best panel to continue the story, focusing on the characters, the visual elements, and the text:
- Character Coherence: Given a sequence of comic panels, pick the panel from the two options that continues the story coherently with the characters. Both options are the same panel, but the text in the speech bubbles has been swapped.
- Visual Closure: Given a sequence of comic panels, pick the panel from the options that continues the story coherently with the visual elements.
- Text Closure: Given a sequence of comic panels, pick the panel from the options that continues the story coherently with the text. All options are the same panel, but with speech bubble text retrieved from different panels.
</details>
### Caption Relevance
![Caption Relevance](figures/caption_relevance.png)
<details>
<summary>Task Description</summary>
Given a caption from the previous panel, select the panel that best continues the story.
</details>
## Loading the Data
```python
from datasets import load_dataset
skill = "sequence_filling" # "sequence_filling", "char_coherence", "visual_closure", "text_closure", "caption_relevance"
split = "val" # "val", "test"
dataset = load_dataset("VLR-CVC/ComPAP", skill, split=split)
```
<details>
<summary>Map to single images</summary>
If your model can only process single images, you can render each sample as a single image:
_coming soon_
</details>
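While the official rendering utility is not yet released, one possible approach is to paste the context panels and option panels into a single grid image with Pillow. This is a minimal sketch, not the competition's official format: the function name, grid layout, and padding are illustrative choices, and how you obtain the list of panel images from each sample depends on the dataset's fields.

```python
from PIL import Image

def panels_to_grid(panels, cols=4, pad=8, bg=(255, 255, 255)):
    """Paste a list of PIL images into one grid image (illustrative sketch).

    All panels are resized to the size of the first panel so the grid
    is uniform; rows are filled left to right, top to bottom.
    """
    if not panels:
        raise ValueError("panels must be a non-empty list of PIL images")
    w, h = panels[0].size
    panels = [p.resize((w, h)) for p in panels]
    rows = (len(panels) + cols - 1) // cols
    grid = Image.new(
        "RGB",
        (cols * w + (cols + 1) * pad, rows * h + (rows + 1) * pad),
        bg,
    )
    for i, panel in enumerate(panels):
        r, c = divmod(i, cols)
        grid.paste(panel, (pad + c * (w + pad), pad + r * (h + pad)))
    return grid
```

You could then feed the resulting single image to the model along with a prompt describing which cells are context and which are options.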
## Submit Results and Leaderboard
The competition is hosted on the [Robust Reading Competition website](https://rrc.cvc.uab.es/?ch=31&com=introduction) and the leaderboard is available [here](https://rrc.cvc.uab.es/?ch=31&com=evaluation).
## Citation
_coming soon_