---
license: mit
---
The dataset used to train and evaluate [ReT](https://www.arxiv.org/abs/2503.01980) for multimodal information retrieval. It is largely identical to the original M2KR, with a few modifications (see the loading sketch after this list):
- we exclude all data from MSMARCO, as it does not contain query images;
- we add passage images to OVEN, InfoSeek, E-VQA, and OKVQA. Refer to the paper for more details.
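If you only need the annotations rather than the raw image archives, they may be loadable directly with the Hugging Face `datasets` library. The snippet below is a minimal sketch: the config name (`okvqa`) and the split are illustrative assumptions, not the repository's confirmed layout.

```python
# Minimal sketch: load ReT-M2KR annotations with the datasets library.
# ASSUMPTION: the config name "okvqa" and the "train" split are illustrative;
# check the repository's file layout for the actual configs and splits.
from datasets import load_dataset

ds = load_dataset("aimagelab/ReT-M2KR", "okvqa", split="train")
print(ds[0])  # one record, e.g. a query paired with its target passage
```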
## Sources
- **Repository:** https://github.com/aimagelab/ReT
- **Paper:** [Recurrence-Enhanced Vision-and-Language Transformers for Robust Multimodal Document Retrieval](https://www.arxiv.org/abs/2503.01980) (CVPR 2025)
## Download images (coming soon)
1. Initialize git LFS
```
git lfs install
```
2. Clone the repository (this will take a while; a selective-download alternative follows these steps)
```
git clone https://huggingface.co/datasets/aimagelab/ReT-M2KR
```
3. Decompress the images (again, this will take a while)
```
cat ret-img-{000..129}.tar.gz | tar xzf -
```
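If you do not need all 130 archives, a subset can be fetched without a full git clone via `huggingface_hub.snapshot_download`. The snippet below is a sketch: the file pattern follows the `ret-img-{000..129}.tar.gz` naming from step 3 and grabs only the first ten archives.

```python
# Sketch: fetch a subset of the image archives instead of cloning everything.
# The file pattern follows the ret-img-{000..129}.tar.gz naming from step 3.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="aimagelab/ReT-M2KR",
    repo_type="dataset",
    allow_patterns=["ret-img-00*.tar.gz"],  # ret-img-000 ... ret-img-009 only
    local_dir="ReT-M2KR",
)
```

The `tar` command from step 3 then works on whichever archives you downloaded (adjust the brace range accordingly).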
## Citation
**BibTeX:**
```
@inproceedings{caffagni2025recurrence,
  title={{Recurrence-Enhanced Vision-and-Language Transformers for Robust Multimodal Document Retrieval}},
  author={Caffagni, Davide and Sarto, Sara and Cornia, Marcella and Baraldi, Lorenzo and Cucchiara, Rita},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2025}
}
```