Populate dataset card for RA-HMD with metadata, links, description, and usage (#1)
Co-authored-by: Niels Rogge <[email protected]>
README.md (ADDED)

---
license: cc-by-nc-4.0
task_categories:
- image-text-to-text
language:
- en
tags:
- hateful-memes
- multimodal
- retrieval-augmented-generation
- vision-language
---

# RA-HMD Dataset

This repository contains the dataset for the paper [Robust Adaptation of Large Multimodal Models for Retrieval Augmented Hateful Meme Detection](https://huggingface.co/papers/2502.13061).

This dataset supports the development of robust automated hateful meme detection systems. It is designed to enhance the in-domain accuracy and cross-domain generalization of Large Multimodal Models (LMMs) while preserving their general vision-language capabilities. The data includes the original datasets and a converted format suitable for use with the [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) framework for stage 1 training of the RA-HMD model.

For more details and related resources:
- **Paper**: [Robust Adaptation of Large Multimodal Models for Retrieval Augmented Hateful Meme Detection](https://huggingface.co/papers/2502.13061)
- **Code (GitHub)**: https://github.com/JingbiaoMei/RGCL
- **Project Page**: https://rgclmm.github.io/

### Sample Usage

The following instructions are derived from the [GitHub repository](https://github.com/JingbiaoMei/RGCL) and show how to set up the environment and generate embeddings for the dataset.
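
If you only want to inspect the annotations, they can be loaded with the `datasets` library, independently of the pipeline below. This is a minimal sketch, not part of the repository's instructions; the repository id and file path are placeholders to adapt to the actual file layout shown in the Files tab:

```python
# Hypothetical sketch: the repo id and filename below are placeholders,
# not confirmed paths in this repository.
from huggingface_hub import hf_hub_download
from datasets import load_dataset

path = hf_hub_download(
    repo_id="<user>/<this-dataset-repo>",  # placeholder
    repo_type="dataset",
    filename="data/gt/FB/train.jsonl",     # placeholder: check the Files tab
)
ds = load_dataset("json", data_files=path, split="train")
print(ds[0])
```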

#### Setup Environment for RA-HMD

```bash
git clone https://github.com/JingbiaoMei/RGCL.git
cd RGCL/LLAMA-FACTORY
conda create -n llamafact python=3.10
conda activate llamafact
pip install -e ".[torch,metrics,deepspeed,liger-kernel,bitsandbytes,qwen]"
pip install torchmetrics wandb easydict
pip install qwen_vl_utils torchvision
# Install FAISS
conda install -c pytorch -c nvidia faiss-gpu=1.7.4 mkl=2021 blas=1.0=mkl
```
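
As an optional sanity check (not part of the original instructions), you can confirm that the GPU build of FAISS imports cleanly and sees your GPUs:

```bash
# Optional check, not from the repository's instructions:
python3 -c "import faiss; print(faiss.__version__, 'GPUs visible:', faiss.get_num_gpus())"
```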

#### Dataset Preparation - Generate CLIP Embeddings

First, copy the image data into `./data/image/dataset_name/All` and the annotation data (`jsonl`) into `./data/gt/dataset_name`. Then generate the CLIP embeddings:

```shell
python3 src/utils/generate_CLIP_embedding_HF.py --dataset "FB"
python3 src/utils/generate_CLIP_embedding_HF.py --dataset "HarMeme"
```
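
For intuition, this is roughly what such a script computes per image. A minimal sketch using Hugging Face `transformers`; the checkpoint name and image path are assumptions, and the repo's `generate_CLIP_embedding_HF.py` remains the authoritative implementation:

```python
# Illustrative sketch only: checkpoint and paths are assumptions,
# not the repo's exact configuration.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_name = "openai/clip-vit-large-patch14"  # assumed checkpoint
model = CLIPModel.from_pretrained(model_name).eval()
processor = CLIPProcessor.from_pretrained(model_name)

image = Image.open("./data/image/FB/All/example.png").convert("RGB")  # hypothetical file
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    emb = model.get_image_features(**inputs)   # (1, 768) for ViT-L/14
emb = emb / emb.norm(dim=-1, keepdim=True)     # unit-normalise for cosine retrieval
```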

#### Dataset Preparation - Generate ALIGN Embeddings

Similarly, generate the ALIGN embeddings:

```shell
python3 src/utils/generate_ALIGN_embedding_HF.py --dataset "FB"
python3 src/utils/generate_ALIGN_embedding_HF.py --dataset "HarMeme"
```
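
These precomputed embeddings back the retrieval step in retrieval-augmented detection. A minimal sketch of how they can be indexed and queried with the FAISS package installed above; the array shapes, variable names, and exact index type are assumptions, with random arrays standing in for the saved embeddings:

```python
# Illustrative sketch only: random stand-ins replace the saved embeddings.
import faiss
import numpy as np

d = 768                                                # depends on the encoder used
train_emb = np.random.rand(1000, d).astype("float32")  # stand-in: training meme embeddings
query_emb = np.random.rand(4, d).astype("float32")     # stand-in: query meme embeddings

faiss.normalize_L2(train_emb)   # inner product over unit vectors = cosine similarity
faiss.normalize_L2(query_emb)

index = faiss.IndexFlatIP(d)    # exact inner-product index
index.add(train_emb)
scores, ids = index.search(query_emb, 5)  # top-5 most similar training memes per query
print(ids.shape)  # (4, 5)
```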

### Citation

If you use this dataset in your research, please cite the corresponding paper:

```bibtex
@article{RAHMD2025Mei,
  title={Robust Adaptation of Large Multimodal Models for Retrieval Augmented Hateful Meme Detection},
  url={http://arxiv.org/abs/2502.13061},
  doi={10.48550/arXiv.2502.13061},
  note={arXiv:2502.13061 [cs]},
  number={arXiv:2502.13061},
  publisher={arXiv},
  author={Mei, Jingbiao and Chen, Jinghong and Yang, Guangyu and Lin, Weizhe and Byrne, Bill},
  year={2025},
  month=may
}
```