---
dataset_info:
  features:
    - name: images
      sequence: image
    - name: problem
      dtype: string
    - name: answer
      dtype: string
  splits:
    - name: train
      num_bytes: 167434471
      num_examples: 2000
  download_size: 166955903
  dataset_size: 167434471
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

This is the official release of the training data for the paper *PAPO: Perception-Aware Policy Optimization for Multimodal Reasoning* ([arXiv:2507.06448](https://arxiv.org/abs/2507.06448)).

(Optional) This dataset can be used as the val split of the training data for PAPO. The full training dataset is available at [PAPOGalaxy/PAPO_ViRL39K_train](https://huggingface.co/datasets/PAPOGalaxy/PAPO_ViRL39K_train).

## Data Source

### Training

- We adapt the multimodal benchmark [TIGER-Lab/ViRL39K](https://huggingface.co/datasets/TIGER-Lab/ViRL39K) to construct our PAPO training dataset.

### Validation (Optional)

- (Optional) We use the test set from [FanqingM/MMK12](https://huggingface.co/datasets/FanqingM/MMK12) for validation during training.
- Note that this is solely for monitoring; we do not select checkpoints based on it in our paper.

## Dataset Structure

- `train`: training set consisting of 38,870 multimodal reasoning samples
- `val`: validation set consisting of 2,000 multimodal reasoning samples

## Data Fields

- `id`: data ID
  - data type: string
- `problem`: input question or statement
  - data type: string
- `images`: input image(s)
  - data type: list of images
- `answer`: ground-truth answer
  - data type: string
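
As a quick illustration of these fields, here is a minimal sketch using the Hugging Face `datasets` library. It assumes the data is exposed under a single `train` split, as declared in this card's metadata; note that the metadata lists only the `images`, `problem`, and `answer` columns, so the `id` field may not be present in every release.

```python
from datasets import load_dataset

# Minimal sketch: inspect the fields of one record.
# Assumes the split is named "train", per this card's metadata.
ds = load_dataset("PAPOGalaxy/PAPO_MMK12_test", split="train")

example = ds[0]
print(example["problem"])          # input question or statement (string)
print(example["answer"])           # ground-truth answer (string)
print(len(example["images"]))      # number of input images for this sample
print(type(example["images"][0]))  # decoded as a PIL image by `datasets`
```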

## Usage

To use the full dataset with both the train and val splits, you can load them as follows:

```python
from datasets import load_dataset

# Train split (ViRL39K-based training data)
train_dataset = load_dataset("PAPOGalaxy/PAPO_ViRL39K_train")

# Val split (MMK12-based validation data, this repository)
val_dataset = load_dataset("PAPOGalaxy/PAPO_MMK12_test")
```