
MemoryBench Dataset

MemoryBench is a benchmark dataset designed to evaluate spatial memory and action recall in robotic manipulation. This dataset accompanies the SAM2Act+ framework, introduced in the paper SAM2Act: Integrating Visual Foundation Model with A Memory Architecture for Robotic Manipulation. For detailed task descriptions and more information about the paper, please visit SAM2Act's website.

The dataset contains scripted demonstrations for three memory-dependent tasks designed in RLBench (same version as the one used in PerAct):

  • Reopen Drawer: Tests 3D spatial memory along the z-axis.
  • Put Block Back: Evaluates 2D spatial memory along the x-y plane.
  • Rearrange Block: Requires backward reasoning based on prior actions.

Dataset Structure

The dataset is organized as follows:

data/
├── train/  # 100 episodes per task
├── test/   # 25 episodes per task
└── files/  # task files (.ttm & .py)
  • data/train/: Contains three zip files, each corresponding to one of the three tasks. Each zip file contains 100 scripted demonstrations for training.
  • data/test/: Contains zip files for the same three tasks, each with 25 held-out demonstrations for evaluation.
  • data/files/: Includes necessary .ttm and .py files for running evaluation.
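
To fetch the data locally, a minimal sketch is shown below that downloads this repository with huggingface_hub and unpacks the training archives. The repo_id value is a placeholder (not the actual repository ID), and the sketch assumes the archives sit under data/train/ as .zip files per the layout above; substitute the real values for this dataset.

# Minimal sketch: download the dataset and unpack the training archives.
# "<user>/MemoryBench" is a placeholder repository ID, not the actual one.
import zipfile
from pathlib import Path

from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="<user>/MemoryBench", repo_type="dataset")

train_dir = Path(local_dir) / "data" / "train"
extract_root = Path("memorybench") / "train"
extract_root.mkdir(parents=True, exist_ok=True)

# Each task ships as one zip file of scripted demonstrations.
for archive in sorted(train_dir.glob("*.zip")):
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(extract_root)
    print(f"Extracted {archive.name} -> {extract_root}")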

Usage

This dataset is designed to be used in the same manner as the RLBench 18 Tasks proposed by PerAct. You can follow the same usage guidelines, or refer to SAM2Act's code repository for further instructions.
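
As an illustration of that workflow, the sketch below loads stored demonstrations with RLBench's get_stored_demos helper, the same utility used for the PerAct-style stored demos. It assumes a PerAct-compatible RLBench install, that the extracted archives follow RLBench's <dataset_root>/<task_name>/variation.../episodes layout, and that the snake_case task name reopen_drawer corresponds to the task list above; adjust these if they differ.

# Minimal loading sketch (assumptions noted above).
from rlbench.observation_config import ObservationConfig
from rlbench.utils import get_stored_demos

obs_config = ObservationConfig()
obs_config.set_all(True)  # enable all cameras and low-dim observations

demos = get_stored_demos(
    amount=1,                            # number of episodes to load
    image_paths=False,                   # load images as arrays, not file paths
    dataset_root="memorybench/train",    # wherever the archives were extracted
    variation_number=0,                  # assumed single-variation tasks
    task_name="reopen_drawer",           # assumed snake_case task name
    obs_config=obs_config,
    random_selection=False,
    from_episode_number=0,
)
print(f"Loaded {len(demos)} demonstration(s)")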

Acknowledgement

We would like to acknowledge Haoquan Fang for leading the conceptualization of MemoryBench, providing key ideas and instructions for task design, and Wilbert Pumacay for implementing the tasks and ensuring their seamless integration into the dataset. Their combined efforts, along with the oversight of Jiafei Duan and all co-authors, were essential in developing this benchmark for evaluating spatial memory in robotic manipulation.

Citation

If you use this dataset, please cite the SAM2Act paper:

@misc{fang2025sam2act,
      title={SAM2Act: Integrating Visual Foundation Model with A Memory Architecture for Robotic Manipulation}, 
      author={Haoquan Fang and Markus Grotz and Wilbert Pumacay and Yi Ru Wang and Dieter Fox and Ranjay Krishna and Jiafei Duan},
      year={2025},
      eprint={2501.18564},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2501.18564}, 
}