---
dataset_info:
  features:
  - name: ep_id
    dtype: string
  - name: video
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: task_id
    dtype: string
  - name: high_level_category
    dtype: string
  - name: low_level_category
    dtype: string
  - name: num_interactions
    dtype: int64
  splits:
  - name: train
    num_bytes: 107506980
    num_examples: 79213
  - name: validation
    num_bytes: 9653447
    num_examples: 5870
  download_size: 14758637
  dataset_size: 117160427
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- robotics
- embodied-ai
pretty_name: findingdory
size_categories:
- 10K<n<100K
---
<center>
<a href="https://arxiv.org/abs/2506.15635" target="_blank">
    <img alt="arXiv" src="https://img.shields.io/badge/arXiv-FindingDory-red?logo=arxiv" height="20" />
</a>
<a href="https://findingdory-benchmark.github.io/" target="_blank">
    <img alt="Website" src="https://img.shields.io/badge/🌎_Website-FindingDory-blue.svg" height="20" />
</a>
<a href="https://github.com/findingdory-benchmark/findingdory-trl" target="_blank">
    <img alt="GitHub Code" src="https://img.shields.io/badge/Code-FindingDory--TRL-white?&logo=github&logoColor=white" />
</a>
<a href="https://huggingface.co/yali30/findingdory-qwen2.5-VL-3B-finetuned" target="_blank"">
    <img alt="Huggingface Model" src="https://img.shields.io/badge/Model-FindingDory-yellow?logo=huggingface" />
</a>
</center>

<center><h1>FindingDory: A Benchmark to Evaluate Memory in Embodied Agents</h1>
  <a href="https://www.karmeshyadav.com/">Karmesh Yadav*</a>,
  <a href="https://yusufali98.github.io/">Yusuf Ali*</a>,
  <a href="https://gunshigupta.netlify.app/">Gunshi Gupta</a>,
  <a href="https://www.cs.ox.ac.uk/people/yarin.gal/website/">Yarin Gal</a>,
  <a href="https://faculty.cc.gatech.edu/~zk15/">Zsolt Kira</a>
</center>

Current vision-language models (VLMs) struggle with long-term memory in embodied tasks. To address this, we introduce **FindingDory**, a benchmark in Habitat that evaluates memory-based reasoning across 60 long-horizon tasks. 

In this repo, we release the FindingDory Video Dataset. Each video contains images collected from a robot’s egocentric view as it navigates realistic indoor environments and interacts with objects. This dataset was used to train and evaluate the high-level SFT agent in the FindingDory benchmark.

# Usage
```python
from datasets import load_dataset
dataset = load_dataset("yali30/findingdory")
```
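
Loading returns a `DatasetDict` with the `train` and `validation` splits listed in the metadata above. A minimal sketch for inspecting one example (the printed values are illustrative; field names follow the schema below):

```python
from datasets import load_dataset

# Load both splits declared in the card metadata
dataset = load_dataset("yali30/findingdory")
print(dataset)  # DatasetDict with "train" and "validation"

# Peek at one training example and its fields
sample = dataset["train"][0]
print(sample["ep_id"], sample["task_id"])
print(sample["question"])
print(sample["video"])             # relative path of the video clip
print(sample["num_interactions"])  # number of object interactions in the episode
```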

# Dataset Structure

| Field name                 | Description                                                                                                   |
| ------------------------- | ------------------------------------------------------------------------------------------------------------- |
| **ep\_id**                  | Episode id.                                                                                                                                                   |
| **video**                    | Relative path of the video clip.                                                                                                           |
| **question**             | Question posed to the agent based on the episode.                                                                |
| **answer**                | Ground-truth answer, stored as a list of image indices (see the parsing sketch after the notes below).                                                             |
| **task\_id**              | Identifier indicating which task template the episode belongs to (string).                     |
| **high\_level\_category** | High-level task category label. (Options: Single-Goal Spatial Tasks, Single-Goal Temporal Tasks, Multi-Goal Tasks.) |
| **low\_level\_category**   | Fine-grained task category label. (Example categories: Interaction-Order, Room Visitation, etc.)    |
| **num\_interactions**        | Number of objects the robot interacts with during experience collection. |

Notes:
* The validation split contains 60 tasks, while the training split contains only 55, because the five “Object Attributes” tasks are withheld from the training set.
* A subsampled version of the dataset (96 frames per episode) is available [here](https://huggingface.co/datasets/yali30/findingdory-subsampled-96).
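
Because the `answer` field has dtype `string`, the ground-truth image indices need to be parsed before use. A minimal sketch, assuming the answer is serialized as a Python/JSON-style list such as `"[3, 17, 42]"` (check the exact encoding on a few samples before relying on it):

```python
import ast

from datasets import load_dataset

dataset = load_dataset("yali30/findingdory")
sample = dataset["validation"][0]

# The answer is stored as a string; parse it into a list of integer frame
# indices. ast.literal_eval safely evaluates list-like string literals.
answer_indices = ast.literal_eval(sample["answer"])
print(type(answer_indices), answer_indices)
```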

# 📄 Citation
```
@article{yadav2025findingdory,
  title     = {FindingDory: A Benchmark to Evaluate Memory in Embodied Agents},
  author    = {Yadav, Karmesh and Ali, Yusuf and Gupta, Gunshi and Gal, Yarin and Kira, Zsolt},
  journal   = {arXiv preprint arXiv:2506.15635},
  year      = {2025}
}
```