---
license: apache-2.0
size_categories:
- 1K<n<10K
---

# LearnGUI: A Unified Demonstration Benchmark for Mobile GUI Agents

<div align="center">
  <img src="assets/teaser-final.drawio.png" alt="The LearnAct Framework and LearnGUI Benchmark focus on addressing the long-tail challenges in mobile GUI agent performance through demonstration-based learning." width="100%">
</div>

[📄 Paper](https://arxiv.org/abs/2504.13805) | [💻 Code](https://github.com/lgy0404/LearnAct) | [🌐 Project Page](https://lgy0404.github.io/LearnAct/)

## Overview

LearnGUI is the first comprehensive dataset specifically designed for studying demonstration-based learning in mobile GUI agents. It comprises 2,353 instructions across 73 applications with an average of 13.2 steps per task, featuring high-quality human demonstrations for both offline and online evaluation scenarios.
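
To work with the files locally, they can be pulled from the Hugging Face Hub. The snippet below is a minimal download sketch using `huggingface_hub`; the `repo_id` shown is an assumption and should be replaced with this dataset's actual repository id.

```python
# Minimal download sketch; the repo_id below is assumed, not confirmed by this card.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="lgy0404/LearnGUI",  # hypothetical id -- replace with the real one
    repo_type="dataset",
    local_dir="LearnGUI",
)
print(f"Dataset files downloaded to: {local_dir}")
```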

## 🌟 Key Features

- **Unified Benchmark Framework**: Provides standardized metrics and evaluation protocols for demonstration-based learning in mobile GUI agents
- **Dual Evaluation Modes**: Supports both offline (2,252 tasks) and online (101 tasks) evaluation scenarios to assess agent performance
- **Rich Few-shot Learning Support**: Includes k-shot combinations (k=1,2,3) for each task with varying similarity profiles (a selection sketch follows this list)
- **Multi-dimensional Similarity Metrics**: Quantifies demonstration relevance across instruction, UI, and action dimensions
- **Diverse Real-world Coverage**: Spans 73 mobile applications with 2,353 naturally varied tasks reflecting real-world usage patterns
- **Expert-annotated Trajectories**: Contains high-quality human demonstrations with detailed step-by-step action sequences and element annotations
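
As a rough illustration of the few-shot support and similarity dimensions listed above, the sketch below ranks candidate demonstrations for a query task by a weighted combination of instruction, UI, and action similarity and keeps the top k. All names and weights are hypothetical; the actual retrieval strategy is described in the LearnAct paper.

```python
from typing import Dict, List

# Hypothetical candidate record: similarity of one demonstration to the query
# task along the three dimensions. Field names are illustrative only.
Demo = Dict[str, float]

def select_k_shot(candidates: List[Demo], k: int = 3,
                  w_ins: float = 1.0, w_ui: float = 1.0, w_act: float = 1.0) -> List[Demo]:
    """Rank candidates by a weighted sum of the three similarity scores and keep the top k."""
    def score(d: Demo) -> float:
        return w_ins * d["ins_sim"] + w_ui * d["ui_sim"] + w_act * d["act_sim"]
    return sorted(candidates, key=score, reverse=True)[:k]

# Example: pick 2 demonstrations from 3 candidates.
demos = [
    {"ins_sim": 0.91, "ui_sim": 0.88, "act_sim": 0.90},
    {"ins_sim": 0.62, "ui_sim": 0.95, "act_sim": 0.71},
    {"ins_sim": 0.85, "ui_sim": 0.79, "act_sim": 0.83},
]
print(select_k_shot(demos, k=2))
```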

## 📊 Dataset Structure and Statistics

The dataset is organized into three main splits:

### Dataset Statistics

| Split | K-shot | Tasks | Apps | Step actions | Avg Ins<sub>Sim</sub> | Avg UI<sub>Sim</sub> | Avg Act<sub>Sim</sub> | UI<sub>SH</sub>Act<sub>SH</sub> | UI<sub>SH</sub>Act<sub>SL</sub> | UI<sub>SL</sub>Act<sub>SH</sub> | UI<sub>SL</sub>Act<sub>SL</sub> |
|-------|--------|-------|------|-------------|------------------------|----------------------|----------------------|--------------------------------|--------------------------------|--------------------------------|--------------------------------|
| Offline-Train | 1-shot | 2,001 | 44 | 26,184 | 0.845 | 0.901 | 0.858 | 364 | 400 | 403 | 834 |
| Offline-Train | 2-shot | 2,001 | 44 | 26,184 | 0.818 | 0.898 | 0.845 | 216 | 360 | 358 | 1,067 |
| Offline-Train | 3-shot | 2,001 | 44 | 26,184 | 0.798 | 0.895 | 0.836 | 152 | 346 | 310 | 1,193 |
| Offline-Test | 1-shot | 251 | 9 | 3,469 | 0.798 | 0.868 | 0.867 | 37 | 49 | 56 | 109 |
| Offline-Test | 2-shot | 251 | 9 | 3,469 | 0.767 | 0.855 | 0.853 | 15 | 42 | 55 | 139 |
| Offline-Test | 3-shot | 251 | 9 | 3,469 | 0.745 | 0.847 | 0.847 | 10 | 36 | 49 | 156 |
| Online-Test | 1-shot | 101 | 20 | 1,423 | - | - | - | - | - | - | - |
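
The four rightmost columns partition each split's k-shot pairings by whether the retrieved demonstrations show high (SH) or low (SL) UI similarity and action similarity. The sketch below shows one way such buckets could be assigned from per-pairing scores; the thresholds are placeholders for illustration, and the actual split criterion follows the LearnAct paper.

```python
def similarity_bucket(ui_sim: float, act_sim: float,
                      ui_threshold: float = 0.9, act_threshold: float = 0.85) -> str:
    """Map one task/demonstration pairing to one of the four buckets in the table.

    SH means the score is at or above the threshold ("similarity high"),
    SL means it is below ("similarity low"). The thresholds here are placeholders.
    """
    ui = "UI_SH" if ui_sim >= ui_threshold else "UI_SL"
    act = "Act_SH" if act_sim >= act_threshold else "Act_SL"
    return ui + act

print(similarity_bucket(0.93, 0.80))  # -> "UI_SHAct_SL"
```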

Each task in LearnGUI contains:
- High-level instruction
- Low-level action sequences
- Screenshot of each step
- UI element details
- Ground truth action labels
- Demonstration pairings with varying similarity profiles
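
Taken together, a single offline task can be pictured as the record sketched below. All field names are illustrative rather than the dataset's actual schema; the authoritative format is defined by the annotation files in the directory layout that follows.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class LearnGUITask:
    """Illustrative view of one LearnGUI task (field names are hypothetical)."""
    task_id: str
    app: str
    high_level_instruction: str                 # overall goal given to the agent
    low_level_instructions: List[str]           # step-by-step sub-instructions
    screenshots: List[str]                      # one screenshot path per step
    ui_elements: List[Dict]                     # per-step UI element annotations
    ground_truth_actions: List[Dict]            # gold action label for each step
    demonstration_ids: List[str] = field(default_factory=list)  # paired k-shot demos
```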

## 📁 Directory Structure

```
LearnGUI/
├── offline/                            # Offline evaluation dataset
│   ├── screenshot.zip                  # Screenshot archives (multi-part)
│   ├── screenshot.z01-z05              # Screenshot archive parts
│   ├── element_anno.zip                # Element annotations
│   ├── instruction_anno.zip            # Instruction annotations
│   ├── task_spilit.json                # Task splitting information
│   └── low_level_instructions.json     # Detailed step-by-step instructions

└── online/                             # Online evaluation dataset
    ├── low_level_instructions/         # JSON files with step instructions for each task
    │   ├── AudioRecorderRecordAudio.json
    │   ├── BrowserDraw.json
    │   ├── SimpleCalendarAddOneEvent.json
    │   └── ... (98 more task instruction files)
    └── raw_data/                       # Raw data for each online task
        ├── AudioRecorderRecordAudio/
        ├── BrowserDraw/
        ├── SimpleCalendarAddOneEvent/
        └── ... (98 more task data directories)
```
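
The screenshot archives are split into multiple parts (`screenshot.zip` plus `screenshot.z01`-`screenshot.z05`) and must be rejoined with a standard zip tool before extraction; the annotation archives likewise need to be unzipped. The instruction JSONs can be read directly. The snippet below is a minimal loading sketch that assumes only the layout shown above, not any particular JSON schema.

```python
import json
from pathlib import Path

root = Path("LearnGUI")  # local copy of the dataset, laid out as above

# Offline split: one file with detailed step-by-step instructions.
offline_instructions = json.loads(
    (root / "offline" / "low_level_instructions.json").read_text()
)

# Online split: one JSON file per task, named after the task.
online_dir = root / "online" / "low_level_instructions"
online_tasks = {p.stem: json.loads(p.read_text()) for p in sorted(online_dir.glob("*.json"))}

print(f"Loaded {len(online_tasks)} online tasks, e.g. {list(online_tasks)[:3]}")
```
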
### Comparison with Existing Datasets

LearnGUI offers several advantages over existing GUI datasets:

| Dataset                   | # Inst.         | # Apps       | # Step         | Env. | HL | LL | GT | FS |
| ------------------------- | --------------- | ------------ | -------------- | ---- | -- | -- | -- | -- |
| PixelHelp                 | 187             | 4            | 4.2            | ✗   | ✓ | ✗ | ✓ | ✗ |
| MoTIF                     | 276             | 125          | 4.5            | ✗   | ✓ | ✓ | ✓ | ✗ |
| UIBert                    | 16,660          | -            | 1              | ✗   | ✗ | ✓ | ✓ | ✗ |
| UGIF                      | 523             | 12           | 6.3            | ✗   | ✓ | ✓ | ✓ | ✗ |
| AITW                      | 30,378          | 357          | 6.5            | ✗   | ✓ | ✗ | ✓ | ✗ |
| AITZ                      | 2,504           | 70           | 7.5            | ✗   | ✓ | ✓ | ✓ | ✗ |
| AndroidControl            | 15,283          | 833          | 4.8            | ✗   | ✓ | ✓ | ✓ | ✗ |
| AMEX                      | 2,946           | 110          | 12.8           | ✗   | ✓ | ✗ | ✓ | ✗ |
| MobileAgentBench          | 100             | 10           | -              | ✗   | ✓ | ✗ | ✗ | ✗ |
| AppAgent                  | 50              | 10           | -              | ✗   | ✓ | ✗ | ✗ | ✗ |
| LlamaTouch                | 496             | 57           | 7.01           | ✓   | ✓ | ✗ | ✓ | ✗ |
| AndroidWorld              | 116             | 20           | -              | ✓   | ✓ | ✗ | ✗ | ✗ |
| AndroidLab                | 138             | 9            | 8.5            | ✓   | ✓ | ✗ | ✗ | ✗ |
| **LearnGUI (Ours)** | **2,353** | **73** | **13.2** | ✓   | ✓ | ✓ | ✓ | ✓ |

*Note: # Inst. (number of instructions), # Apps (number of applications), # Step (average steps per task), Env. (supports environment interactions), HL (has high-level instructions), LL (has low-level instructions), GT (provides ground truth trajectories), FS (supports few-shot learning).*

## 📄 License

This dataset is released under the Apache License 2.0.