harpreetsahota committed
Commit 2b83432 · verified · Parent(s): bf02f4d

Update README.md

Files changed (1): README.md (+94 −85)

README.md CHANGED
pretty_name: guiact_smartphone_test
tags:
- fiftyone
- visual-agents
- os-agents
- gui-grounding
- image
- object-detection
dataset_summary: '

# Note: other available arguments include ''max_samples'', etc

dataset = load_from_hub("Voxel51/guiact_smartphone_test")

# Launch the App

# Dataset Card for guiact_smartphone_test

<!-- Provide a quick summary of the dataset. -->

![image/png](guiact_smartphone.gif)

# Load the dataset
# Note: other available arguments include 'max_samples', etc
dataset = load_from_hub("Voxel51/guiact_smartphone_test")

# Launch the App
session = fo.launch_app(dataset)
```

# GUIAct Smartphone Dataset

## Dataset Details

### Dataset Description

The GUIAct Smartphone dataset is a collection of multi-step action instructions for smartphone GUI navigation tasks. It was created by converting and adapting a subset of the AITW ([Android in the Wild](https://arxiv.org/abs/2307.10088)) dataset, specifically the data carrying the "General" tag. The dataset pairs smartphone screenshots with sequences of actions (such as tap, swipe, and input), and is designed to train vision language models to navigate smartphone interfaces. Each instruction requires multiple actions to complete, and every action is mapped to a standardized format that includes position information. The dataset is a core component of the larger GUICourse collection for training versatile GUI agents.

- **Curated by:** Wentong Chen, Junbo Cui, Jinyi Hu, and fellow researchers from Tsinghua University, Renmin University of China, and the other institutions listed in the GUICourse paper
- **Shared by:** The authors of the GUICourse paper
- **Language(s) (NLP):** en
- **License:** CC BY 4.0

### Dataset Sources

- **Repository:** https://github.com/yiye3/GUICourse and https://huggingface.co/datasets/yiye2023/GUIAct
- **Paper:** "GUICourse: From General Vision Language Model to Versatile GUI Agent" (arXiv:2406.11317v1)

## Uses

### Direct Use

The GUIAct Smartphone dataset is intended for training and evaluating GUI agents that navigate smartphone interfaces from visual input. Specific use cases include:

1. Training vision language models to understand smartphone interface elements and their functions
2. Teaching models to execute multi-step operations on smartphones from natural language instructions
3. Developing assistive technologies that help users navigate smartphone interfaces
4. Benchmarking the performance of GUI agents on smartphone navigation tasks

### Out-of-Scope Use

The dataset is not intended to:

1. Train models to access private user data on smartphones
2. Enable unauthorized access to smartphone systems
3. Replace human decision-making for critical smartphone operations
4. Train agents to perform tasks beyond normal smartphone interface interactions
5. Support smartphone operating systems or applications not represented in the dataset

## Dataset Structure

The GUIAct Smartphone dataset contains 9,157 multi-step action instructions, corresponding to approximately 67,000 training samples. Each sample consists of:

1. A smartphone screenshot (image)
2. One or more actions to be performed on that screenshot
3. A natural language instruction describing the task to be accomplished

The action space includes standardized actions such as:

- **tap**: a single-point touch on the screen, with position coordinates
- **swipe**: a multi-point gesture, with start and end coordinates
- **input**: text entry, with the content specified
- **enter**: a submission action
- **answer**: a task-completion indication, with text output

Actions carry position information in either absolute pixel coordinates or a relative format (normalized to a range of 0-1 or 0-1000).
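To make the coordinate conventions concrete, here is a minimal sketch of mapping a relative point onto a screenshot; the helper name and the screen size are illustrative assumptions, not part of the dataset:

```python
# Hypothetical helper: convert a relative (x, y) point to absolute pixels.
# Assumes coordinates normalized to [0, 1]; for a 0-1000 scheme,
# divide each coordinate by 1000 first.

def to_pixels(point, width, height):
    """Map a relative (x, y) point onto a width x height screenshot."""
    x, y = point
    if not (0.0 <= x <= 1.0 and 0.0 <= y <= 1.0):
        raise ValueError(f"expected coordinates in [0, 1], got {point}")
    return (round(x * width), round(y * height))

# A tap at the center of a (hypothetical) 1080x2400 screenshot:
print(to_pixels((0.5, 0.5), 1080, 2400))  # (540, 1200)
```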

## FiftyOne Dataset Structure

**Basic Info:** 2,079 mobile UI screenshots with interaction annotations

**Core Fields:**

- `id`: ObjectIdField - Unique identifier
- `filepath`: StringField - Image path
- `tags`: ListField(StringField) - Sample categories
- `metadata`: EmbeddedDocumentField - Image properties (size, dimensions)
- `episode`: StringField - Unique identifier for the interaction sequence
- `step`: IntField - Sequential position within the episode
- `question`: StringField - Natural language task description
- `current_action`: StringField - Current action to perform
- `structured_history`: ListField - Previous actions in the sequence
- `ui_elements`: EmbeddedDocumentField(Detections) containing multiple Detection objects:
  - `label`: Element type (ICON_*, TEXT)
  - `bounding_box`: Relative bounding-box coordinates in [0, 1], in the format `[<top-left-x>, <top-left-y>, <width>, <height>]`
  - `text`: Text content of the element, if present
- `action`: EmbeddedDocumentField(Keypoints) containing interaction points:
  - `label`: Action type (tap, swipe)
  - `points`: A list of `(x, y)` keypoints in `[0, 1] x [0, 1]`: `[[x, y]]` for a tap, `[[x1, y1], [x2, y2]]` for a swipe

The dataset captures smartphone screen interactions with detailed UI element annotations and action sequences for mobile interaction research.
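The `points` convention above can be illustrated in plain Python (this is not FiftyOne API code, just a sketch of the documented format, in which one keypoint encodes a tap and two encode a swipe):

```python
# Interpret an `action` label and its keypoint list per the convention
# documented above: [[x, y]] for tap, [[x1, y1], [x2, y2]] for swipe.

def describe_action(label, points):
    """Summarize a keypoint list using the tap/swipe convention."""
    if label == "tap" and len(points) == 1:
        (x, y), = points
        return f"tap at ({x:.2f}, {y:.2f})"
    if label == "swipe" and len(points) == 2:
        (x1, y1), (x2, y2) = points
        return f"swipe from ({x1:.2f}, {y1:.2f}) to ({x2:.2f}, {y2:.2f})"
    raise ValueError(f"unexpected {label} with {len(points)} point(s)")

print(describe_action("tap", [[0.48, 0.91]]))
print(describe_action("swipe", [[0.50, 0.80], [0.50, 0.30]]))
```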
 
 
 
 
 
 
 
 

## Dataset Creation

### Curation Rationale

The GUIAct Smartphone dataset was created to provide training data for visual GUI agents that navigate smartphone interfaces. The authors identified limitations in existing datasets for this purpose: scenarios that were too simplistic, domains that were too narrow, or collections that were too small. By adapting data from the larger AITW dataset, they aimed to provide a substantial body of realistic smartphone navigation tasks for training robust GUI agents.

### Source Data

#### Data Collection and Processing

The GUIAct Smartphone dataset was derived from the AITW dataset by:

1. Selecting data with the "General" tag from AITW
2. Filtering out screenshots that did not include the bottom navigation bar
3. Converting the original AITW action format to the unified action space defined for GUICourse:
   - "dual-point gesture" actions were split into "tap" and "swipe" based on distance
   - "type" actions were renamed to "input" actions
   - "enter" actions were preserved
   - "go back/home" actions were converted to "tap" actions on the bottom navigation bar
   - "task complete/impossible" actions were converted to "answer" actions

#### Personal and Sensitive Information

The paper does not explicitly state whether the smartphone screenshots contain personal or sensitive information. However, since the dataset is derived from AITW and intended for research use, it likely avoids personally identifiable information in the screenshots.

## Bias, Risks, and Limitations

- The dataset may reflect biases present in smartphone interface design
- The tasks represented may not cover the full diversity of smartphone usage patterns across user demographics
- The dataset is likely limited to specific smartphone operating systems and applications
- Models trained on this data may not generalize to significantly different or future smartphone interfaces
- The simplified action space (e.g., converting dual-point gestures to tap/swipe) may not capture the full complexity of human smartphone interaction

### Recommendations

Users should be aware that:

- Models trained on this dataset may perform differently across smartphone operating systems and UI designs
- The simplified action space may limit the range of interactions available to trained models
- Evaluation should consider both per-action accuracy and overall task completion
- Deployment in assistive technologies calls for additional safety measures and user feedback mechanisms
- The dataset should be used within a broader training approach that includes ethical considerations for AI assistance

## Citation

**BibTeX:**

```bibtex
@article{chen2024guicourse,
  title={GUICourse: From General Vision Language Model to Versatile GUI Agent},
  author={Chen, Wentong and Cui, Junbo and Hu, Jinyi and Qin, Yujia and Fang, Junjie and Zhao, Yue and Wang, Chongyi and Liu, Jun and Chen, Guirong and Huo, Yupeng and Yao, Yuan and Lin, Yankai and Liu, Zhiyuan and Sun, Maosong},
  journal={arXiv preprint arXiv:2406.11317},
  year={2024}
}
```

**APA:**

Chen, W., Cui, J., Hu, J., Qin, Y., Fang, J., Zhao, Y., Wang, C., Liu, J., Chen, G., Huo, Y., Yao, Y., Lin, Y., Liu, Z., & Sun, M. (2024). GUICourse: From General Vision Language Model to Versatile GUI Agent. arXiv preprint arXiv:2406.11317.