SiriusL committed on
Commit b7cc64a · verified · 1 Parent(s): cabf0fe

Update README.md

Files changed (1):
  1. README.md +81 -52
README.md CHANGED
@@ -24,14 +24,16 @@ This repository contains the InfiGUI-G1-3B model from the paper **[InfiGUI-G1: A
  <a href="https://github.com/InfiXAI/InfiGUI-G1"><img src="https://img.shields.io/badge/GitHub-Repo-181717?style=flat&logo=github&logoColor=white" alt="GitHub Repo"></a>
  </p>
 
- ## Paper Abstract
-
- The emergence of Multimodal Large Language Models (MLLMs) has propelled the development of autonomous agents that operate on Graphical User Interfaces (GUIs) using pure visual input. A fundamental challenge is robustly grounding natural language instructions. This requires a precise spatial alignment, which accurately locates the coordinates of each element, and, more critically, a correct semantic alignment, which matches the instructions to the functionally appropriate UI element. Although Reinforcement Learning with Verifiable Rewards (RLVR) has proven to be effective at improving spatial alignment for these MLLMs, we find that inefficient exploration bottlenecks semantic alignment, which prevent models from learning difficult semantic associations. To address this exploration problem, we present Adaptive Exploration Policy Optimization (AEPO), a new policy optimization framework. AEPO employs a multi-answer generation strategy to enforce broader exploration, which is then guided by a theoretically grounded Adaptive Exploration Reward (AER) function derived from first principles of efficiency eta=U/C. Our AEPO-trained models, InfiGUI-G1-3B and InfiGUI-G1-7B, establish new state-of-the-art results across multiple challenging GUI grounding benchmarks, achieving significant relative improvements of up to 9.0% against the naive RLVR baseline on benchmarks designed to test generalization and semantic understanding. Resources are available at this https URL .
-
  ## Model Description
 
  The model is based on `Qwen2.5-VL-3B-Instruct` and is fine-tuned using our proposed **Adaptive Exploration Policy Optimization (AEPO)** framework. AEPO is a novel reinforcement learning method designed to enhance the model's **semantic alignment** for GUI grounding tasks. It overcomes the exploration bottlenecks of standard RLVR methods by integrating a multi-answer generation strategy with a theoretically-grounded adaptive reward function, enabling more effective and efficient learning for complex GUI interactions.
 
  ## Quick Start
 
  ### Installation
@@ -85,7 +87,7 @@ def load_image(img_path: str) -> Image.Image:
 
 
  def visualize_points(original_image: Image.Image, points: list,
- new_width: int, new_height: int,\
  original_width: int, original_height: int) -> None:
  """Draw prediction points on original image and save as output.png."""
  output_img = original_image.copy()
@@ -103,8 +105,8 @@ def visualize_points(original_image: Image.Image, points: list,
 
  # Draw circle
  circle_radius = 20
- draw.ellipse([original_x - circle_radius, original_y - circle_radius,\
- original_x + circle_radius, original_y + circle_radius],\
  fill=(255, 0, 0))
 
  # Draw label
@@ -138,8 +140,7 @@ def main():
 
  # Prepare model inputs
  instruction = "shuffle play the current playlist"
- system_prompt = 'You FIRST think about the reasoning process as an internal monologue and then provide the final answer.\
- The reasoning process MUST BE enclosed within <think> </think> tags.'
  prompt = f'''The screen's resolution is {new_width}x{new_height}.
  Locate the UI element(s) for "{instruction}", output the coordinates using JSON format: [{{"point_2d": [x, y]}}, ...]'''
 
@@ -180,49 +181,77 @@ To reproduce the results in our paper, please refer to our repo for detailed ins
 
  ## Results
 
- Our InfiGUI-G1 models, trained with the AEPO framework, establish new state-of-the-art results among open-source models across a diverse and challenging set of GUI grounding benchmarks.
-
- ### MMBench-GUI (L2) Results
-
- On the comprehensive MMBench-GUI benchmark, which evaluates performance across various platforms and instruction complexities, our InfiGUI-G1 models establish new state-of-the-art results for open-source models in their respective size categories.
-
- <div align="center">
- <img src="https://raw.githubusercontent.com/InfiXAI/InfiGUI-G1/main/assets/results_mmbench-gui.png" width="90%" alt="MMBench-GUI Results">
- </div>
-
- ### ScreenSpot-Pro Results
-
- On the challenging ScreenSpot-Pro benchmark, designed to test semantic understanding on high-resolution professional software, InfiGUI-G1 demonstrates significant improvements, particularly on icon-based grounding tasks. This highlights AEPO's effectiveness in enhancing semantic alignment by associating abstract visual symbols with their functions.
-
- <div align="center">
- <img src="https://raw.githubusercontent.com/InfiXAI/InfiGUI-G1/main/assets/results_screenspot-pro.png" width="90%" alt="ScreenSpot-Pro Results">
- </div>
-
- ### UI-Vision (Element Grounding) Results
-
- InfiGUI-G1 shows strong generalization capabilities on the UI-Vision benchmark, which is designed to test robustness across a wide variety of unseen desktop applications. Achieving high performance confirms that our AEPO framework fosters a robust understanding rather than overfitting to the training data.
-
- <div align="center">
- <img src="https://raw.githubusercontent.com/InfiXAI/InfiGUI-G1/main/assets/results_ui-vision.png" width="90%" alt="UI-Vision Results">
- </div>
-
- ### UI-I2E-Bench Results
-
- To further probe semantic reasoning, we evaluated on UI-I2E-Bench, a benchmark featuring a high proportion of implicit instructions that require reasoning beyond direct text matching. Our model's strong performance underscores AEPO's ability to handle complex, indirect commands.
-
- <div align="center">
- <img src="https://raw.githubusercontent.com/InfiXAI/InfiGUI-G1/main/assets/results_i2e-bench.png" width="90%" alt="UI-I2E-Bench Results">
- </div>
-
- ### ScreenSpot-V2 Results
-
- On the widely-used ScreenSpot-V2 benchmark, which provides comprehensive coverage across mobile, desktop, and web platforms, InfiGUI-G1 consistently outperforms strong baselines, demonstrating the broad applicability and data efficiency of our approach.
-
- <div align="center">
- <img src="https://raw.githubusercontent.com/InfiXAI/InfiGUI-G1/main/assets/results_screenspot-v2.png" width="90%" alt="ScreenSpot-V2 Results">
  </div>
 
- ## ⚙️ Evaluation
 
  This section provides instructions for reproducing the evaluation results reported in our paper.
 
@@ -307,7 +336,7 @@ python eval/eval.py \
 
  Evaluation results, including detailed logs and performance metrics, will be saved to the `./output/{model_name}/{benchmark}/` directory.
 
- ## 📚 Citation Information
 
  If you find this work useful, we would be grateful if you consider citing the following papers:
 
@@ -341,6 +370,6 @@ If you find this work useful, we would be grateful if you consider citing the fo
  }
  ```
 
- ## 🙏 Acknowledgements
 
  We would like to express our gratitude for the following open-source projects: [VERL](https://github.com/volcengine/verl), [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL) and [vLLM](https://github.com/vllm-project/vllm).
 
  <a href="https://github.com/InfiXAI/InfiGUI-G1"><img src="https://img.shields.io/badge/GitHub-Repo-181717?style=flat&logo=github&logoColor=white" alt="GitHub Repo"></a>
  </p>
 
  ## Model Description
 
  The model is based on `Qwen2.5-VL-3B-Instruct` and is fine-tuned using our proposed **Adaptive Exploration Policy Optimization (AEPO)** framework. AEPO is a novel reinforcement learning method designed to enhance the model's **semantic alignment** for GUI grounding tasks. It overcomes the exploration bottlenecks of standard RLVR methods by integrating a multi-answer generation strategy with a theoretically-grounded adaptive reward function, enabling more effective and efficient learning for complex GUI interactions.
 
+ ## Paper Overview
+
+ A fundamental challenge for GUI agents is robustly grounding natural language instructions, which requires not only precise **spatial alignment** (locating elements accurately) but also correct **semantic alignment** (identifying the functionally appropriate element). While existing Reinforcement Learning with Verifiable Rewards (RLVR) methods have enhanced spatial precision, they often suffer from inefficient exploration. This "confidence trap" bottlenecks semantic alignment, preventing models from discovering correct actions for difficult semantic associations.
+
+ To address this critical exploration problem, we introduce **InfiGUI-G1**, a series of models trained with **Adaptive Exploration Policy Optimization (AEPO)**. AEPO overcomes the exploration bottleneck by integrating a **multi-answer generation** strategy to explore a diverse set of candidate actions in a single forward pass. This exploration is guided by a theoretically-grounded **Adaptive Exploration Reward (AER)** function, derived from first principles of efficiency (η=U/C), which provides rich, informative learning signals to dynamically balance exploration and exploitation.
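For illustration only (this helper is not part of the repository, and AER's full form is given in the paper): the efficiency principle η = U/C that AER is derived from can be sketched as a toy score over a multi-answer generation, assuming, hypothetically, that U counts the correct candidates and C the total candidates generated.

```python
def efficiency_reward(candidates, is_correct):
    """Toy efficiency score eta = U / C for a multi-answer generation.

    U = number of useful (correct) candidate points, C = total candidates.
    A single precise answer scores 1.0; padding with wrong guesses dilutes it,
    while exploring more candidates can still pay off if some of them hit.
    """
    c = len(candidates)
    if c == 0:
        return 0.0
    u = sum(1 for point in candidates if is_correct(point))
    return u / c


# Hypothetical usage: a hit test against a ground-truth element box (x0, y0, x1, y1).
target = (100, 50, 200, 120)
hit = lambda p: target[0] <= p[0] <= target[2] and target[1] <= p[1] <= target[3]
one_good = efficiency_reward([(150, 80)], hit)            # 1.0
one_of_two = efficiency_reward([(150, 80), (400, 300)], hit)  # 0.5
```

The point of the sketch is only the trade-off shape: broader exploration raises the chance that some candidate is useful, but each extra candidate adds cost in the denominator.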
+
  ## Quick Start
 
  ### Installation
 
 
 
  def visualize_points(original_image: Image.Image, points: list,
+ new_width: int, new_height: int,
  original_width: int, original_height: int) -> None:
  """Draw prediction points on original image and save as output.png."""
  output_img = original_image.copy()
 
 
  # Draw circle
  circle_radius = 20
+ draw.ellipse([original_x - circle_radius, original_y - circle_radius,
+ original_x + circle_radius, original_y + circle_radius],
  fill=(255, 0, 0))
 
  # Draw label
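The function above draws at `original_x`/`original_y`, i.e. points predicted at the resized inference resolution mapped back onto the original screenshot. A minimal standalone version of that mapping (assuming a plain proportional rescale, which is how such coordinates are typically recovered) looks like:

```python
def rescale_point(x: float, y: float,
                  new_width: int, new_height: int,
                  original_width: int, original_height: int) -> tuple:
    """Map a point predicted on the resized screenshot back to
    original-image pixel coordinates via proportional scaling."""
    original_x = x / new_width * original_width
    original_y = y / new_height * original_height
    return original_x, original_y


# e.g. a point at (50, 50) on a 100x200 resized input, original 1000x400 screen
ox, oy = rescale_point(50, 50, 100, 200, 1000, 400)  # (500.0, 100.0)
```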
 
 
  # Prepare model inputs
  instruction = "shuffle play the current playlist"
+ system_prompt = 'You FIRST think about the reasoning process as an internal monologue and then provide the final answer.\nThe reasoning process MUST BE enclosed within <think> </think> tags.'
  prompt = f'''The screen's resolution is {new_width}x{new_height}.
  Locate the UI element(s) for "{instruction}", output the coordinates using JSON format: [{{"point_2d": [x, y]}}, ...]'''
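Given that prompt, the model replies with a `<think>...</think>` block followed by a JSON list of `point_2d` entries. A hypothetical helper (not part of the repository) for pulling the points out of such a response could look like:

```python
import json
import re


def parse_points(response: str) -> list:
    """Extract [x, y] points from a response of the form
    <think>...</think>[{"point_2d": [x, y]}, ...]."""
    # Drop the reasoning block, then grab the first JSON array in the answer.
    answer = re.sub(r"<think>.*?</think>", "", response, flags=re.DOTALL)
    match = re.search(r"\[.*\]", answer, flags=re.DOTALL)
    if match is None:
        return []
    return [item["point_2d"] for item in json.loads(match.group(0))]


# e.g. parse_points('<think>...</think>[{"point_2d": [512, 1706]}]') -> [[512, 1706]]
```

Real model output may deviate from this exact format, so production code would want stricter validation than this sketch.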
 
 
  ## Results
 
+ Our InfiGUI-G1 models, trained with the AEPO framework, establish new state-of-the-art results among open-source models across a diverse and challenging set of GUI grounding benchmarks:
+
+ <div align="left">
+ <table style="width: 100%; max-width: 750px; border-collapse: collapse; border-top: 2px solid #212529; border-bottom: 2px solid #212529; font-family: sans-serif;">
+ <thead style="background-color: #f8f9fa;">
+ <tr style="border-bottom: 1.5px solid #212529;">
+ <th style="padding: 12px 10px; text-align: left; width: 24.9%; font-weight: 600; color: #343a40;">Model</th>
+ <th style="padding: 12px 10px; text-align: center; font-weight: 600; color: #343a40;">MMBench-GUI</th>
+ <th style="padding: 12px 10px; text-align: center; font-weight: 600; color: #343a40;">ScreenSpot-v2</th>
+ <th style="padding: 12px 10px; text-align: center; font-weight: 600; color: #343a40;">UI-Vision</th>
+ <th style="padding: 12px 10px; text-align: center; font-weight: 600; color: #343a40;">I2E-Bench</th>
+ <th style="padding: 12px 10px; text-align: center; font-weight: 600; color: #343a40;">ScreenSpot-Pro</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <td style="padding: 10px; text-align: left;">Qwen2.5-VL-3B</td>
+ <td style="padding: 10px; text-align: center;">-</td>
+ <td style="padding: 10px; text-align: center;">80.9</td>
+ <td style="padding: 10px; text-align: center;">-</td>
+ <td style="padding: 10px; text-align: center;">41.7</td>
+ <td style="padding: 10px; text-align: center;">-</td>
+ </tr>
+ <tr>
+ <td style="padding: 10px; text-align: left;">UI-R1-E-3B</td>
+ <td style="padding: 10px; text-align: center;">-</td>
+ <td style="padding: 10px; text-align: center;">-</td>
+ <td style="padding: 10px; text-align: center;">-</td>
+ <td style="padding: 10px; text-align: center;"><u>69.1</u></td>
+ <td style="padding: 10px; text-align: center;"><u>33.5</u></td>
+ </tr>
+ <tr>
+ <td style="padding: 10px; text-align: left;">Aguvis-7B</td>
+ <td style="padding: 10px; text-align: center;"><u>45.7</u></td>
+ <td style="padding: 10px; text-align: center;">-</td>
+ <td style="padding: 10px; text-align: center;"><u>13.7</u></td>
+ <td style="padding: 10px; text-align: center;">53.2</td>
+ <td style="padding: 10px; text-align: center;">-</td>
+ </tr>
+ <tr>
+ <td style="padding: 10px; text-align: left;">OS-Atlas-7B</td>
+ <td style="padding: 10px; text-align: center;">41.4</td>
+ <td style="padding: 10px; text-align: center;"><u>85.1</u></td>
+ <td style="padding: 10px; text-align: center;">9.0</td>
+ <td style="padding: 10px; text-align: center;">58.6</td>
+ <td style="padding: 10px; text-align: center;">-</td>
+ </tr>
+ <tr>
+ <th colspan="6" style="padding: 10px 12px; text-align: left; font-style: italic; background-color: #f8f9fa; border-top: 1px solid #dee2e6; border-bottom: 1px solid #dee2e6; color: #343a40;">Ours</th>
+ </tr>
+ <tr style="background-color: #f0f8ff;">
+ <td style="padding: 10px; text-align: left;"><b>InfiGUI-G1-3B</b></td>
+ <td style="padding: 10px; text-align: center;"><b>73.4</b></td>
+ <td style="padding: 10px; text-align: center;"><b>91.1</b></td>
+ <td style="padding: 10px; text-align: center;"><b>22.0</b></td>
+ <td style="padding: 10px; text-align: center;"><b>72.6</b></td>
+ <td style="padding: 10px; text-align: center;"><b>45.2</b></td>
+ </tr>
+ <tr style="background-color: #f0f8ff;">
+ <td style="padding: 10px; text-align: right;"><i>w/ Expl. Success</i></td>
+ <td style="padding: 10px; text-align: center;">81.6</td>
+ <td style="padding: 10px; text-align: center;">94.4</td>
+ <td style="padding: 10px; text-align: center;">29.7</td>
+ <td style="padding: 10px; text-align: center;">82.8</td>
+ <td style="padding: 10px; text-align: center;">52.0</td>
+ </tr>
+ </tbody>
+ </table>
  </div>
 
+ ## Evaluation
 
  This section provides instructions for reproducing the evaluation results reported in our paper.
 
 
 
  Evaluation results, including detailed logs and performance metrics, will be saved to the `./output/{model_name}/{benchmark}/` directory.
 
+ ## Citation Information
 
  If you find this work useful, we would be grateful if you consider citing the following papers:
 
 
  }
  ```
 
+ ## Acknowledgements
 
  We would like to express our gratitude for the following open-source projects: [VERL](https://github.com/volcengine/verl), [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL) and [vLLM](https://github.com/vllm-project/vllm).