---
license: mit
---

This dataset is a filtered version of the **[omniact](https://huggingface.co/datasets/Writer/omniact)** dataset, containing only grounding tasks. It is intended as an example dataset for training our InfiGUI-G1 model. For more details on our work, please see our GitHub repository: **[https://github.com/InfiXAI/InfiGUI-G1](https://github.com/InfiXAI/InfiGUI-G1)**.
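
For reference, the dataset can be loaded with the Hugging Face `datasets` library. This is a minimal sketch: the repository ID below is a placeholder, and the `train` split name is an assumption, so check the dataset card on the Hub for the actual values.

```python
from datasets import load_dataset

# Placeholder repo ID: substitute the actual <namespace>/<name> of this
# dataset on the Hugging Face Hub. The "train" split is also an assumption.
REPO_ID = "<namespace>/omniact_grounding_filtered"

ds = load_dataset(REPO_ID, split="train")
print(ds)            # number of rows and column names
print(ds[0].keys())  # fields of a single sample
```
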
## Dataset Details

The `omniact_grounding_filtered` dataset is derived from the original `omniact` dataset, with the following modifications:

- **Grounding Tasks Only**: We have selected only the grounding task samples.
- **Hard Samples Only**: We filtered out overly simple samples. Specifically, any sample that was solved correctly in all 8 attempts (i.e., had a 100% success rate) was removed from the dataset (see the sketch below).
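
To make the second filter concrete, here is a minimal sketch of the selection rule. The per-sample attempt results are assumed to come from an external evaluation run (8 attempts per sample); the data structure is hypothetical and is not a field of this dataset.

```python
# Hypothetical sketch of the "hard samples only" rule: a sample is kept
# unless it was solved correctly in all 8 evaluation attempts.
NUM_ATTEMPTS = 8

def keep_sample(attempt_successes: list) -> bool:
    """Keep a sample unless it has a 100% success rate across attempts."""
    assert len(attempt_successes) == NUM_ATTEMPTS
    return sum(attempt_successes) < NUM_ATTEMPTS  # drop 8/8 successes

# A sample solved 8/8 times is removed; one solved 7/8 times is kept.
print(keep_sample([True] * 8))            # False -> removed
print(keep_sample([True] * 7 + [False]))  # True  -> kept
```
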
## Citation Information

If you find this dataset or our work useful in your research, we would be grateful if you would consider citing our work:

```bibtex
@article{liu2025infiguig1,
  title={InfiGUI-G1: Advancing GUI Grounding with Adaptive Exploration Policy Optimization},
  author={Liu, Yuhang and Liu, Zeyu and Zhu, Shuanghe and Li, Pengxiang and Xie, Congkai and Wang, Jiasheng and Hu, Xueyu and Han, Xiaotian and Yuan, Jianbo and Wang, Xinyao and others},
  journal={arXiv preprint arXiv:2508.05731},
  year={2025}
}
```
We also strongly recommend citing the original data source, as their work was foundational to ours:

```bibtex
@misc{kapoor2024omniact,
  title={OmniACT: A Dataset and Benchmark for Enabling Multimodal Generalist Autonomous Agents for Desktop and Web},
  author={Raghav Kapoor and Yash Parag Butala and Melisa Russak and Jing Yu Koh and Kiran Kamble and Waseem Alshikh and Ruslan Salakhutdinov},
  year={2024},
  eprint={2402.17553},
  archivePrefix={arXiv},
  primaryClass={cs.AI}
}
```
## Acknowledgements

We extend our sincere gratitude to the authors and contributors of the original **omniact** dataset for their significant work and for making their valuable data publicly available. Their efforts have been foundational for research in this area.