Update model card with link to most recent paper and full citations

#2
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md  +20 -16
README.md CHANGED
@@ -1,38 +1,42 @@
  ---
- license: mit
- pipeline_tag: image-text-to-text
- library_name: transformers
  base_model:
- - internlm/internlm2-chat-1_8b
- base_model_relation: merge
  language:
- - multilingual
  tags:
- - internvl
- - vision
- - ocr
- - custom_code
- - moe
  ---

  # Mono-InternVL-2B-S1-3

  This repository contains the Mono-InternVL-2B model after **S1.1 concept learning**, **S1.2 semantic learning**, and **S1.3 alignment learning**.

- Please refer to our [**paper**](https://huggingface.co/papers/2410.08202), [**project page**](https://internvl.github.io/blog/2024-10-10-Mono-InternVL/) and [**GitHub repository**](https://github.com/OpenGVLab/mono-internvl) for introduction and usage.
-
-

  ## Citation

  If you find this project useful in your research, please consider citing:

  ```BibTeX
- @article{luo2024mono,
  title={Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training},
  author={Luo, Gen and Yang, Xue and Dou, Wenhan and Wang, Zhaokai and Liu, Jiawen and Dai, Jifeng and Qiao, Yu and Zhu, Xizhou},
  journal={arXiv preprint arXiv:2410.08202},
  year={2024}
  }
- ```

  ---
  base_model:
+ - internlm/internlm2-chat-1_8b
  language:
+ - multilingual
+ library_name: transformers
+ license: mit
+ pipeline_tag: image-text-to-text
  tags:
+ - internvl
+ - vision
+ - ocr
+ - custom_code
+ - moe
+ base_model_relation: merge
  ---

  # Mono-InternVL-2B-S1-3

  This repository contains the Mono-InternVL-2B model after **S1.1 concept learning**, **S1.2 semantic learning**, and **S1.3 alignment learning**.

+ Please refer to our [**paper**](https://huggingface.co/papers/2507.12566), [**project page**](https://internvl.github.io/blog/2024-10-10-Mono-InternVL/) and [**GitHub repository**](https://github.com/OpenGVLab/mono-internvl) for introduction and usage.

  ## Citation

  If you find this project useful in your research, please consider citing:

  ```BibTeX
+ @article{mono_internvl_v1,
  title={Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training},
  author={Luo, Gen and Yang, Xue and Dou, Wenhan and Wang, Zhaokai and Liu, Jiawen and Dai, Jifeng and Qiao, Yu and Zhu, Xizhou},
  journal={arXiv preprint arXiv:2410.08202},
  year={2024}
  }

+ @article{mono_internvl_v1.5,
+ title={Mono-InternVL-1.5: Towards Cheaper and Faster Monolithic Multimodal Large Language Models},
+ author={Luo, Gen and Dou, Wenhan and Li, Wenhao and Wang, Zhaokai and Yang, Xue and Tian, Changyao and Li, Hao and Wang, Weiyun and Wang, Wenhai and Zhu, Xizhou and Qiao, Yu and Dai, Jifeng},
+ journal={arXiv preprint arXiv:2507.12566},
+ year={2025}
+ }
+ ```
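For context on the metadata the card declares (`library_name: transformers`, `pipeline_tag: image-text-to-text`, and the `custom_code` tag): a checkpoint described this way is typically loaded through `transformers` with remote code enabled. The sketch below is illustrative only and is not part of this PR; the hub id and dtype are assumptions.

```python
# Minimal loading sketch based on the card metadata; the hub id and dtype
# below are assumptions, not taken from this PR.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "OpenGVLab/Mono-InternVL-2B-S1-3"  # assumed repository id

# The `custom_code` tag indicates the modeling code ships inside the repo,
# so trust_remote_code=True is required for both the tokenizer and the model.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed; pick a dtype suitable for your hardware
    trust_remote_code=True,
).eval()
```

Image-text inference itself goes through the repository's custom helpers, so the linked GitHub repository remains the reference for actual usage.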