---
base_model:
- internlm/internlm2-chat-1_8b
language:
- multilingual
library_name: transformers
license: mit
pipeline_tag: image-text-to-text
tags:
- internvl
- vision
- ocr
- custom_code
- moe
base_model_relation: merge
---

# Mono-InternVL-2B-S1-1

This repository contains the Mono-InternVL-2B model after **S1.1 concept learning**, as part of the work presented in [Mono-InternVL-1.5: Towards Cheaper and Faster Monolithic Multimodal Large Language Models](https://huggingface.co/papers/2507.12566).

Please refer to our [**project page**](https://internvl.github.io/blog/2024-10-10-Mono-InternVL/) and [**GitHub repository**](https://github.com/OpenGVLab/mono-internvl) for a full introduction, code, and usage instructions.

**Mono-InternVL** is a family of monolithic multimodal large language models (MLLMs) that integrates visual encoding and language decoding into a single LLM, aiming for cheaper and faster inference. It addresses the challenges of unstable optimization and catastrophic forgetting by embedding a new visual parameter space into a pre-trained LLM, enabling stable learning of visual knowledge via delta tuning.

### ✨ Key Highlights

- **Monolithic Architecture**: Integrates visual encoding and language decoding into a single LLM, simplifying the model structure.
- **Endogenous Visual Pre-training (EViP++)**: Features an innovative pre-training strategy that maximizes visual capabilities through progressive learning and incorporates additional visual attention experts.
- **Efficiency**: Significantly reduces training and inference costs, including a fused CUDA kernel for faster MoE operations, while maintaining competitive performance.

### 📊 Performance

Mono-InternVL achieves competitive performance across various multimodal benchmarks, often outperforming other monolithic MLLMs. Compared to its modular counterpart, InternVL-1.5, Mono-InternVL-1.5 achieves similar multimodal performance while reducing first-token latency by up to 69%.

Below is a summary of some key benchmarks:

| Benchmark         | Mono-InternVL-2B | Mini-InternVL-2B-1-5 | Emu3       |
| :---------------- | :--------------: | :------------------: | :--------: |
| Type              | Monolithic       | Modular              | Monolithic |
| #Activated Params | 1.8B             | 2.2B                 | 8B         |
| **MMVet**         | 40.1             | 39.3                 | 37.2       |
| **OCRBench**      | 767              | 654                  | 687        |
| **MathVista**     | 45.7             | 41.1                 | —          |
| **TextVQA**       | 72.6             | 70.5                 | 64.7       |
| **DocVQA**        | 80.0             | 85.0                 | 76.3       |

*(For full performance details, please refer to the [paper](https://huggingface.co/papers/2507.12566) and [project page](https://internvl.github.io/blog/2024-10-10-Mono-InternVL/).)*

### 🚀 Quick Inference (using Transformers)

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

# Load model and tokenizer (ensure transformers==4.37.2)
path = 'OpenGVLab/Mono-InternVL-2B'
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)

# Load image (ensure the image is preprocessed as per the GitHub instructions).
# For simplicity, a placeholder is used here.
# Refer to the GitHub repo for the `load_image` utility function.
# pixel_values = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = None  # Replace with the actual image tensor

generation_config = dict(max_new_tokens=1024, do_sample=True)

# Example: single-image single-round conversation
question = '<image>\nPlease describe the image shortly.'
# response = model.chat(tokenizer, pixel_values, question, generation_config)
# print(f'User: {question}\nAssistant: {response}')

# Example: pure-text conversation
question = 'Hello, who are you?'
response, history = model.chat(tokenizer, None, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
```
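
The snippet above leaves `pixel_values` as a placeholder and defers to the `load_image` utility from our GitHub repository. As a rough illustration only, the sketch below shows a minimal single-tile preprocessing helper in the usual InternVL style (448×448 input, ImageNet normalization); the helper name `load_image_simple`, the constants, and the example path are assumptions for this sketch, and the official `load_image` additionally performs dynamic tiling (up to `max_num` tiles), so prefer it for best results.

```python
import torchvision.transforms as T
from PIL import Image
from torchvision.transforms.functional import InterpolationMode

# ImageNet normalization constants commonly used by InternVL-style preprocessing (assumed here).
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def load_image_simple(image_file, input_size=448):
    """Minimal single-tile sketch: resize to `input_size`, normalize,
    and return a (1, 3, input_size, input_size) float tensor.
    The official `load_image` in the GitHub repo also performs dynamic tiling."""
    image = Image.open(image_file).convert('RGB')
    transform = T.Compose([
        T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
        T.ToTensor(),
        T.Normalize(mean=IMAGENET_MEAN, std=IMAGENET_STD),
    ])
    return transform(image).unsqueeze(0)

# Hypothetical usage with the model loaded above:
# pixel_values = load_image_simple('./examples/image1.jpg').to(torch.bfloat16).cuda()
# response = model.chat(tokenizer, pixel_values, '<image>\nPlease describe the image shortly.', generation_config)
```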

## Citation

If you find this project useful in your research, please consider citing the related papers:

```BibTeX
@article{mono_internvl_v1,
  title={Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training},
  author={Luo, Gen and Yang, Xue and Dou, Wenhan and Wang, Zhaokai and Liu, Jiawen and Dai, Jifeng and Qiao, Yu and Zhu, Xizhou},
  journal={arXiv preprint arXiv:2410.08202},
  year={2024}
}

@article{mono_internvl_v1.5,
  title={Mono-InternVL-1.5: Towards Cheaper and Faster Monolithic Multimodal Large Language Models},
  author={Luo, Gen and Dou, Wenhan and Li, Wenhao and Wang, Zhaokai and Yang, Xue and Tian, Changyao and Li, Hao and Wang, Weiyun and Wang, Wenhai and Zhu, Xizhou and Qiao, Yu and Dai, Jifeng},
  journal={arXiv preprint arXiv:2507.12566},
  year={2025}
}
```