Tags: Robotics · Transformers · Safetensors · llava_llama
Zhoues committed · Commit 51d62ee · verified · 1 Parent(s): 3289356

Update README.md

Files changed (1)
  1. README.md +38 -68
README.md CHANGED
@@ -1,91 +1,61 @@
  ---
- license: cc-by-nc-4.0
  library_name: transformers
- pipeline_tag: text-generation
- tags:
- - NVILA
- - VLM
  ---

- # VILA Model Card

- ## Model details

- **Model type:**
- NVILA is a visual language model (VLM) pretrained with interleaved image-text data at scale, enabling multi-image VLM. Visual language models (VLMs) have made significant advances in accuracy in recent years. However, their efficiency has received much less attention. This paper introduces NVILA, a family of open VLMs designed to optimize both efficiency and accuracy. Building on top of VILA, we improve its model architecture by first scaling up the spatial and temporal resolutions, and then compressing visual tokens. This "scale-then-compress" approach enables NVILA to efficiently process high-resolution images and long videos. We also conduct a systematic investigation to enhance the efficiency of NVILA throughout its entire lifecycle, from training and fine-tuning to deployment. NVILA matches or surpasses the accuracy of many leading open and proprietary VLMs across a wide range of image and video benchmarks. At the same time, it reduces training costs by 4.5X, fine-tuning memory usage by 3.4X, pre-filling latency by 1.6-2.2X, and decoding latency by 1.2-2.8X. We will soon make our code and models available to facilitate reproducibility.

- **Model date:**
- NVILA was trained in Nov 2024.

- **Paper or resources for more information:**
- https://github.com/NVLabs/VILA

- ```
- @misc{liu2024nvila,
-   title={NVILA: Efficient Frontier Visual Language Models},
-   author={Zhijian Liu and Ligeng Zhu and Baifeng Shi and Zhuoyang Zhang and Yuming Lou and Shang Yang and Haocheng Xi and Shiyi Cao and Yuxian Gu and Dacheng Li and Xiuyu Li and Yunhao Fang and Yukang Chen and Cheng-Yu Hsieh and De-An Huang and An-Chieh Cheng and Vishwesh Nath and Jinyi Hu and Sifei Liu and Ranjay Krishna and Daguang Xu and Xiaolong Wang and Pavlo Molchanov and Jan Kautz and Hongxu Yin and Song Han and Yao Lu},
-   year={2024},
-   eprint={2412.04468},
-   archivePrefix={arXiv},
-   primaryClass={cs.CV},
-   url={https://arxiv.org/abs/2412.04468},
- }
- ```

- ## License
- - The code is released under the Apache 2.0 license as found in the [LICENSE](./LICENSE) file.
- - The pretrained weights are released under the [CC-BY-NC-SA-4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en).
- - The service is a research preview intended for non-commercial use only, and is subject to the following licenses and terms:
- - [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI
- - [Dataset Licenses](https://github.com/Efficient-Large-Model/VILA/blob/main/data_prepare/LICENSE) for each one used during training.

- **Where to send questions or comments about the model:**
- https://github.com/NVLabs/VILA/issues

- ## Intended use
- **Primary intended uses:**
- The primary use of VILA is research on large multimodal models and chatbots.

- **Primary intended users:**
- The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

- ## Input:
- **Input Type:** Image, Video, Text
- **Input Format:** Red, Green, Blue; MP4 ;String
- **Input Parameters:** 2D, 3D

- ## Output:
- **Output Type:** Text
- **Output Format:** String

- **Supported Hardware Microarchitecture Compatibility:**
- * Ampere
- * Jetson
- * Hopper
- * Lovelace

- **[Preferred/Supported] Operating System(s):** <br>
- Linux

- ## Training dataset
- See [Dataset Preparation](https://arxiv.org/abs/2412.04468) for more details.

- ** Data Collection Method by dataset
- * [Hybrid: Automated, Human]

- ** Labeling Method by dataset
- * [Hybrid: Automated, Human]

- ## Inference:
- **Engine:** [Tensor(RT), Triton, Or List Other Here]
- * PyTorch
- * TensorRT-LLM
- * TinyChat

- **Test Hardware:**
- * A100
- * Jetson Orin
- * RTX 4090

- ## Ethical Considerations
- NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
  ---
+ license: apache-2.0
  library_name: transformers
+ pipeline_tag: robotics
+ base_model:
+ - Efficient-Large-Model/NVILA-8B
  ---

+ # 🌏 RoboRefer

+ <a href="https://zhoues.github.io/RoboRefer"><img src="https://img.shields.io/badge/%F0%9F%8F%A0%20Project-Homepage-blue" alt="HomePage"></a>
+ <a href="https://arxiv.org/abs/2506.04308"><img src="https://img.shields.io/badge/arXiv%20paper-2506.04308-b31b1b.svg?logo=arxiv" alt="arXiv"></a>
+ <a href="https://github.com/Zhoues/RoboRefer"><img src="https://img.shields.io/badge/Code-RoboRefer-black?logo=github" alt="Code"></a>

+ <a href="https://huggingface.co/datasets/JingkunAn/RefSpatial"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Dataset-RefSpatial%20Dataset-brightgreen" alt="Dataset"></a>
+ <a href="https://huggingface.co/datasets/JingkunAn/RefSpatial-Bench"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Benchmark-RefSpatial%20Bench-green" alt="Benchmark"></a>
+ <a href="https://huggingface.co/collections/Zhoues/roborefer-and-refspatial-6857c97848fab02271310b89"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Weights-RoboRefer%20Model-yellow" alt="Weights"></a>

+ > This is the official checkpoint of our work: **RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics**

+ ## Overview

+ NVILA-8B-Depth serves as the base model for both RoboRefer-8B-Depth-Align and RoboRefer-8B-SFT. It shares the same parameters as NVILA-8B, with the addition of a depth encoder and a depth projector, initialized from the image encoder and image projector, respectively.

+ <!-- ## How to use

+ RoboRefer-2B-SFT has strong spatial understanding capability and achieves SOTA performance across diverse benchmarks. Given an image with instructions, it can not only answer your questions in both qualitative and quantitative ways using its spatial knowledge, but also output precise points for spatial referring to guide robotic control. For more details, please visit our [official repo](https://github.com/Zhoues/RoboRefer).
+ -->

+ ## Resources for More Information
+ - Paper: https://arxiv.org/abs/2506.04308
+ - Code: https://github.com/Zhoues/RoboRefer
+ - Dataset: https://huggingface.co/datasets/JingkunAn/RefSpatial
+ - Benchmark: https://huggingface.co/datasets/BAAI/RefSpatial-Bench
+ - Website: https://zhoues.github.io/RoboRefer/

+ ## Date
+ This model was created in June 2025.

+ ## 📝 Citation
+ If you find our code or models useful in your work, please cite [our paper](https://arxiv.org/abs/2506.04308):

+ ```
+ @article{zhou2025roborefer,
+   title={RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics},
+   author={Zhou, Enshen and An, Jingkun and Chi, Cheng and Han, Yi and Rong, Shanyu and Zhang, Chi and Wang, Pengwei and Wang, Zhongyuan and Huang, Tiejun and Sheng, Lu and others},
+   journal={arXiv preprint arXiv:2506.04308},
+   year={2025}
+ }
+ ```
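
The Overview section in the updated README describes NVILA-8B-Depth as NVILA-8B plus a depth encoder and a depth projector whose weights are initialized from the image encoder and image projector. The snippet below is a minimal PyTorch sketch of that initialization pattern only; the class, module names, and dimensions (`ToyVLM`, `image_encoder`, `image_projector`, `vision_dim`, `llm_dim`) are illustrative assumptions, not the actual NVILA or RoboRefer implementation, which lives in the RoboRefer repository linked above.

```python
# Minimal sketch: initialize a depth branch by cloning the image branch.
# All names and sizes below are illustrative stand-ins, not the real
# NVILA-8B / RoboRefer module layout.
import copy

import torch
from torch import nn


class ToyVLM(nn.Module):
    """Stand-in for a VILA-style multimodal backbone."""

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        # Pretrained image tower and the projector that maps visual tokens
        # into the language model's embedding space.
        self.image_encoder = nn.Sequential(
            nn.Linear(vision_dim, vision_dim),
            nn.GELU(),
            nn.Linear(vision_dim, vision_dim),
        )
        self.image_projector = nn.Linear(vision_dim, llm_dim)


def add_depth_branch(model: ToyVLM) -> ToyVLM:
    """Attach depth modules whose weights are copied from the image modules."""
    model.depth_encoder = copy.deepcopy(model.image_encoder)
    model.depth_projector = copy.deepcopy(model.image_projector)
    return model


if __name__ == "__main__":
    vlm = add_depth_branch(ToyVLM())
    # The depth branch starts as an exact copy of the image branch; later
    # training stages (e.g. depth alignment, SFT) would specialize it.
    assert torch.equal(vlm.depth_projector.weight, vlm.image_projector.weight)
    print("depth encoder/projector initialized from image encoder/projector")
```

Starting the depth modules from the image modules' weights, rather than from random initialization, gives the depth branch a feature space already compatible with the projector and language model, which the RoboRefer-8B-Depth-Align and RoboRefer-8B-SFT checkpoints named in the Overview then train further.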