Update README.md
README.md
CHANGED
@@ -1,91 +1,61 @@
---
license:
library_name: transformers
pipeline_tag:
tags:
- NVILA
- VLM
---

## Model details

**Paper or resources for more information:**
https://github.com/NVLabs/VILA

```
    title={NVILA: Efficient Frontier Visual Language Models},
    author={Zhijian Liu and Ligeng Zhu and Baifeng Shi and Zhuoyang Zhang and Yuming Lou and Shang Yang and Haocheng Xi and Shiyi Cao and Yuxian Gu and Dacheng Li and Xiuyu Li and Yunhao Fang and Yukang Chen and Cheng-Yu Hsieh and De-An Huang and An-Chieh Cheng and Vishwesh Nath and Jinyi Hu and Sifei Liu and Ranjay Krishna and Daguang Xu and Xiaolong Wang and Pavlo Molchanov and Jan Kautz and Hongxu Yin and Song Han and Yao Lu},
    year={2024},
    eprint={2412.04468},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2412.04468},
}
```

## License
- The code is released under the Apache 2.0 license as found in the [LICENSE](./LICENSE) file.
- The pretrained weights are released under the [CC-BY-NC-SA-4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en).
- The service is a research preview intended for non-commercial use only, and is subject to the following licenses and terms:
  - [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI
  - [Dataset Licenses](https://github.com/Efficient-Large-Model/VILA/blob/main/data_prepare/LICENSE) for each one used during training.

**Where to send questions or comments about the model:**
https://github.com/NVLabs/VILA/issues

## Intended use
**Primary intended uses:**
The primary use of VILA is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

**Input Type:** Image, Video, Text
**Input Format:** Red, Green, Blue; MP4; String
**Input Parameters:** 2D, 3D

**Output Type:** Text
**Output Format:** String

* Jetson
* Hopper
* Lovelace

## Training dataset
See [Dataset Preparation](https://arxiv.org/abs/2412.04468) for more details.

**Labeling Method by dataset:**
* Hybrid: Automated, Human

## Inference:
**Engine:**
* PyTorch
* TensorRT-LLM
* TinyChat

* Jetson Orin
* RTX 4090

---
license: apache-2.0
library_name: transformers
pipeline_tag: robotics
base_model:
- Efficient-Large-Model/NVILA-8B
---

# 🌏 RoboRefer

<a href="https://zhoues.github.io/RoboRefer"><img src="https://img.shields.io/badge/%F0%9F%8F%A0%20Project-Homepage-blue" alt="HomePage"></a>
<a href="https://arxiv.org/abs/2506.04308"><img src="https://img.shields.io/badge/arXiv%20paper-2506.04308-b31b1b.svg?logo=arxiv" alt="arXiv"></a>
<a href="https://github.com/Zhoues/RoboRefer"><img src="https://img.shields.io/badge/Code-RoboRefer-black?logo=github" alt="Project Homepage"></a>

<a href="https://huggingface.co/datasets/JingkunAn/RefSpatial"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Dataset-RefSpatial%20Dataset-brightgreen" alt="Dataset"></a>
<a href="https://huggingface.co/datasets/JingkunAn/RefSpatial-Bench"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Benchmark-RefSpatial%20Bench-green" alt="Benchmark"></a>
<a href="https://huggingface.co/collections/Zhoues/roborefer-and-refspatial-6857c97848fab02271310b89"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Weights-RoboRefer%20Model-yellow" alt="Weights"></a>

> This is the official checkpoint of our work: **RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics**

## Overview

NVILA-8B-Depth serves as the base model for both RoboRefer-8B-Depth-Align and RoboRefer-8B-SFT. It shares the same parameters as NVILA-8B, with the addition of a depth encoder and a depth projector, initialized from the image encoder and the image projector, respectively.
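
As a rough sketch of that initialization idea (not the project's actual code; the attribute names `vision_tower`, `mm_projector`, `depth_tower`, and `depth_projector` are placeholders), the depth branch could be seeded by cloning the image branch:

```python
# Hypothetical sketch: duplicate the image branch of a vision-language model so the
# depth branch starts from the same weights. Attribute names are placeholders and
# do not refer to the actual NVILA/RoboRefer implementation.
import copy

import torch.nn as nn


def add_depth_branch(model: nn.Module) -> nn.Module:
    # Depth encoder: same architecture and weights as the image encoder.
    model.depth_tower = copy.deepcopy(model.vision_tower)
    # Depth projector: same architecture and weights as the image projector.
    model.depth_projector = copy.deepcopy(model.mm_projector)
    return model
```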

<!-- ## How to use

RoboRefer-2B-SFT has strong spatial understanding capability and achieves SOTA performance across diverse benchmarks. Given an image with instructions, it can not only answer your questions in both qualitative and quantitative ways using its spatial knowledge, but also output precise points for spatial referring to guide robotic control. For more details, please visit our [official repo](https://github.com/Zhoues/RoboRefer).
-->
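
A minimal loading sketch is given below. This is an assumption, not the documented usage: it presumes the checkpoint ships transformers-compatible remote code, and the repository id is a placeholder. If loading this way fails, follow the inference instructions in the RoboRefer repo instead.

```python
# Minimal sketch, not the official usage: assumes the checkpoint exposes
# transformers-compatible remote code (an auto_map in its config).
# Replace the placeholder repo id with this model card's repository.
from transformers import AutoConfig, AutoModel

repo_id = "Efficient-Large-Model/NVILA-8B"  # placeholder repo id

config = AutoConfig.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)
print(type(model).__name__)
```

For the actual spatial-referring inference pipeline, refer to https://github.com/Zhoues/RoboRefer.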

## Resources for More Information
- Paper: https://arxiv.org/abs/2506.04308
- Code: https://github.com/Zhoues/RoboRefer
- Dataset: https://huggingface.co/datasets/JingkunAn/RefSpatial
- Benchmark: https://huggingface.co/datasets/BAAI/RefSpatial-Bench
- Website: https://zhoues.github.io/RoboRefer/

## Date
This model was created in June 2025.

## 📝 Citation
If you find our code or models useful in your work, please cite [our paper](https://arxiv.org/abs/2506.04308):

```
@article{zhou2025roborefer,
  title={RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics},
  author={Zhou, Enshen and An, Jingkun and Chi, Cheng and Han, Yi and Rong, Shanyu and Zhang, Chi and Wang, Pengwei and Wang, Zhongyuan and Huang, Tiejun and Sheng, Lu and others},
  journal={arXiv preprint arXiv:2506.04308},
  year={2025}
}
```