Upload folder using huggingface_hub
- README.md +54 -0
- config.json +1 -1
- configuration_intern_vit.py +1 -1
- configuration_internvl_chat.py +1 -1
- modeling_intern_vit.py +1 -1
README.md CHANGED
@@ -57,6 +57,8 @@ Limitations: Although we have made efforts to ensure the safety of the model dur
 
 We provide example code to run Mini-InternVL-Chat-2B-V1-5 using `transformers`.
 
+We also welcome you to experience the InternVL2 series models in our [online demo](https://internvl.opengvlab.com/). Currently, due to limited GPU resources with public IP addresses, we can only deploy models of up to 26B parameters. We will expand capacity soon and deploy larger models to the online demo.
+
 > Please use transformers==4.37.2 to ensure the model works normally.
 
 ```python
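The `transformers` snippet itself is cut off by the hunk's context window. As a rough sketch of what it sets up, assuming InternVL's remote-code `AutoModel` entry point and its `chat()` helper (the full README additionally covers image tiling and preprocessing not shown here):

```python
# Minimal sketch, not the README's full example: load the checkpoint with
# trust_remote_code and run a text-only turn. The chat() signature and the
# pixel_values=None text-only path are assumptions based on InternVL's
# remote code.
import torch
from transformers import AutoModel, AutoTokenizer

path = 'OpenGVLab/Mini-InternVL-Chat-2B-V1-5'
model = AutoModel.from_pretrained(
    path, torch_dtype=torch.bfloat16, trust_remote_code=True).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)

generation_config = dict(max_new_tokens=512, do_sample=False)
response = model.chat(tokenizer, None, 'Hello, who are you?', generation_config)
print(response)
```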
@@ -330,6 +332,8 @@ If `ImportError` occurs while executing this case, please install the required d
 
 When dealing with multiple images, you can put them all in one list. Keep in mind that multiple images will lead to a higher number of input tokens, and as a result, the size of the context window typically needs to be increased.
 
+> Warning: Due to the scarcity of multi-image conversation data, the performance on multi-image tasks may be unstable, and it may require multiple attempts to achieve satisfactory results.
+
 ```python
 from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
 from lmdeploy.vl import load_image
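The hunk stops at the imports, so here is a rough sketch of the multi-image call the paragraph above describes. The tuple-of-(prompt, image list) form and the `internvl-internlm2` chat template name are assumptions based on lmdeploy's vision-language pipeline; raising `session_len` is how the larger context window mentioned above is provided.

```python
# Sketch of a multi-image query via lmdeploy's pipeline (assumed interface:
# a (prompt, [images]) tuple). session_len is raised because several images
# consume many input tokens, per the note above.
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image

pipe = pipeline(
    'OpenGVLab/Mini-InternVL-Chat-2B-V1-5',
    backend_config=TurbomindEngineConfig(session_len=8192),
    chat_template_config=ChatTemplateConfig(model_name='internvl-internlm2'))

image_urls = [
    'https://modelscope.oss-cn-beijing.aliyuncs.com/resource/tiger.jpeg',
    'https://example.com/second-image.jpg',  # placeholder: any second image
]
images = [load_image(url) for url in image_urls]
response = pipe(('describe these two images', images))
print(response.text)
```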
@@ -394,6 +398,56 @@ sess = pipe.chat('What is the woman doing?', session=sess, gen_config=gen_config
 print(sess.response.text)
 ```
 
+#### Service
+
+LMDeploy's `api_server` enables models to be packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below is an example of service startup:
+
+```shell
+lmdeploy serve api_server OpenGVLab/Mini-InternVL-Chat-2B-V1-5 --model-name Mini-InternVL-Chat-2B-V1-5 --backend turbomind --server-port 23333
+```
+
+To use the OpenAI-style interface, you need to install the `openai` package:
+
+```shell
+pip install openai
+```
+
+Then, use the code below to make the API call:
+
+```python
+from openai import OpenAI
+
+client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:23333/v1')
+model_name = client.models.list().data[0].id
+response = client.chat.completions.create(
+    model=model_name,
+    messages=[{
+        'role':
+        'user',
+        'content': [{
+            'type': 'text',
+            'text': 'describe this image',
+        }, {
+            'type': 'image_url',
+            'image_url': {
+                'url':
+                'https://modelscope.oss-cn-beijing.aliyuncs.com/resource/tiger.jpeg',
+            },
+        }],
+    }],
+    temperature=0.8,
+    top_p=0.8)
+print(response)
+```
+
+### vLLM
+
+TODO
+
+### Ollama
+
+TODO
+
 ## License
 
 This project is released under the MIT license, while InternLM is licensed under the Apache-2.0 license.
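One footnote to the added Service section: because the endpoints are OpenAI-compatible, streaming also works with the stock client. A sketch under that assumption (the diff itself shows only the non-streaming call):

```python
# Streaming variant of the README's API call; assumes the OpenAI-compatible
# server honors stream=True, as OpenAI-style endpoints generally do.
from openai import OpenAI

client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:23333/v1')
model_name = client.models.list().data[0].id

stream = client.chat.completions.create(
    model=model_name,
    messages=[{'role': 'user', 'content': 'Say hello in one sentence.'}],
    stream=True)
for chunk in stream:
    # Each chunk carries an incremental delta; print tokens as they arrive.
    print(chunk.choices[0].delta.content or '', end='')
print()
```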
config.json CHANGED
@@ -91,7 +91,7 @@
   "tie_word_embeddings": false,
   "tokenizer_class": null,
   "top_k": 50,
-  "top_p":
+  "top_p": 1.0,
   "torch_dtype": "bfloat16",
   "torchscript": false,
   "transformers_version": "4.37.2",
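For context on the `top_p` fix above: a value of 1.0 keeps the full token distribution (nucleus filtering effectively off), so sampling is shaped only by the other defaults such as `top_k: 50`. A quick way to confirm the shipped value, assuming these keys sit under the chat config's `llm_config` as in InternVL's layout:

```python
# Hypothetical sanity check of the shipped generation defaults; assumes the
# keys in this hunk live on llm_config inside the remote-code chat config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    'OpenGVLab/Mini-InternVL-Chat-2B-V1-5', trust_remote_code=True)
print(config.llm_config.top_p)  # expected: 1.0 after this commit
print(config.llm_config.top_k)  # expected: 50
```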
configuration_intern_vit.py CHANGED
@@ -1,6 +1,6 @@
 # --------------------------------------------------------
 # InternVL
-# Copyright (c)
+# Copyright (c) 2024 OpenGVLab
 # Licensed under The MIT License [see LICENSE for details]
 # --------------------------------------------------------
 import os
configuration_internvl_chat.py CHANGED
@@ -1,6 +1,6 @@
 # --------------------------------------------------------
 # InternVL
-# Copyright (c)
+# Copyright (c) 2024 OpenGVLab
 # Licensed under The MIT License [see LICENSE for details]
 # --------------------------------------------------------
 
modeling_intern_vit.py CHANGED
@@ -1,6 +1,6 @@
 # --------------------------------------------------------
 # InternVL
-# Copyright (c)
+# Copyright (c) 2024 OpenGVLab
 # Licensed under The MIT License [see LICENSE for details]
 # --------------------------------------------------------
 from typing import Optional, Tuple, Union