---
library_name: transformers
license: apache-2.0
datasets:
- HuggingFaceM4/the_cauldron
- HuggingFaceM4/Docmatix
- lmms-lab/LLaVA-OneVision-Data
- lmms-lab/M4-Instruct-Data
- HuggingFaceFV/finevideo
- MAmmoTH-VL/MAmmoTH-VL-Instruct-12M
- lmms-lab/LLaVA-Video-178K
- orrzohar/Video-STaR
- Mutonix/Vript
- TIGER-Lab/VISTA-400K
- Enxin/MovieChat-1K_train
- ShareGPT4Video/ShareGPT4Video
pipeline_tag: image-text-to-text
tags:
- video-text-to-text
- openvino
- openvino-export
language:
- en
base_model: HuggingFaceTB/SmolVLM2-2.2B-Instruct
---
+
27
+ This model was converted to OpenVINO from [`HuggingFaceTB/SmolVLM2-2.2B-Instruct`](https://huggingface.co/HuggingFaceTB/SmolVLM2-2.2B-Instruct) using [optimum-intel](https://github.com/huggingface/optimum-intel)
28
+ via the [export](https://huggingface.co/spaces/echarlaix/openvino-export) space.
29
+
30
+ First make sure you have optimum-intel installed:
31
+
32
+ ```bash
33
+ pip install optimum[openvino]
34
+ ```
35
+
36
+ To load your model you can do as follows:
37
+
38
+ ```python
39
+ from optimum.intel import OVModelForVisualCausalLM
40
+
41
+ model_id = "echarlaix/SmolVLM2-2.2B-Instruct-openvino"
42
+ model = OVModelForVisualCausalLM.from_pretrained(model_id)
43
+ ```
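
Once loaded, the model can be used for image-to-text generation together with the original processor. Below is a minimal sketch, assuming a recent `transformers` release whose chat template accepts image URLs; the image URL, prompt, and generation settings are placeholders you should replace with your own.

```python
from transformers import AutoProcessor
from optimum.intel import OVModelForVisualCausalLM

model_id = "echarlaix/SmolVLM2-2.2B-Instruct-openvino"
processor = AutoProcessor.from_pretrained(model_id)
model = OVModelForVisualCausalLM.from_pretrained(model_id)

# Placeholder image and prompt: swap in your own inputs.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Build model inputs from the chat template (tokenized text + pixel values).
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
)

# Generate and decode the answer.
generated_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```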