Update README.md
README.md
CHANGED
@@ -8,8 +8,8 @@ tags:
 ---
 
 This model provides the [HuggingFaceTB/SmolVLM-256M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-256M-Instruct) model in TFLite format.
-You can use this model with [
-
+You can use this model with the [Custom Cpp Pipeline](https://github.com/dragynir/ai-edge-torch-smalvlm/tree/dev/ai_edge_torch/generative/examples/cpp_image)
+or run it with the Python pipeline (see the Colab example below).
 Please note that, at the moment, [AI Edge Torch](https://github.com/google-ai-edge/ai-edge-torch/tree/main/ai_edge_torch/generative/examples) VLMs are not supported
 by the [MediaPipe LLM Inference API](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference),
 for example the [qwen_vl model](https://github.com/google-ai-edge/ai-edge-torch/tree/main/ai_edge_torch/generative/examples/qwen_vl),
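
For the Python pipeline mentioned in the added lines, the Colab example in the README is the reference. Purely as a hedged sketch (the `.tflite` file name below is a placeholder, not the actual artifact name, and the real signature names come from the export), loading and inspecting the converted model with TensorFlow's TFLite interpreter could look like this:

```python
import tensorflow as tf

# Placeholder path: substitute the .tflite file shipped in this repository.
MODEL_PATH = "smolvlm_256m_instruct.tflite"

# Load the converted model with the standard TFLite interpreter.
interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()

# Models exported via the AI Edge Torch generative examples usually expose
# named signatures (e.g. prefill/decode); print whatever this export provides.
print(interpreter.get_signature_list())
```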