Update README.md
README.md (CHANGED)
````diff
@@ -27,7 +27,7 @@ We investigate domain adaptation of MLLMs through post-training, focusing on dat
 
 Starting from transformers >= 4.45.0, you can run inference using conversational messages that may include an image you can query about.
 
-Make sure to update your transformers installation via pip install --upgrade transformers
+Make sure to update your transformers installation via `pip install --upgrade transformers`.
 
 ```python
 import requests
@@ -35,7 +35,7 @@ import torch
 from PIL import Image
 from transformers import MllamaForConditionalGeneration, AutoProcessor
 
-model_id = "AdaptLLM/
+model_id = "AdaptLLM/food-Llama-3.2-11B-Vision-Instruct"
 
 model = MllamaForConditionalGeneration.from_pretrained(
     model_id,
@@ -65,6 +65,8 @@ output = model.generate(**inputs, max_new_tokens=30)
 print(processor.decode(output[0]))
 ```
 
+Since our model architecture aligns with the base model, you can refer to the official repository of [Llama-3.2-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) for more advanced usage instructions.
+
 ## Citation
 If you find our work helpful, please cite us.
 
````
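The hunks above elide the middle of the README's snippet (processor setup, image loading, chat templating, lines 42-64 of the file). For reference, here is a minimal runnable sketch of the full flow, assuming the standard Mllama usage pattern from the transformers documentation applies to this checkpoint; the image URL and prompt are illustrative placeholders, not taken from the README:

```python
# Minimal sketch of the full inference flow excerpted in the diff above,
# following the standard Mllama conversational pattern (transformers >= 4.45.0).
# The image URL and prompt are illustrative placeholders.
import requests
import torch
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor

model_id = "AdaptLLM/food-Llama-3.2-11B-Vision-Instruct"

model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

# Fetch an example image to query about.
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

# A conversational message that pairs the image with a text question.
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image in one sentence."},
    ]}
]
input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, input_text, add_special_tokens=False, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(output[0]))
```

Loading in bfloat16 with `device_map="auto"` keeps the 11B checkpoint within the memory of a single modern accelerator; raise `max_new_tokens` for longer answers.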