Unable to use the model

#2
by ojascereb - opened

import torch
from transformers import AutoProcessor, Qwen2_5OmniThinkerForConditionalGeneration

model_id = "Qwen/Qwen2.5-Omni-7B"
embed_model_id = "Tevatron/OmniEmbed-v0.1-multivent"

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2_5OmniThinkerForConditionalGeneration.from_pretrained(
    embed_model_id,
    torch_dtype=torch.bfloat16,
).to(device).eval()

OSError: Tevatron/OmniEmbed-v0.1-multivent does not appear to have a file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt or flax_model.msgpack.

Faced the same thing, using transformers

Tevatron org

Hi @ojascereb @Nasa1423
Try updating transformers? The error means the repo doesn't contain full model weights — it ships PEFT adapter weights, which recent transformers versions (with peft installed) can resolve automatically.

Alternatively, you can load the model as follows:

from transformers import AutoProcessor, Qwen2_5OmniThinkerForConditionalGeneration
from peft import PeftModel, PeftConfig

def get_model(peft_model_name):
    # Read the adapter config to find the base model it was trained on
    config = PeftConfig.from_pretrained(peft_model_name)
    base_model = Qwen2_5OmniThinkerForConditionalGeneration.from_pretrained(config.base_model_name_or_path)
    # Load the adapter weights on top of the base model, then fold them in
    model = PeftModel.from_pretrained(base_model, peft_model_name)
    model = model.merge_and_unload()
    model.eval()
    return model

embed_model_id = "Tevatron/OmniEmbed-v0.1-multivent"
model = get_model(embed_model_id).to('cuda:0')
processor = AutoProcessor.from_pretrained(embed_model_id)
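For intuition, `merge_and_unload()` folds the low-rank LoRA update back into the base weights, so the merged model runs without any PEFT machinery at inference. A minimal sketch of that arithmetic with plain tensors (sizes here are illustrative, not OmniEmbed's actual dimensions):

```python
import torch

torch.manual_seed(0)
d, r, alpha = 8, 2, 16          # hidden size, LoRA rank, LoRA alpha (illustrative)
scaling = alpha / r

W = torch.randn(d, d)           # frozen base weight
A = torch.randn(r, d) * 0.01    # LoRA down-projection
B = torch.randn(d, r) * 0.01    # LoRA up-projection (zero-init in real LoRA; "trained" here)

x = torch.randn(3, d)

# With the adapter attached, each forward pass adds the low-rank term:
y_adapter = x @ W.T + scaling * (x @ A.T @ B.T)

# merge_and_unload() is equivalent to folding the update into the weight once:
W_merged = W + scaling * (B @ A)
y_merged = x @ W_merged.T

print(torch.allclose(y_adapter, y_merged, atol=1e-5))  # True
```

Merging trades adapter flexibility (you can no longer swap or disable the adapter) for a plain checkpoint with no per-step LoRA overhead.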

Hey @ArvinZhuang, thanks! It worked.
