Can't load tokenizer for 'katuni4ka/dolly-v2-3b-ov'
Hello, I was trying out this image as I was interested to see what difference OpenVINO made to Dolly 2.0.
I got the following error when running the provided example script:
```
Traceback (most recent call last):
  File "//script.py", line 6, in <module>
    tokenizer = AutoTokenizer.from_pretrained(model_id)
  File "/opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 709, in from_pretrained
    return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1809, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'katuni4ka/dolly-v2-3b-ov'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'katuni4ka/dolly-v2-3b-ov' is the correct path to a directory containing all relevant files for a GPTNeoXTokenizerFast tokenizer.
```
I've tried copying the related tokenizer files over from the original Dolly V2 repository, but that did not resolve the issue. Are there perhaps other files missing that we would need to run this model?
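For reference, here is a small sketch I used to check which tokenizer files are present in my local copy before calling `from_pretrained`. The file list is an assumption based on what a fast (`tokenizer.json`-based) tokenizer like `GPTNeoXTokenizerFast` typically ships with, not something confirmed against this specific repo:

```python
import os

# Files a fast tokenizer snapshot usually contains (assumed list, may
# not be exhaustive for every tokenizer class).
EXPECTED_FILES = [
    "tokenizer.json",           # serialized fast-tokenizer vocab/merges
    "tokenizer_config.json",    # tokenizer class name and settings
    "special_tokens_map.json",  # bos/eos/pad/unk token definitions
]

def missing_tokenizer_files(model_dir: str) -> list[str]:
    """Return the expected tokenizer files that are absent from model_dir."""
    return [
        name for name in EXPECTED_FILES
        if not os.path.isfile(os.path.join(model_dir, name))
    ]

if __name__ == "__main__":
    print(missing_tokenizer_files("./dolly-v2-3b-ov"))
```

In my case this is how I noticed files were missing; as a workaround I also tried pointing `AutoTokenizer.from_pretrained` at the base `databricks/dolly-v2-3b` repo instead (assuming its tokenizer is compatible with the OpenVINO export), which may be an option if the converted repo is simply missing the tokenizer artifacts.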