Cannot load vocab_file: TypeError and DecodeError from the protobuf library when running the example code.

#1
by YiRabbit - opened

Issue Summary:
Cannot load the vocab_file with LlamaTokenizer; running the example code raises a TypeError, and passing the vocab_file explicitly then raises a DecodeError from the protobuf library.

Environment:
Operating System: Ubuntu 20.04
Python Version: 3.9
Library Version: transformers 4.37.2

Steps to Reproduce

  1. Clone the repository: git clone https://github.com/user/repo.git

  2. Install dependencies.

  3. Copy the example code from the README into script.py and run it: python script.py (a minimal sketch of this loading code appears after this list).

  4. Encounter the following error:
    File "miniconda3/envs/internvl2/lib/python3.9/site-packages/transformers/models/llama/tokenization_llama.py", line 206, in get_spm_processor
      with open(self.vocab_file, "rb") as f:
    TypeError: expected str, bytes or os.PathLike object, not NoneType

  5. To troubleshoot, I modified the code to pass the vocab_file explicitly:
    tokenizer = LlamaTokenizer.from_pretrained(path, vocab_file=vocab_file_path, trust_remote_code=True, use_fast=False)

  6. Run the script again: python script.py
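
For reference, here is a minimal sketch of the two loading attempts described in steps 3 and 5. The paths and the tokenizer file name are placeholders I chose for illustration, not taken from the repository; the actual example code is in the README.

    # Minimal sketch of the loading attempts above; each was run separately.
    # `path` and `vocab_file_path` are placeholders for the local checkpoint.
    from transformers import LlamaTokenizer

    path = "/path/to/checkpoint"
    vocab_file_path = "/path/to/checkpoint/tokenizer.model"  # assumed filename

    # Step 3: load as in the README example -> TypeError (vocab_file resolves to None)
    tokenizer = LlamaTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)

    # Step 5: pass vocab_file explicitly -> protobuf DecodeError
    tokenizer = LlamaTokenizer.from_pretrained(
        path, vocab_file=vocab_file_path, trust_remote_code=True, use_fast=False
    )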

Actual Behavior
Running the modified script produces the following traceback:
Traceback (most recent call last):
File "script.py", line X, in
tokenizer = LlamaTokenizer.from_pretrained(path, vocab_file=vocab_file_path, trust_remote_code=True, use_fast=False)
File "miniconda3/envs/internvl2/lib/python3.9/site-packages/transformers/models/llama/tokenization_llama.py", line 209, in get_spm_processor
model = model_pb2.ModelProto.FromString(sp_model)
google.protobuf.message.DecodeError: Error parsing message with type 'sentencepiece.ModelProto'
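
The DecodeError indicates that the bytes read from vocab_file could not be parsed as a SentencePiece ModelProto. A hypothetical diagnostic (not part of the original report; the path is a placeholder) is to try loading the file directly with the sentencepiece library:

    # Hypothetical check: can sentencepiece itself read the file passed as vocab_file?
    import os
    import sentencepiece as spm

    vocab_file_path = "/path/to/checkpoint/tokenizer.model"  # placeholder path

    print("exists:", os.path.exists(vocab_file_path))
    print("size in bytes:", os.path.getsize(vocab_file_path))

    sp = spm.SentencePieceProcessor()
    sp.load(vocab_file_path)  # raises if the file is not a valid SentencePiece model
    print("vocab size:", sp.get_piece_size())

If this load also fails, the file being passed is likely not a SentencePiece .model file (for example, a tokenizer.json for a fast tokenizer, or an incomplete Git LFS download), which would explain the DecodeError.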
