config files

#7
by nlev - opened

Is it correct to assume these should just be copied over from 3.1?

  • preprocessor_config.json
  • processor_config.json
  • tokenizer_config.json
  • tokenizer.json
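If they do just carry over, a small script can fetch exactly those four files rather than cloning the whole repo. This is only a sketch: the 3.1 repo id below is my assumption, so point it at whichever checkout you actually mean.

```python
# Sketch: copy only the tokenizer/processor config files from the 3.1 repo.
# The repo id is an assumption -- adjust it to the 3.1 checkout you use.
CONFIG_FILES = [
    "preprocessor_config.json",
    "processor_config.json",
    "tokenizer_config.json",
    "tokenizer.json",
]

def fetch_configs(repo_id: str, local_dir: str) -> list[str]:
    """Download just the files in CONFIG_FILES into local_dir."""
    # Deferred import so the file list is usable without huggingface_hub installed.
    from huggingface_hub import hf_hub_download
    return [
        hf_hub_download(repo_id=repo_id, filename=name, local_dir=local_dir)
        for name in CONFIG_FILES
    ]

if __name__ == "__main__":
    fetch_configs(
        "mistralai/Mistral-Small-3.1-24B-Instruct-2503",  # assumed source repo
        "./Mistral-Small-3.2-24B-Instruct-2506",
    )
```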

Hopefully this saves others some time:

from vllm import LLM

llm = LLM(
    model="mistralai/Mistral-Small-3.2-24B-Instruct-2506",
    tokenizer_mode="mistral",
    config_format="mistral",
    load_format="mistral",
)
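For completeness, here is a sketch of actually generating with that constructor. The sampling values are illustrative only, and the import is deferred because vLLM needs a GPU to construct the engine:

```python
# The same constructor arguments as a dict, so they can be inspected or reused.
MISTRAL_LLM_KWARGS = dict(
    model="mistralai/Mistral-Small-3.2-24B-Instruct-2506",
    tokenizer_mode="mistral",
    config_format="mistral",
    load_format="mistral",
)

def generate(prompt: str, max_tokens: int = 64) -> str:
    # Deferred import: vLLM is heavy and GPU-bound.
    from vllm import LLM, SamplingParams
    llm = LLM(**MISTRAL_LLM_KWARGS)
    out = llm.generate([prompt], SamplingParams(max_tokens=max_tokens, temperature=0.0))
    return out[0].outputs[0].text
```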

"llm = LLM(
model='mistralai/Mistral-Small-3.2-24B-Instruct-2506',
tokenizer_mode="mistral",
config_format="mistral",
load_format="mistral",
)"

Is this how it is implemented in vLLM?
So we have to accept that the usual Hugging Face download path does not work, i.e. the following fails:

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.save_pretrained(local_path)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.save_pretrained(local_path, safe_serialization=False)

Error: KeyError: <class 'transformers.models.mistral3.configuration_mistral3.Mistral3Config'>
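My reading of that KeyError (an assumption, not confirmed by the repo): Mistral3 is registered in Transformers as an image-text-to-text architecture, so `Mistral3Config` has no entry in the `AutoModelForCausalLM` mapping. A sketch of the workaround, assuming a Transformers version recent enough to ship Mistral3 support:

```python
# Sketch: load via the image-text-to-text auto class instead of the
# causal-LM one, since Mistral3Config is not in the causal-LM mapping.
MODEL_ID = "mistralai/Mistral-Small-3.2-24B-Instruct-2506"

def download_model(local_path: str):
    # Deferred import; requires a transformers release that includes Mistral3.
    from transformers import AutoModelForImageTextToText, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    tokenizer.save_pretrained(local_path)
    model = AutoModelForImageTextToText.from_pretrained(MODEL_ID)
    model.save_pretrained(local_path)
    return model
```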
