New version is missing a vocab file; can't start up the gpt-oss model
[rank0]:[W814 04:38:00.975202414 ProcessGroupNCCL.cpp:1522] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
(APIServer pid=7) Traceback (most recent call last):
(APIServer pid=7) File "/usr/local/bin/vllm", line 10, in
(APIServer pid=7) sys.exit(main())
(APIServer pid=7) ^^^^^^
(APIServer pid=7) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/cli/main.py", line 54, in main
(APIServer pid=7) args.dispatch_function(args)
(APIServer pid=7) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/cli/serve.py", line 50, in cmd
(APIServer pid=7) uvloop.run(run_server(args))
(APIServer pid=7) File "/usr/local/lib/python3.12/dist-packages/uvloop/init.py", line 109, in run
(APIServer pid=7) return __asyncio.run(
(APIServer pid=7) ^^^^^^^^^^^^^^
(APIServer pid=7) File "/usr/lib/python3.12/asyncio/runners.py", line 195, in run
(APIServer pid=7) return runner.run(main)
(APIServer pid=7) ^^^^^^^^^^^^^^^^
(APIServer pid=7) File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
(APIServer pid=7) return self._loop.run_until_complete(task)
(APIServer pid=7) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=7) File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
(APIServer pid=7) File "/usr/local/lib/python3.12/dist-packages/uvloop/init.py", line 61, in wrapper
(APIServer pid=7) return await main
(APIServer pid=7) ^^^^^^^^^^
(APIServer pid=7) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1827, in run_server
(APIServer pid=7) await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
(APIServer pid=7) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1855, in run_server_worker
(APIServer pid=7) await init_app_state(engine_client, vllm_config, app.state, args)
(APIServer pid=7) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1657, in init_app_state
(APIServer pid=7) state.openai_serving_responses = OpenAIServingResponses(
(APIServer pid=7) ^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=7) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/serving_responses.py", line 130, in init
(APIServer pid=7) get_stop_tokens_for_assistant_actions())
(APIServer pid=7) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=7) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/harmony_utils.py", line 187, in get_stop_tokens_for_assistant_actions
(APIServer pid=7) return get_encoding().stop_tokens_for_assistant_actions()
(APIServer pid=7) ^^^^^^^^^^^^^^
(APIServer pid=7) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/harmony_utils.py", line 37, in get_encoding
(APIServer pid=7) _harmony_encoding = load_harmony_encoding(
(APIServer pid=7) ^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=7) File "/usr/local/lib/python3.12/dist-packages/openai_harmony/init.py", line 670, in load_harmony_encoding
(APIServer pid=7) inner: _PyHarmonyEncoding = _load_harmony_encoding(name)
(APIServer pid=7) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=7) openai_harmony.HarmonyError: error downloading or loading vocab file: failed to download or load vocab file
From Googling the issue: if your service runs in an offline environment, you need to download the two tiktoken encoder files separately, since openai_harmony cannot fetch them at startup.
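Below is a minimal sketch of one possible offline workaround. It assumes openai_harmony resolves its vocab files through a local cache directory pointed to by the TIKTOKEN_RS_CACHE_DIR environment variable, that the cached file name follows tiktoken's convention of the SHA-1 hex digest of the source URL, and that the two files in question are o200k_base.tiktoken and cl100k_base.tiktoken from openaipublic.blob.core.windows.net; the cache path /opt/tiktoken_cache is arbitrary. Verify these details against the openai_harmony version you have installed.

```python
# Pre-seed the tiktoken vocab cache on a machine WITH internet access,
# then copy the directory to the offline host.
import hashlib
import os
import urllib.request

CACHE_DIR = "/opt/tiktoken_cache"  # assumption: any directory readable by the vLLM process
VOCAB_URLS = [
    "https://openaipublic.blob.core.windows.net/encodings/o200k_base.tiktoken",
    "https://openaipublic.blob.core.windows.net/encodings/cl100k_base.tiktoken",
]

os.makedirs(CACHE_DIR, exist_ok=True)
for url in VOCAB_URLS:
    data = urllib.request.urlopen(url).read()
    # tiktoken's Python cache names files by the SHA-1 hex digest of the URL;
    # assuming openai_harmony/tiktoken-rs uses the same scheme. Also keep a copy
    # under the plain file name in case your version expects that instead.
    hashed_name = hashlib.sha1(url.encode()).hexdigest()
    plain_name = url.rsplit("/", 1)[-1]
    for name in (hashed_name, plain_name):
        path = os.path.join(CACHE_DIR, name)
        with open(path, "wb") as f:
            f.write(data)
        print("wrote", path)
```

Then start the server on the offline host with the cache directory exported, e.g. `TIKTOKEN_RS_CACHE_DIR=/opt/tiktoken_cache vllm serve openai/gpt-oss-20b` (again, treat the environment variable name and model id as assumptions to confirm for your setup).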