(vllm-py312) zhouyihong@zhouyihongdeMacBook-Pro-2 / % conda run --live-stream --name vllm-py312 python /Users/zhouyihong/Desktop/测测试.py
INFO 08-04 14:20:25 [__init__.py:235] Automatically detected platform cpu.
WARNING 08-04 14:20:26 [_custom_ops.py:20] Failed to import from vllm._C with ImportError("dlopen(/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/site-packages/vllm/_C.abi3.so, 0x0002): symbol not found in flat namespace '__Z14int8_scaled_mmRN2at6TensorERKS0_S3_S3_S3_RKNSt3__18optionalIS0_EE'")
Loaded questionnaire data: 2019 rows in total
Question 1: 推荐者指数.B4 Regarding the service provided during this sale, which part did you like most? (at least 50 characters)
Question 2: 推荐者指数.B3 Based on your experience, what is keeping you from proactively recommending this store? (at least 50 characters)
Question 3: 推荐者指数.B2 Please describe in detail what improvements the salesperson could make to give you a better customer experience. (at least 50 characters)
Will process rows 1 through 10, 10 rows of data in total
WARNING 08-04 14:20:29 [config.py:3392] Your device 'cpu' doesn't support torch.bfloat16. Falling back to torch.float16 for compatibility.
WARNING 08-04 14:20:29 [config.py:3443] Casting torch.bfloat16 to torch.float16.
INFO 08-04 14:20:29 [config.py:1604] Using max model len 40960
WARNING 08-04 14:20:29 [cpu.py:113] Environment variable VLLM_CPU_KVCACHE_SPACE (GiB) for CPU backend is not set, using 4 by default.
INFO 08-04 14:20:29 [arg_utils.py:1030] Chunked prefill is not supported for ARM and POWER CPUs; disabling it for V1 backend.
INFO 08-04 14:20:32 [__init__.py:235] Automatically detected platform cpu.
WARNING 08-04 14:20:33 [_custom_ops.py:20] Failed to import from vllm._C with ImportError("dlopen(/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/site-packages/vllm/_C.abi3.so, 0x0002): symbol not found in flat namespace '__Z14int8_scaled_mmRN2at6TensorERKS0_S3_S3_S3_RKNSt3__18optionalIS0_EE'")
Loaded questionnaire data: 2019 rows in total
Question 1: 推荐者指数.B4 Regarding the service provided during this sale, which part did you like most? (at least 50 characters)
Question 2: 推荐者指数.B3 Based on your experience, what is keeping you from proactively recommending this store? (at least 50 characters)
Question 3: 推荐者指数.B2 Please describe in detail what improvements the salesperson could make to give you a better customer experience. (at least 50 characters)
Will process rows 1 through 10, 10 rows of data in total
WARNING 08-04 14:20:36 [config.py:3392] Your device 'cpu' doesn't support torch.bfloat16. Falling back to torch.float16 for compatibility.
WARNING 08-04 14:20:36 [config.py:3443] Casting torch.bfloat16 to torch.float16.
INFO 08-04 14:20:36 [config.py:1604] Using max model len 40960
WARNING 08-04 14:20:36 [cpu.py:113] Environment variable VLLM_CPU_KVCACHE_SPACE (GiB) for CPU backend is not set, using 4 by default.
INFO 08-04 14:20:36 [arg_utils.py:1030] Chunked prefill is not supported for ARM and POWER CPUs; disabling it for V1 backend.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/multiprocessing/spawn.py", line 122, in spawn_main
exitcode = _main(fd, parent_sentinel)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/multiprocessing/spawn.py", line 131, in _main
prepare(preparation_data)
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/multiprocessing/spawn.py", line 246, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/multiprocessing/spawn.py", line 297, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen runpy>", line 287, in run_path
File "<frozen runpy>", line 98, in _run_module_code
File "<frozen runpy>", line 88, in _run_code
File "/Users/zhouyihong/Desktop/测测试.py", line 90, in <module>
llm = LLM(
^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 273, in __init__
self.llm_engine = LLMEngine.from_engine_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/site-packages/vllm/v1/engine/llm_engine.py", line 152, in from_engine_args
return cls(vllm_config=vllm_config,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/site-packages/vllm/v1/engine/llm_engine.py", line 103, in __init__
self.engine_core = EngineCoreClient.make_client(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 77, in make_client
return SyncMPClient(vllm_config, executor_class, log_stats)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 514, in __init__
super().__init__(
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 408, in __init__
with launch_core_engines(vllm_config, executor_class,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/site-packages/vllm/v1/engine/utils.py", line 680, in launch_core_engines
local_engine_manager = CoreEngineProcManager(
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/site-packages/vllm/v1/engine/utils.py", line 133, in __init__
proc.start()
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/multiprocessing/context.py", line 289, in _Popen
return Popen(process_obj)
^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/multiprocessing/popen_spawn_posix.py", line 42, in _launch
prep_data = spawn.get_preparation_data(process_obj._name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/multiprocessing/spawn.py", line 164, in get_preparation_data
_check_not_importing_main()
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/multiprocessing/spawn.py", line 140, in _check_not_importing_main
raise RuntimeError('''
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
To fix this issue, refer to the "Safe importing of main module"
section in https://docs.python.org/3/library/multiprocessing.html
Traceback (most recent call last):
File "/Users/zhouyihong/Desktop/测测试.py", line 90, in <module>
llm = LLM(
^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 273, in __init__
self.llm_engine = LLMEngine.from_engine_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/site-packages/vllm/engine/llm_engine.py", line 497, in from_engine_args
return engine_cls.from_vllm_config(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/site-packages/vllm/v1/engine/llm_engine.py", line 126, in from_vllm_config
return cls(vllm_config=vllm_config,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/site-packages/vllm/v1/engine/llm_engine.py", line 103, in __init__
self.engine_core = EngineCoreClient.make_client(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 77, in make_client
return SyncMPClient(vllm_config, executor_class, log_stats)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 514, in __init__
super().__init__(
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 408, in __init__
with launch_core_engines(vllm_config, executor_class,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/contextlib.py", line 144, in __exit__
next(self.gen)
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/site-packages/vllm/v1/engine/utils.py", line 697, in launch_core_engines
wait_for_engine_startup(
File "/opt/homebrew/Caskroom/miniconda/base/envs/vllm-py312/lib/python3.12/site-packages/vllm/v1/engine/utils.py", line 750, in wait_for_engine_startup
raise RuntimeError("Engine core initialization failed. "
RuntimeError: Engine core initialization failed. See root cause above. Failed core proc(s): {'EngineCore_0': 1}
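The root cause is the RuntimeError from `_check_not_importing_main`: vLLM's V1 engine starts its EngineCore worker with the multiprocessing "spawn" start method, which re-imports 测测试.py in the child; since the script constructs `LLM(...)` at module top level (line 90), the child tries to spawn again before its bootstrap finishes. The fix is the standard `if __name__ == '__main__':` guard the error message points to. The sketch below demonstrates the same pattern with only the standard library (the `worker` function and message string are illustrative, not from the original script); in the real script, the `LLM(...)` construction and generation loop would go inside `main()`.

```python
import multiprocessing as mp

def worker(q):
    # Stand-in for vLLM's EngineCore child process.
    q.put("engine-core ready")

def main():
    # "spawn" is the start method vLLM uses here; it re-imports this module
    # in the child, so any unguarded top-level code would run again there
    # and trigger the same RuntimeError seen in the log.
    ctx = mp.get_context("spawn")
    q = ctx.Queue()
    p = ctx.Process(target=worker, args=(q,))
    p.start()
    msg = q.get()
    p.join()
    return msg

if __name__ == "__main__":
    # All work lives under this guard so the spawned child can import the
    # module safely without re-executing it.
    print(main())  # prints "engine-core ready"
```

Applied to 测测试.py, this means moving the data loading, the `llm = LLM(...)` call, and the generation loop into a `main()` function called only under the guard.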