[Usage]: serving 'LLaVA-Next-Video-7B-Qwen2' #11731
This model isn't in HF format. You can try converting the weights in a similar way as #7984 (comment).
Then what is the format of this model? Okay, I will try to convert it first. Thanks!
It is likely based on the original LLaVA format: https://github.com/haotian-liu/LLaVA
Hello, I have converted the model's weights (converted model). It can be served via vLLM; however, it can't process videos. Server output:
Make sure that you're using the llava-next-video config instead of the llava config.
Do you mean that when converting the weights I should use the llava-next-video config?
If they have a corresponding script, sure. |
Unfortunately, they don't provide a script for that. Are there other ways to use the llava-next-video config instead of the llava config?
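One way to check whether a converted checkpoint will be treated as a video model is to inspect the `model_type` field in its `config.json`: vLLM resolves the model implementation through transformers' `AutoConfig`, which dispatches on that field. A minimal sketch (the checkpoint directory here is a hypothetical stand-in created for illustration; point it at the real conversion output):

```python
import json
import tempfile
from pathlib import Path

# Stand-in for the converted checkpoint directory (hypothetical path;
# replace with the real output of your conversion script).
ckpt = Path(tempfile.mkdtemp())
(ckpt / "config.json").write_text(json.dumps({"model_type": "llava"}))

# AutoConfig dispatches on this field, so a checkpoint converted with a
# plain LLaVA config will be served as an image-only "llava" model even
# if the weights came from a video model.
model_type = json.loads((ckpt / "config.json").read_text())["model_type"]
print(model_type)  # "llava" here -> videos will not be processed
```

If this prints "llava" rather than "llava_next_video", the conversion emitted the wrong config class for video input.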
Conda environment (python=3.12):
_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
absl-py 2.1.0 pypi_0 pypi
accelerate 1.0.1 pypi_0 pypi
aiohappyeyeballs 2.4.3 pypi_0 pypi
aiohttp 3.10.10 pypi_0 pypi
aiohttp-cors 0.7.0 pypi_0 pypi
aiosignal 1.3.1 pypi_0 pypi
airportsdata 20241001 pypi_0 pypi
annotated-types 0.7.0 pypi_0 pypi
anyio 4.6.2.post1 pypi_0 pypi
argcomplete 3.5.1 pypi_0 pypi
astor 0.8.1 pypi_0 pypi
attrs 24.2.0 pypi_0 pypi
audioread 3.0.1 pypi_0 pypi
awscli 1.35.23 pypi_0 pypi
bitsandbytes 0.45.0 pypi_0 pypi
black 24.10.0 pypi_0 pypi
blake3 1.0.0 pypi_0 pypi
boto3 1.35.57 pypi_0 pypi
botocore 1.35.57 pypi_0 pypi
buildkite-test-collector 0.1.9 pypi_0 pypi
bzip2 1.0.8 h5eee18b_6
c-ares 1.19.1 h5eee18b_0
ca-certificates 2024.11.26 h06a4308_0
cachetools 5.5.0 pypi_0 pypi
certifi 2024.8.30 pypi_0 pypi
cffi 1.17.1 pypi_0 pypi
chardet 5.2.0 pypi_0 pypi
charset-normalizer 3.4.0 pypi_0 pypi
clang-format 18.1.5 pypi_0 pypi
click 8.1.7 pypi_0 pypi
cloudpickle 3.1.0 pypi_0 pypi
cmake 3.31.2 pypi_0 pypi
codespell 2.3.0 pypi_0 pypi
colorama 0.4.6 pypi_0 pypi
colorful 0.5.6 pypi_0 pypi
compressed-tensors 0.8.1 pypi_0 pypi
contourpy 1.3.0 pypi_0 pypi
cupy-cuda12x 13.3.0 pypi_0 pypi
cycler 0.12.1 pypi_0 pypi
datamodel-code-generator 0.26.3 pypi_0 pypi
dataproperty 1.0.1 pypi_0 pypi
datasets 3.0.2 pypi_0 pypi
decorator 5.1.1 pypi_0 pypi
decord 0.6.0 pypi_0 pypi
depyf 0.18.0 pypi_0 pypi
dill 0.3.8 pypi_0 pypi
diskcache 5.6.3 pypi_0 pypi
distlib 0.3.9 pypi_0 pypi
distro 1.9.0 pypi_0 pypi
dnspython 2.7.0 pypi_0 pypi
docutils 0.16 pypi_0 pypi
einops 0.8.0 pypi_0 pypi
email-validator 2.2.0 pypi_0 pypi
evaluate 0.4.3 pypi_0 pypi
expat 2.6.4 h6a678d5_0
fastapi 0.115.6 pypi_0 pypi
fastrlock 0.8.2 pypi_0 pypi
filelock 3.16.1 pypi_0 pypi
fonttools 4.54.1 pypi_0 pypi
frozenlist 1.5.0 pypi_0 pypi
fsspec 2024.9.0 pypi_0 pypi
genson 1.3.0 pypi_0 pypi
gguf 0.10.0 pypi_0 pypi
google-api-core 2.24.0 pypi_0 pypi
google-auth 2.37.0 pypi_0 pypi
googleapis-common-protos 1.66.0 pypi_0 pypi
grpcio 1.68.1 pypi_0 pypi
h11 0.14.0 pypi_0 pypi
hiredis 3.0.0 pypi_0 pypi
httpcore 1.0.6 pypi_0 pypi
httptools 0.6.4 pypi_0 pypi
httpx 0.27.2 pypi_0 pypi
huggingface-cli 0.1 pypi_0 pypi
huggingface-hub 0.26.2 pypi_0 pypi
idna 3.10 pypi_0 pypi
importlib-metadata 8.5.0 pypi_0 pypi
inflect 5.6.2 pypi_0 pypi
iniconfig 2.0.0 pypi_0 pypi
interegular 0.3.3 pypi_0 pypi
isort 5.13.2 pypi_0 pypi
jinja2 3.1.4 pypi_0 pypi
jiter 0.8.2 pypi_0 pypi
jmespath 1.0.1 pypi_0 pypi
joblib 1.4.2 pypi_0 pypi
jsonlines 4.0.0 pypi_0 pypi
jsonschema 4.23.0 pypi_0 pypi
jsonschema-specifications 2024.10.1 pypi_0 pypi
kiwisolver 1.4.7 pypi_0 pypi
krb5 1.20.1 h143b758_1
lark 1.2.2 pypi_0 pypi
lazy-loader 0.4 pypi_0 pypi
ld_impl_linux-64 2.40 h12ee557_0
libcurl 8.11.1 hc9e6f67_0
libedit 3.1.20230828 h5eee18b_0
libev 4.33 h7f8727e_1
libffi 3.4.4 h6a678d5_1
libgcc-ng 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libnacl 2.1.0 pypi_0 pypi
libnghttp2 1.57.0 h2d74bed_0
librosa 0.10.2.post1 pypi_0 pypi
libssh2 1.11.1 h251f7ec_0
libstdcxx-ng 11.2.0 h1234567_1
libuuid 1.41.5 h5eee18b_0
libuv 1.48.0 h5eee18b_0
linkify-it-py 2.0.3 pypi_0 pypi
llvmlite 0.43.0 pypi_0 pypi
lm-eval 0.4.4 pypi_0 pypi
lm-format-enforcer 0.10.9 pypi_0 pypi
lxml 5.3.0 pypi_0 pypi
lz4-c 1.9.4 h6a678d5_1
markdown-it-py 3.0.0 pypi_0 pypi
markupsafe 3.0.2 pypi_0 pypi
matplotlib 3.9.2 pypi_0 pypi
mbstrdecoder 1.1.3 pypi_0 pypi
mdit-py-plugins 0.4.2 pypi_0 pypi
mdurl 0.1.2 pypi_0 pypi
memray 1.15.0 pypi_0 pypi
mistral-common 1.5.1 pypi_0 pypi
more-itertools 10.5.0 pypi_0 pypi
mpmath 1.3.0 pypi_0 pypi
msgpack 1.1.0 pypi_0 pypi
msgspec 0.19.0 pypi_0 pypi
multidict 6.1.0 pypi_0 pypi
multiprocess 0.70.16 pypi_0 pypi
mypy 1.11.1 pypi_0 pypi
mypy-extensions 1.0.0 pypi_0 pypi
ncurses 6.4 h6a678d5_0
nest-asyncio 1.6.0 pypi_0 pypi
networkx 3.2.1 pypi_0 pypi
ninja 1.11.1.3 pypi_0 pypi
ninja-base 1.12.1 hdb19cb5_0
nltk 3.9.1 pypi_0 pypi
numba 0.60.0 pypi_0 pypi
numexpr 2.10.1 pypi_0 pypi
numpy 1.26.4 pypi_0 pypi
nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
nvidia-ml-py 12.560.30 pypi_0 pypi
nvidia-nccl-cu12 2.21.5 pypi_0 pypi
nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
openai 1.58.1 pypi_0 pypi
opencensus 0.11.4 pypi_0 pypi
opencensus-context 0.1.3 pypi_0 pypi
opencv-python-headless 4.10.0.84 pypi_0 pypi
openssl 3.0.15 h5eee18b_0
outlines 0.1.11 pypi_0 pypi
outlines-core 0.1.26 pypi_0 pypi
packaging 24.1 pypi_0 pypi
pandas 2.2.3 pypi_0 pypi
partial-json-parser 0.2.1.1.post4 pypi_0 pypi
pathspec 0.12.1 pypi_0 pypi
pathvalidate 3.2.1 pypi_0 pypi
peft 0.13.2 pypi_0 pypi
pillow 10.4.0 pypi_0 pypi
pip 24.3.1 pypi_0 pypi
platformdirs 4.3.6 pypi_0 pypi
pluggy 1.5.0 pypi_0 pypi
polib 1.2.0 pypi_0 pypi
pooch 1.8.2 pypi_0 pypi
portalocker 2.10.1 pypi_0 pypi
prometheus-client 0.21.1 pypi_0 pypi
prometheus-fastapi-instrumentator 7.0.0 pypi_0 pypi
propcache 0.2.0 pypi_0 pypi
proto-plus 1.25.0 pypi_0 pypi
protobuf 5.28.3 pypi_0 pypi
psutil 6.1.0 pypi_0 pypi
py 1.11.0 pypi_0 pypi
py-cpuinfo 9.0.0 pypi_0 pypi
py-spy 0.4.0 pypi_0 pypi
pyarrow 18.0.0 pypi_0 pypi
pyasn1 0.6.1 pypi_0 pypi
pyasn1-modules 0.4.1 pypi_0 pypi
pybind11 2.13.6 pypi_0 pypi
pycountry 24.6.1 pypi_0 pypi
pycparser 2.22 pypi_0 pypi
pydantic 2.9.2 pypi_0 pypi
pydantic-core 2.23.4 pypi_0 pypi
pygments 2.18.0 pypi_0 pypi
pyparsing 3.2.0 pypi_0 pypi
pytablewriter 1.2.0 pypi_0 pypi
pytest 8.3.3 pypi_0 pypi
pytest-asyncio 0.24.0 pypi_0 pypi
pytest-forked 1.6.0 pypi_0 pypi
pytest-rerunfailures 14.0 pypi_0 pypi
pytest-shard 0.1.2 pypi_0 pypi
python 3.12.8 h5148396_0
python-dateutil 2.9.0.post0 pypi_0 pypi
python-dotenv 1.0.1 pypi_0 pypi
pytz 2024.2 pypi_0 pypi
pyyaml 6.0.2 pypi_0 pypi
pyzmq 26.2.0 pypi_0 pypi
ray 2.40.0 pypi_0 pypi
readline 8.2 h5eee18b_0
redis 5.2.0 pypi_0 pypi
referencing 0.35.1 pypi_0 pypi
regex 2024.9.11 pypi_0 pypi
requests 2.32.3 pypi_0 pypi
rhash 1.4.3 hdbd6064_0
rich 13.9.4 pypi_0 pypi
rouge-score 0.1.2 pypi_0 pypi
rpds-py 0.20.1 pypi_0 pypi
rsa 4.7.2 pypi_0 pypi
ruff 0.6.5 pypi_0 pypi
s3transfer 0.10.3 pypi_0 pypi
sacrebleu 2.4.3 pypi_0 pypi
safetensors 0.4.5 pypi_0 pypi
scikit-learn 1.5.2 pypi_0 pypi
scipy 1.13.1 pypi_0 pypi
sentence-transformers 3.2.1 pypi_0 pypi
sentencepiece 0.2.0 pypi_0 pypi
setuptools 75.6.0 pypi_0 pypi
setuptools-scm 8.1.0 pypi_0 pypi
six 1.16.0 pypi_0 pypi
smart-open 7.1.0 pypi_0 pypi
sniffio 1.3.1 pypi_0 pypi
soundfile 0.12.1 pypi_0 pypi
soxr 0.5.0.post1 pypi_0 pypi
sphinx-lint 1.0.0 pypi_0 pypi
sqlite 3.45.3 h5eee18b_0
sqlitedict 2.1.0 pypi_0 pypi
starlette 0.41.3 pypi_0 pypi
sympy 1.13.1 pypi_0 pypi
tabledata 1.3.3 pypi_0 pypi
tabulate 0.9.0 pypi_0 pypi
tcolorpy 0.1.6 pypi_0 pypi
tenacity 9.0.0 pypi_0 pypi
tensorizer 2.9.0 pypi_0 pypi
textual 1.0.0 pypi_0 pypi
threadpoolctl 3.5.0 pypi_0 pypi
tiktoken 0.7.0 pypi_0 pypi
timm 1.0.11 pypi_0 pypi
tk 8.6.14 h39e8969_0
tokenizers 0.21.0 pypi_0 pypi
toml 0.10.2 pypi_0 pypi
tomli 2.0.2 pypi_0 pypi
torch 2.5.1 pypi_0 pypi
torchvision 0.20.1 pypi_0 pypi
tqdm 4.66.6 pypi_0 pypi
tqdm-multiprocess 0.0.11 pypi_0 pypi
transformers 4.48.0.dev0 pypi_0 pypi
transformers-stream-generator 0.0.5 pypi_0 pypi
triton 3.1.0 pypi_0 pypi
typepy 1.3.2 pypi_0 pypi
types-pyyaml 6.0.12.20241230 pypi_0 pypi
types-requests 2.31.0.6 pypi_0 pypi
types-setuptools 75.6.0.20241223 pypi_0 pypi
types-urllib3 1.26.25.14 pypi_0 pypi
typing-extensions 4.12.2 pypi_0 pypi
tzdata 2024.2 pypi_0 pypi
uc-micro-py 1.0.3 pypi_0 pypi
urllib3 1.26.20 pypi_0 pypi
uvicorn 0.34.0 pypi_0 pypi
uvloop 0.21.0 pypi_0 pypi
virtualenv 20.28.1 pypi_0 pypi
vllm 0.6.6.post1 pypi_0 pypi
watchfiles 1.0.3 pypi_0 pypi
websockets 14.1 pypi_0 pypi
wheel 0.45.1 pypi_0 pypi
word2number 1.1 pypi_0 pypi
wrapt 1.17.0 pypi_0 pypi
xformers 0.0.28.post3 pypi_0 pypi
xgrammar 0.1.8 pypi_0 pypi
xxhash 3.5.0 pypi_0 pypi
xz 5.4.6 h5eee18b_1
yapf 0.32.0 pypi_0 pypi
yarl 1.17.1 pypi_0 pypi
zipp 3.21.0 pypi_0 pypi
zlib 1.2.13 h5eee18b_1
zstandard 0.23.0 pypi_0 pypi
zstd 1.5.6 hc292b87_0
How would you like to use vllm
Hello, I have an issue with serving this model: llava-hf/LLaVA-Next-Video-7B-Qwen2-hf, which I believe is the Hugging Face version of lmms-lab/LLaVA-Video-7B-Qwen2.
I run vLLM using this command:
python -m vllm.entrypoints.openai.api_server --model=/mnt/datadisk0/sanya/LLaVA-NeXT-SC/llava-hf/LLaVA-Next-Video-7B-Qwen2-hf --task generate
And here is an error:
Traceback (most recent call last):
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/transformers/models/auto/configuration_auto.py", line 1059, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/transformers/models/auto/configuration_auto.py", line 761, in __getitem__
raise KeyError(key)
KeyError: 'llava_next_video2'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 774, in <module>
uvloop.run(run_server(args))
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/uvloop/__init__.py", line 109, in run
return __asyncio.run(
^^^^^^^^^^^^^^
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/asyncio/runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/uvloop/__init__.py", line 61, in wrapper
return await main
^^^^^^^^^^
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 740, in run_server
async with build_async_engine_client(args) as engine_client:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 118, in build_async_engine_client
async with build_async_engine_client_from_engine_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 210, in build_async_engine_client_from_engine_args
engine_config = engine_args.create_engine_config()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/engine/arg_utils.py", line 1044, in create_engine_config
model_config = self.create_model_config()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/engine/arg_utils.py", line 970, in create_model_config
return ModelConfig(
^^^^^^^^^^^^
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/config.py", line 276, in __init__
hf_config = get_config(self.model, trust_remote_code, revision,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/transformers_utils/config.py", line 239, in get_config
raise e
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/transformers_utils/config.py", line 219, in get_config
config = AutoConfig.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/transformers/models/auto/configuration_auto.py", line 1061, in from_pretrained
raise ValueError(
ValueError: The checkpoint you are trying to load has model type `llava_next_video2` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/transformers/utils/hub.py:128: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
warnings.warn(
ERROR 01-04 16:01:55 engine.py:366] The checkpoint you are trying to load has model type `llava_next_video2` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
ERROR 01-04 16:01:55 engine.py:366] Traceback (most recent call last):
ERROR 01-04 16:01:55 engine.py:366] File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/transformers/models/auto/configuration_auto.py", line 1059, in from_pretrained
ERROR 01-04 16:01:55 engine.py:366] config_class = CONFIG_MAPPING[config_dict["model_type"]]
ERROR 01-04 16:01:55 engine.py:366] ~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-04 16:01:55 engine.py:366] File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/transformers/models/auto/configuration_auto.py", line 761, in __getitem__
ERROR 01-04 16:01:55 engine.py:366] raise KeyError(key)
ERROR 01-04 16:01:55 engine.py:366] KeyError: 'llava_next_video2'
ERROR 01-04 16:01:55 engine.py:366]
ERROR 01-04 16:01:55 engine.py:366] During handling of the above exception, another exception occurred:
ERROR 01-04 16:01:55 engine.py:366]
ERROR 01-04 16:01:55 engine.py:366] Traceback (most recent call last):
ERROR 01-04 16:01:55 engine.py:366] File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 357, in run_mp_engine
ERROR 01-04 16:01:55 engine.py:366] engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
ERROR 01-04 16:01:55 engine.py:366] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-04 16:01:55 engine.py:366] File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 114, in from_engine_args
ERROR 01-04 16:01:55 engine.py:366] engine_config = engine_args.create_engine_config(usage_context)
ERROR 01-04 16:01:55 engine.py:366] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-04 16:01:55 engine.py:366] File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/engine/arg_utils.py", line 1044, in create_engine_config
ERROR 01-04 16:01:55 engine.py:366] model_config = self.create_model_config()
ERROR 01-04 16:01:55 engine.py:366] ^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-04 16:01:55 engine.py:366] File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/engine/arg_utils.py", line 970, in create_model_config
ERROR 01-04 16:01:55 engine.py:366] return ModelConfig(
ERROR 01-04 16:01:55 engine.py:366] ^^^^^^^^^^^^
ERROR 01-04 16:01:55 engine.py:366] File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/config.py", line 276, in __init__
ERROR 01-04 16:01:55 engine.py:366] hf_config = get_config(self.model, trust_remote_code, revision,
ERROR 01-04 16:01:55 engine.py:366] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-04 16:01:55 engine.py:366] File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/transformers_utils/config.py", line 239, in get_config
ERROR 01-04 16:01:55 engine.py:366] raise e
ERROR 01-04 16:01:55 engine.py:366] File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/transformers_utils/config.py", line 219, in get_config
ERROR 01-04 16:01:55 engine.py:366] config = AutoConfig.from_pretrained(
ERROR 01-04 16:01:55 engine.py:366] ^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-04 16:01:55 engine.py:366] File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/transformers/models/auto/configuration_auto.py", line 1061, in from_pretrained
ERROR 01-04 16:01:55 engine.py:366] raise ValueError(
ERROR 01-04 16:01:55 engine.py:366] ValueError: The checkpoint you are trying to load has model type `llava_next_video2` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
Process SpawnProcess-1:
Traceback (most recent call last):
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/transformers/models/auto/configuration_auto.py", line 1059, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/transformers/models/auto/configuration_auto.py", line 761, in __getitem__
raise KeyError(key)
KeyError: 'llava_next_video2'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 368, in run_mp_engine
raise e
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 357, in run_mp_engine
engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 114, in from_engine_args
engine_config = engine_args.create_engine_config(usage_context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/engine/arg_utils.py", line 1044, in create_engine_config
model_config = self.create_model_config()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/engine/arg_utils.py", line 970, in create_model_config
return ModelConfig(
^^^^^^^^^^^^
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/config.py", line 276, in __init__
hf_config = get_config(self.model, trust_remote_code, revision,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/transformers_utils/config.py", line 239, in get_config
raise e
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/transformers_utils/config.py", line 219, in get_config
config = AutoConfig.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/datadisk0/miniconda3/envs/vllm/lib/python3.12/site-packages/transformers/models/auto/configuration_auto.py", line 1061, in from_pretrained
raise ValueError(
ValueError: The checkpoint you are trying to load has model type `llava_next_video2` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.

Are there any workarounds?
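One workaround sometimes used for this kind of mismatch is to rename the unrecognized `model_type` in the local checkpoint's `config.json` to a type that the installed transformers does register, so `AutoConfig` can resolve it. This is only safe if the weights actually follow the target architecture's layout; the sketch below demos the rewrite on a hypothetical stand-in directory, not a real checkpoint:

```python
import json
import tempfile
from pathlib import Path

def patch_model_type(model_dir: Path, bad: str, good: str) -> bool:
    """Rewrite config.json's model_type from `bad` to `good`.

    Returns True if a rewrite happened. Only safe when the weights really
    follow the `good` architecture's layout -- a workaround sketch, not a fix.
    """
    cfg_path = model_dir / "config.json"
    cfg = json.loads(cfg_path.read_text())
    if cfg.get("model_type") != bad:
        return False
    cfg["model_type"] = good
    cfg_path.write_text(json.dumps(cfg, indent=2))
    return True

# Demo on a stand-in directory; point model_dir at the real local checkpoint.
demo = Path(tempfile.mkdtemp())
(demo / "config.json").write_text(json.dumps({"model_type": "llava_next_video2"}))
patched = patch_model_type(demo, "llava_next_video2", "llava_next_video")
print(patched)  # True
```

After patching a real checkpoint, pass the local directory to `--model` again; if the architecture genuinely differs from llava-next-video, loading will still fail later at weight-mapping time, which is itself a useful diagnostic.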