
xinference didn't support qwen2-vl-72B? #2730

cqray1990 opened this issue Jan 2, 2025 · 13 comments
@cqray1990

System Info

Ubuntu 20.04

Running Xinference with Docker?

  • docker
  • pip install
  • installation from source

Version info

xinference 1.1.1

The command used to start Xinference

xinference-local --host 0.0.0.0 --port 9997

Doesn't xinference support qwen2-vl-72B?

Reproduction

xinference only supports qwen-vl-chat, cogvlm2, glm-4v, MiniCPM-V-2.6 for batching. Your model Qwen2-vl-72B with model family qwen2-vl-instruct is disqualified.

Expected behavior

xinference-local --host 0.0.0.0 --port 9997
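
For context, this is roughly how the model would be launched against that endpoint once xinference-local is running; a minimal sketch via the Python client, assuming the RESTful client and these launch_model parameters are available in xinference 1.1.1:

    from xinference.client import Client

    # Connect to the locally started xinference server.
    client = Client("http://localhost:9997")

    # Launch the built-in qwen2-vl-instruct family (the family named in the
    # error above); the parameter names are assumptions about the client API.
    model_uid = client.launch_model(
        model_name="qwen2-vl-instruct",
        model_type="LLM",
        model_engine="vllm",          # or "transformers"
        model_format="pytorch",
        model_size_in_billions=72,
    )
    print(model_uid)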

@XprobeBot XprobeBot added this to the v1.x milestone Jan 2, 2025
@948024326

[screenshots]
It's supported, bro.

@cqray1990
Author

cqray1990 commented Jan 3, 2025

[screenshot]

But there is no vLLM option; Qwen2-vl-72B is a custom model, loaded from a local path.

@948024326

Try downgrading to an earlier version like 0.16.1 or 0.16.3?


@cqray1990
Author

cqray1990 commented Jan 3, 2025


@948024326 OK, I will try. Is it not supported with transformers? Does vLLM work? Why isn't it supported in the latest version?

@948024326


Both are supported. If a lower version still doesn't support the qwen2-vl series, ask the author @qinxuye for help.

@cqray1990
Author


I have tried 0.16.3; same error.

@948024326


This is my result; if yours doesn't match, there may be something wrong in your setup.

[screenshots]

@qinxuye
Contributor

qinxuye commented Jan 3, 2025

vLLM should be greater than 0.6.3; please check the version.

@cqray1990
Author

cqray1990 commented Jan 3, 2025


My transformers version is 4.47; does that work? transformers 4.47 supports qwen2-vl, so I installed that version.

@qinxuye
Contributor

qinxuye commented Jan 3, 2025


transformers>=4.45.0 is ok.
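
A quick way to confirm the versions mentioned above (vLLM greater than 0.6.3, transformers at least 4.45.0) in the running environment; a small sketch, nothing xinference-specific:

    import vllm
    import transformers

    # Per the comments above: vllm must be greater than 0.6.3 and
    # transformers at least 4.45.0 for the qwen2-vl family.
    print("vllm:", vllm.__version__)
    print("transformers:", transformers.__version__)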

@cqray1990
Author

cqray1990 commented Jan 3, 2025


@qinxuye But when I register Qwen2-vl-72B as a custom model,

[screenshot]

the chat template test fails. I tried to ignore it and run the model anyway, but it errors when I test it.

@qinxuye
Contributor

qinxuye commented Jan 3, 2025

If you inherit from the built-in model family, you don't need to provide a chat template.
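
For reference, a rough sketch of registering a custom model that inherits the built-in qwen2-vl-instruct family, so the chat template comes from the built-in definition rather than being supplied by hand. The spec fields, the register_model call, and the local path are assumptions about xinference's custom-model schema and Python client, not a verified example:

    import json
    from xinference.client import Client

    # Hypothetical custom-model spec; "model_family" points at the built-in
    # qwen2-vl-instruct family so no chat template needs to be provided.
    spec = {
        "version": 1,
        "model_name": "Qwen2-vl-72B",
        "model_lang": ["en", "zh"],
        "model_ability": ["chat", "vision"],
        "model_family": "qwen2-vl-instruct",
        "model_specs": [
            {
                "model_format": "pytorch",
                "model_size_in_billions": 72,
                "quantizations": ["none"],
                "model_uri": "file:///path/to/Qwen2-VL-72B-Instruct",  # placeholder path
            }
        ],
    }

    client = Client("http://localhost:9997")
    client.register_model(model_type="LLM", model=json.dumps(spec), persist=True)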
