
When will w8a8_int8 quantization be supported for llava-v1.6 models? #990

Closed
wuyu1028 opened this issue Dec 18, 2024 · 1 comment · Fixed by #914
Labels: enhancement (New feature or request)

Comments

@wuyu1028

Is there any plan for llava-v1.6 models to be supported by w8a8_int8 quantization?

wuyu1028 added the enhancement label on Dec 18, 2024
@kylesayrs (Collaborator)

Hi @wuyu1028,

You can check out the kylesayrs/gptq-partition branch of llm-compressor together with the main branch of compressed-tensors. These changes add support for quantizing multimodal vision models and have been tested with llava-1.5-7b-hf.

These changes will be made available with the next llm-compressor release.
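For anyone who wants to try this before the release, below is a minimal sketch of what one-shot W8A8 (int8 weight, int8 activation) quantization could look like on those branches, using llm-compressor's oneshot entry point with a GPTQModifier. The ignore patterns, calibration dataset, and output directory are illustrative assumptions, not a confirmed recipe from the branch:

```python
# Minimal sketch, assuming the oneshot API and GPTQModifier as exposed on
# the branches above; the dataset, ignore patterns, and output path are
# illustrative, not confirmed against the branch.
from transformers import AutoProcessor, LlavaForConditionalGeneration

from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot

MODEL_ID = "llava-hf/llava-1.5-7b-hf"  # the checkpoint the changes were tested with

model = LlavaForConditionalGeneration.from_pretrained(
    MODEL_ID, device_map="auto", torch_dtype="auto"
)
processor = AutoProcessor.from_pretrained(MODEL_ID)

# W8A8: int8 weights and int8 activations, applied via GPTQ to every Linear
# layer while skipping the language head and the vision-side modules
# (the module-name patterns here are assumptions about the llava architecture).
recipe = GPTQModifier(
    targets="Linear",
    scheme="W8A8",
    ignore=["re:.*lm_head", "re:vision_tower.*", "re:multi_modal_projector.*"],
)

# One-shot calibration + quantization. A text-only calibration set is used
# here for simplicity; multimodal calibration data may be preferable.
oneshot(
    model=model,
    dataset="open_platypus",
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=512,
    output_dir="llava-1.5-7b-hf-W8A8",
)
```

Note the design choice in the ignore list: only the language-model Linear layers are quantized, while the vision tower and projector stay in higher precision, which is the usual practice when quantizing multimodal models.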
