
Continue "Fix this Code" / "Optimize this Code" does not work with vLLM #457

Open

lukehinds opened this issue Dec 27, 2024 · 1 comment

@lukehinds (Contributor)

Describe the issue

I believe this is an upstream issue and there is not much we can do about it, but tracking it here anyhow.

continuedev/continue#3021

When using vLLM, none of the features mentioned in the title work; a failure is shown instead.

[Screenshot: failure shown in Continue]

Steps to Reproduce

Use a vLLM provider in codegate and attempt to use Fix / Optimize in Continue (see the config sketch below).
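
For reference, a minimal Continue model entry that triggers the failure might look like the following; the apiBase value is a placeholder and should point at the codegate endpoint fronting vLLM:

{
  "title": "Qwen2.5-Coder-14B-Instruct",
  "provider": "vllm",
  "model": "Qwen/Qwen2.5-Coder-14B-Instruct",
  "apiBase": "<codegate_endpoint>"
}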

Operating System

MacOS (Arm)

IDE and Version

Version: 1.96.2 (Universal)

Extension and Version

0.9.245

Provider

vLLM

Model

Qwen/Qwen2.5-Coder-14B-Instruct

Logs

No response

Additional Context

No response

@aponcedeleonch (Contributor)

Just FYI, the workaround mentioned in continuedev/continue#2388 worked the last time I checked. Basically, it is changing the provider to openai instead of vllm:

{
  "title": "Qwen2.5-Coder-7b-Instruct",
  "provider": "openai",
  "model": "Qwen2.5-Coder-7B-Instruct",
  "apiBase": "<some_api>"
}
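
This entry goes in the models array of Continue's config.json (typically ~/.continue/config.json). The openai provider works here because vLLM serves an OpenAI-compatible API, so Continue can talk to the same apiBase through its generic OpenAI client.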

yrobla assigned and then unassigned yrobla on Jan 2, 2025