Feature request / 功能建议
Title: Request that Xinference lower the minimum CUDA version stated in its Docker documentation from CUDA 12.4 to CUDA 12.2.

Reason:
Many users are still on older GPUs such as the 2080 Ti and 2060, which cannot be set up with CUDA 12.4. Lowering the deployment bar on the product side, even slightly, would bring in tens of thousands of users. Thank you!

美丽团 technical team
2024-12-28
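For illustration only, a minimal sketch of what the requested change might look like in the image build, assuming the official image starts from an NVIDIA CUDA runtime base (the actual Xinference Dockerfile and base image tag are not confirmed here):

```dockerfile
# Hypothetical sketch -- not the actual Xinference Dockerfile.
# Switching the base image from a CUDA 12.4 runtime to a CUDA 12.2 runtime
# would lower the minimum host driver requirement for older GPUs
# such as the 2080 Ti and 2060.
FROM nvidia/cuda:12.2.2-runtime-ubuntu22.04

# ... the rest of the build (Python, pip install of xinference, etc.)
# would stay the same, provided the bundled GPU wheels also support CUDA 12.2.
```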
Motivation / 动机
Motivation: our team's current GPUs cannot meet the stated requirement, which blocks deployment.
Your contribution / 您的贡献
My contribution: I have shared Xinference in many technical chat groups, and the project has gained 100+ stars as a result of that sharing.