Releases: LostRuins/koboldcpp
CUDA 11 Cublas Libraries
This release is NOT a proper koboldcpp build!
It only contains the CUDA11 CuBLAS libraries to be packaged with KoboldCpp pyinstallers, intended for CI usage.
If you're looking for KoboldCpp, please get it from here: https://github.com/LostRuins/koboldcpp/releases/latest
koboldcpp-1.61.2
Finally multimodal edition
- NEW: KoboldCpp now supports Vision via Multimodal Projectors (aka LLaVA), allowing it to perceive and react to images! Load a suitable `--mmproj` file or select it in the GUI launcher to use vision capabilities. (Not working on Vulkan)
- Note: This is NOT limited to only LLaVA models; any compatible model of the same size and architecture can gain vision capabilities!
- Simply grab a 200mb mmproj file for your architecture here, load it with `--mmproj` and stick it into your favorite compatible model, and it will be able to see images as well!
- KoboldCpp supports passing up to 4 images, each one will consume about 600 tokens of context (LLaVA 1.5). Additionally, KoboldCpp token fast-forwarding and context-shifting work with images seamlessly, so you only need to process each image once!
- A compatible OpenAI GPT-4V API endpoint is emulated, so GPT-4-Vision applications should work out of the box (e.g. for SillyTavern in Chat Completions mode, just enable it). For the Kobold API and OpenAI Text-Completions API, passing an array of base64 encoded `images` in the submit payload will work as well (planned Aphrodite compatible format); see the example request after this list.
- An A1111 compatible `/sdapi/v1/interrogate` endpoint is also emulated, allowing easy captioning for other image-interrogation frontends.
- In Kobold Lite, click any image to select from available AI Vision options.
- NEW: Support for authentication via API Keys has been added, set it with `--password`. This key will be required for all text generation endpoints, using `Bearer` Authorization. Image endpoints are not secured.
- Proper support for generating non-square images, scaling correctly based on aspect ratio
- `--benchmark` limit increased to 16k context
- Added aliases for the image sampler names for txt2img generation.
- Added the `clamped` option for `--sdconfig`, which prevents generating too-large resolutions that could otherwise crash due to OOM.
- Pulled and merged improvements and fixes from upstream
- Includes support for mamba models (CPU only). Note: mamba does not support context shifting.
- Updated Kobold Lite:
- Added better support for displaying larger images, added support for generating portrait and landscape aspect ratios
- Increased max image resolution in HD mode, allow downloading non-square images properly
- Added ability to choose image samplers for image generation
- Added ability to upload images to KoboldCpp for LLaVA usage, with 4 selectable "AI Vision" modes
- Allow inserting images from files even when no image generation backend is selected
- Added support for password input and using API keys over KoboldAI API
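As a rough illustration of the new vision support and API-key authentication, here is a minimal Python sketch that sends a prompt plus one base64 encoded image to a local KoboldCpp instance started with `--mmproj` and `--password`. The `/api/v1/generate` path, the `results[0].text` response shape, and the key value are assumptions based on the standard KoboldAI API rather than something specified in these notes; the `images` field and `Bearer` authorization are as described above.

```python
# Hypothetical sketch: Kobold API request with a base64 image and an API key.
# Endpoint path and response shape are assumed from the standard KoboldAI API.
import base64
import json
import urllib.request

API_KEY = "mysecretkey"  # whatever value you passed to --password (assumed)
URL = "http://localhost:5001/api/v1/generate"

with open("photo.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "prompt": "Describe the attached image.",
    "max_length": 120,
    "images": [image_b64],  # up to 4 images, ~600 tokens of context each (LLaVA 1.5)
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",  # required once --password is set
    },
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["results"][0]["text"])
```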
Fix 1.61.1 - Fixed mamba (removed broken context shifting), merged other fixes from upstream, support uploading non-square images.
Fix 1.61.2 - Added new launch flag `--ignoremissing`, which deliberately ignores any optional missing files that were passed in (e.g. `--lora`, `--mmproj`), skipping them instead of exiting. Also added pasting images from the clipboard in Lite.
To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here
Run it from the command line with the desired launch parameters (see `--help`), or manually select the model in the GUI, and then once loaded, you can connect like this (or use the full koboldai client): http://localhost:5001
For more information, be sure to run the program from the command line with the `--help` flag.
koboldcpp-1.60.1
KoboldCpp is just a 'Dirty Fork' edition
- KoboldCpp now natively supports Local Image Generation, thanks to the phenomenal work done by @leejet in stable-diffusion.cpp! It provides an A1111 compatible `txt2img` endpoint which you can use within the embedded Kobold Lite, or in many other compatible frontends such as SillyTavern (see the example request after this list).
- Just select a compatible SD1.5 or SDXL `.safetensors` fp16 model to load, either through the GUI launcher or with `--sdconfig`
- Enjoy zero install, portable, lightweight and hassle free image generation directly from KoboldCpp, without installing multi-GBs worth of ComfyUi, A1111, Fooocus or others.
- With just an 8GB VRAM GPU, you can run both a 7B q4 GGUF (lowvram) alongside any SD1.5 image model at the same time, as a single instance, fully offloaded. If you run out of VRAM, select `Compress Weights (quant)` to quantize the image model so it takes less memory.
- KoboldCpp allows you to run in text-gen-only, image-gen-only or hybrid modes; simply set the appropriate launcher configs.
- Known to not work correctly in Vulkan (for now).
- When running from the command line, `--contextsize` can now be set to any arbitrary number in range instead of being locked to fixed values. However, using a non-recommended value may result in incoherent output depending on your settings. The GUI launcher for this remains unchanged.
- Added new quant types, pulled and merged improvements and fixes from upstream.
- Fixed some issues loading older GGUFv1 models, they should be working again.
- Added cloudflare tunnel support for macOS (via `--remotetunnel`; however, it probably won't work on M1, only amd64).
- Updated API docs and Colab for image gen.
- Updated Kobold Lite:
- Integrated support for AllTalk TTS
- Added "Auto Jailbreak" for instruct mode, useful to wrangle stubborn or censored models.
- Auto enable image gen button if KCPP loads image model
- Improved Autoscroll and layout, defaults to SSE streaming mode
- Added option to import and export story via clipboard
- Added option to set personal notes/comments in story
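As a rough sketch of how the A1111 compatible endpoint above might be called, here is a minimal Python example that requests an image and saves the first result. The `/sdapi/v1/txt2img` path, parameter names, and base64 `images` response field are assumptions based on the A1111 API this endpoint emulates; check KoboldCpp's bundled API docs for the exact schema.

```python
# Hypothetical sketch: A1111-style txt2img request to a KoboldCpp instance
# loaded with --sdconfig. Path and fields are assumed from the A1111 API.
import base64
import json
import urllib.request

payload = {
    "prompt": "a watercolor painting of a lighthouse at dusk",
    "width": 512,
    "height": 512,
    "steps": 20,
    "cfg_scale": 7,
}

req = urllib.request.Request(
    "http://localhost:5001/sdapi/v1/txt2img",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# A1111-style responses return base64 encoded PNGs in an "images" list (assumed).
with open("out.png", "wb") as f:
    f.write(base64.b64decode(result["images"][0]))
```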
Update v1.60.1: Ported fix for CVE-2024-21836 for GGUFv1, enabled the LCM sampler, allowed loading gguf SD models, fixed SD for Metal.
To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here
Run it from the command line with the desired launch parameters (see `--help`), or manually select the model in the GUI, and then once loaded, you can connect like this (or use the full koboldai client): http://localhost:5001
For more information, be sure to run the program from the command line with the `--help` flag.
koboldcpp-1.59.1
This is mostly a bugfix release to resolve multiple minor issues.
- Added `--nocertify` mode, which allows you to disable SSL certificate checking on your embedded Horde worker. This can help bypass some SSL certificate errors.
- Fixed pre-gguf models loading with incorrect thread counts. This issue affected the past 2 versions.
- Added build target for Old CPU (NoAVX2) Vulkan support.
- Fixed cloudflare remotetunnel URLs not displaying on runpod.
- Reverted CLBlast back to 1.6.0, pending CNugteren/CLBlast#533 and other correctness fixes.
- Smartcontext toggle is now hidden when contextshift toggle is on.
- Various improvements and bugfixes merged from upstream, which includes google gemma support.
- Bugfixes and updates for Kobold Lite
Fix for 1.59.1: Changed makefile build flags, fixed tooltips, merged IQ3_S support.
To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here
Run it from the command line with the desired launch parameters (see `--help`), or manually select the model in the GUI, and then once loaded, you can connect like this (or use the full koboldai client): http://localhost:5001
For more information, be sure to run the program from the command line with the `--help` flag.
koboldcpp-1.58
- Added a toggle for row split mode with CUDA multigpu. Split mode now defaults to layer split. If using the command line, add `rowsplit` to `--usecublas` to enable row split mode. With the GUI launcher, it's a checkbox toggle.
- Multiple bugfixes: fixed benchmark command, fixed SSL streaming issues, fixed some SSE formatting with OAI endpoints.
- Make context shifting more forgiving when determining eligibility.
- Upgraded CLBlast to latest version, should result in a modest prompt processing speedup when using CL.
- Various improvements and bugfixes merged from upstream.
- Updated Kobold Lite with many improvements and new features:
- New: Integrated 'AI Vision' for images, this uses AI Horde or a local A1111 endpoint to perform image interrogation, allowing the AI to recognize and interpret uploaded or generated images. This should provide an option for multimodality similar to llava, although not as precise. Click on any image and you can enable it within Lite. This functionality is not provided by KCPP itself.
- New: Importing characters from Pygmalion.Chat is now supported in Lite, select it from scenarios.
- Added option to run Lite in the background. It plays a dynamically generated silent audio track, which should prevent the browser tab from hibernating.
- Fixed printable view, persist streaming text on error, fixed instruct timestamps
- Added "Auto" option for idle responses.
- Allow importing images into story from local disk
- Multiple minor formatting and bug fixes.
To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here
Run it from the command line with the desired launch parameters (see `--help`), or manually select the model in the GUI, and then once loaded, you can connect like this (or use the full koboldai client): http://localhost:5001
For more information, be sure to run the program from the command line with the `--help` flag.
koboldcpp-1.57.1
- Added a benchmarking feature with `--benchmark`, which automatically runs a benchmark with your provided settings, outputting run parameters, timing and speed information as well as testing for coherence, and exiting on completion. You can provide a filename, e.g. `--benchmark result.csv`, and it will append CSV formatted data to that file.
- Added temperature Quad-Sampling (set via API with parameter `smoothing_factor`), PR from @AAbushady (credits @kalomaze); see the example after this list.
- Improved timing displays. Also displays the seed used, and shows llama.cpp styled timings when run in `--debugmode`. The timings will appear faster as they do not include overheads, measuring only specific eval functions.
- Improved abort generation behavior (allows a second user abort while in queue)
- Vulkan enhancements from @0cc4m merged: APU memory handling and multigpu. To use multigpu, you can now specify additional IDs, for example `--usevulkan 0 2 3`, which will use GPUs with IDs `0`, `2`, and `3`. Allocation is determined by `--tensor_split`. Multigpu for Vulkan is currently configurable via command line only; the GUI launcher does not allow selecting multiple devices for Vulkan.
- Various improvements and bugfixes merged from upstream.
- Updated Kobold Lite with many improvements and new features:
- NEW: The Aesthetic UI is now available for Story and Adventure modes as well!
- Added "AI Impersonate" feature for Instruct mode.
- Smoothing factor added, can be configured in dynamic temperature panel.
- Added a toggle to enable printable view (unlock vertical scrolling).
- Added a toggle to inject timestamps, allowing the AI to be aware of time passing.
- Persist API info for A1111 and XTTS, allows specifying custom negative prompts for image gen, allows specifying custom horde keys in KCPP mode.
- Fixes for XTTS to handle devices with over 100 voices, and also adds an option to narrate dialogue only.
- Toggle to request A1111 backend to save generated images to disk.
- Fix for chub.ai card fetching.
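As a minimal sketch of the new sampler parameter above, here is a hypothetical generate payload that adds `smoothing_factor` to the usual settings. The surrounding field names and the chosen value are assumptions based on the standard KoboldAI API; only `smoothing_factor` itself is the parameter introduced in this release.

```python
# Hypothetical payload fragment enabling temperature Quad-Sampling via
# smoothing_factor; the other fields and the value 0.3 are assumptions.
import json

payload = {
    "prompt": "Once upon a time,",
    "max_length": 80,
    "temperature": 0.8,
    "smoothing_factor": 0.3,  # quad-sampling strength (illustrative value)
}
print(json.dumps(payload, indent=2))
```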
Hotfix 1.57.1: Fixed some crashes and fixed multigpu for Vulkan.
To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here
Run it from the command line with the desired launch parameters (see `--help`), or manually select the model in the GUI, and then once loaded, you can connect like this (or use the full koboldai client): http://localhost:5001
For more information, be sure to run the program from the command line with the `--help` flag.
koboldcpp-1.56
- NEW: Added early support for the new Vulkan GPU backend by @0cc4m. You can try it out with the command `--usevulkan (gpu id)` or via the GUI launcher. Now included with the Windows and Linux prebuilt binaries. (Note: Mixtral on Vulkan not fully supported)
- Updated and merged the new GGML backend rework from upstream. This update includes many extensive fixes, improvements and changes across over a hundred commits. Support for earlier non-gguf models has been preserved via a fossilized earlier version of the library. Please open an issue if you encounter problems. The Wiki and Readme have been updated too.
- Added support for setting `dynatemp_exponent`, which previously defaulted to 1.0. Support added over the API and in Lite.
- Fixed issues with Linux CUDA on Pascal, added more flags to handle conda and colab builds correctly.
- Added support for Old CPU fallbacks (NoAVX2 and Failsafe modes) in build targets in the Linux prebuilt binary (and koboldcpp.sh)
- Added missing 48k context option, fixed clearing file selection, better abort handling support, fixed aarch64 termux builds, various other fixes.
- Updated Kobold Lite with many improvements and new features:
- NEW: Added XTTS API Server support (Local AI powered text-to-speech).
- Added option to let AI impersonate you for a turn in a chat.
- HD image generation options.
- Added popup-on-complete browser notification options.
- Improved DynaTemp wizard, added options to set exponent
- Bugfixes, padding adjustments, A1111 parameter fixes, image color fixes for invert color mode.
To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here
Run it from the command line with the desired launch parameters (see `--help`), or manually select the model in the GUI, and then once loaded, you can connect like this (or use the full koboldai client): http://localhost:5001
For more information, be sure to run the program from the command line with the `--help` flag.
koboldcpp-1.55.1
- Added Dynamic Temperature (DynaTemp), which is specified by a Temperature Value and a Temperature Range (Credits: @kalomaze). When used, the actual temperature is allowed to be automatically adjusted dynamically between DynaTemp ± DynaTempRange. For example, setting `temperature=0.4` and `dynatemp_range=0.1` will result in a minimum temp of 0.3 and a max of 0.5 (see the sketch after this list). For ease of use, a UI to select min and max temperature for dynatemp directly is also provided in Lite. Both inputs will work and auto update the other.
- Try to reuse the cloudflared file when running the remote tunnel, but also handle the case where cloudflared fails to download correctly.
- Added a field to show the most recently used seed in the perf endpoint
- Switched cuda pool malloc back to the old implementation
- Updated Lite, added support for DynaTemp
- Merged new improvements and fixes from upstream llama.cpp
- Various minor fixes.
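For clarity, here is a tiny illustrative helper mirroring the bounds described above (this is not KoboldCpp's internal implementation): the effective temperature is allowed to vary between `temperature - dynatemp_range` and `temperature + dynatemp_range`.

```python
# Hypothetical helper illustrating the DynaTemp bounds described in the notes.
def dynatemp_bounds(temperature: float, dynatemp_range: float) -> tuple:
    """Return the (min, max) temperatures DynaTemp may move between."""
    return (temperature - dynatemp_range, temperature + dynatemp_range)

# Matches the example above: temperature=0.4, dynatemp_range=0.1
# gives roughly (0.3, 0.5), up to floating point rounding.
print(dynatemp_bounds(0.4, 0.1))
```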
v1.55.1 - Trying to fix some cuda issues on Pascal cards. As I don't have a Pascal card I cannot verify - but try this if 1.55 didn't work.
To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here
Run it from the command line with the desired launch parameters (see `--help`), or manually select the model in the GUI, and then once loaded, you can connect like this (or use the full koboldai client): http://localhost:5001
For more information, be sure to run the program from the command line with the `--help` flag.
koboldcpp-1.54
welcome to 2024 edition
- Added `logit_bias` support (for both the OpenAI and Kobold APIs). Accepts a dictionary of key-value pairs, which indicate the token IDs (int) and logit bias (float) to apply for those tokens. The object format is the same as and compatible with the official OpenAI implementation, though token IDs are model specific (thanks @DebuggingLife46); see the example after this list.
- Updated Lite, added support for custom background images (thanks @Ar57m), and added customizable settings for stepcount and cfgscale for Horde/A1111 image generation.
- Added mouseover tooltips for all labels in the GUI launcher.
- Cleaned up and simplified the UI of the quick launch tab in the GUI launcher, some advanced options moved to other tabs.
- Bug fixes for garbled output in Termux with q5k Phi
- Fixed paged memory fallback when pinned memory alloc fails while not using mmap.
- Attempt to fix on-exit segfault on some Linux systems.
- Updated KAI United `class.py`, added new parameters.
- Makefile fix for Linux CI build using conda (thanks @henk717)
- Merged new improvements and fixes from upstream llama.cpp (includes VMM pool support)
- Included prebuilt binary for no-cuda Linux as well.
- Various minor fixes.
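To illustrate the `logit_bias` object format mentioned above, here is a hypothetical payload fragment. The token IDs are placeholders (IDs are model specific, so look them up with your model's tokenizer), and the surrounding field names are assumptions based on a typical generate request.

```python
# Hypothetical example of the logit_bias format: token ID -> bias value.
# The IDs below are placeholders and do not correspond to any real model.
import json

payload = {
    "prompt": "My favourite colour is",
    "max_length": 16,
    "logit_bias": {
        "3124": 5.0,     # placeholder token ID, made more likely
        "8921": -100.0,  # placeholder token ID, effectively banned (OpenAI convention)
    },
}
print(json.dumps(payload, indent=2))
```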
To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here
Run it from the command line with the desired launch parameters (see `--help`), or manually select the model in the GUI, and then once loaded, you can connect like this (or use the full koboldai client): http://localhost:5001
For more information, be sure to run the program from the command line with the `--help` flag.
koboldcpp-1.53
- Added support for SSL. You can now import your own SSL cert to use with KoboldCpp and serve it over HTTPS with `--ssl [cert.pem] [key.pem]` or via the GUI. The `.pem` files must be unencrypted. You can also generate them with OpenSSL, e.g. `openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 365 -config openssl.cnf -nodes`, for your own self-signed certificate.
- Added support for presence penalty (alternative rep pen) over the KAI API and in Lite. If Presence Penalty is set over the OpenAI API and `rep_pen` is not set, then `rep_pen` will default to 1.0 instead of 1.1. Both penalties can be used together, although this is probably not a good idea (see the example after this list).
- Added fixes for Broken Pipe error, thanks @mahou-shoujo.
- Added fixes for aborting ongoing connections while streaming in SillyTavern.
- Merged upstream support for Phi models and speedups for Mixtral
- The default non-blas batch size for GGUF models is now increased from 8 to 32.
- Merged HIPBlas fixes from @YellowRoseCx
- Fixed an issue with building convert tools in 1.52
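For illustration, here is a hypothetical OpenAI-style payload that sets only `presence_penalty`, in which case `rep_pen` falls back to 1.0 as described above. The field names around `presence_penalty` are assumptions based on the OpenAI completions format that KoboldCpp emulates.

```python
# Hypothetical OpenAI-style payload using presence_penalty only; per the notes,
# rep_pen then defaults to 1.0 rather than 1.1. Surrounding fields are assumed.
import json

payload = {
    "prompt": "List three uses for a paperclip:",
    "max_tokens": 60,
    "presence_penalty": 0.6,  # discourages tokens that have already appeared
    # "rep_pen" intentionally omitted, so it falls back to the 1.0 default
}
print(json.dumps(payload, indent=2))
```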
To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here
Run it from the command line with the desired launch parameters (see `--help`), or manually select the model in the GUI, and then once loaded, you can connect like this (or use the full koboldai client): http://localhost:5001
For more information, be sure to run the program from the command line with the `--help` flag.