- [x] I have searched all issues/PRs to ensure it has not already been reported or fixed.

Criteria
- [x] English interface (or at least English documentation)
- [x] Alternative, forked or prerelease version of an existing package
- [x] Fairly standard install (e.g. no elaborate pre/post install scripts)

Name
koboldcpp-cu12

Description
"A simple one-file way to run various GGML models like LLAMA, ALPACA, VICUNA. This version uses the newer CUDA 12 binaries. If you have a newer NVIDIA GPU and don't mind larger files, you may get increased speeds by using this new version."
Homepage
https://github.com/lostruins/koboldcpp
Download Link(s)
https://github.com/LostRuins/koboldcpp/releases/download/{latest-tag}/koboldcpp_cu12.exe
E.g.: https://github.com/LostRuins/koboldcpp/releases/download/v1.66/koboldcpp_cu12.exe
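For an autoupdating install, the `{latest-tag}` placeholder in the link template has to be resolved against the most recent release. A minimal sketch of how that could be done with the public GitHub releases API (the function names here are illustrative, not part of any existing tooling):

```python
import json
import urllib.request

def cu12_download_url(tag, repo="LostRuins/koboldcpp", asset="koboldcpp_cu12.exe"):
    """Fill the {latest-tag} placeholder in the download-link template."""
    return f"https://github.com/{repo}/releases/download/{tag}/{asset}"

def latest_tag(repo="LostRuins/koboldcpp"):
    """Look up the most recent release tag via the GitHub releases API."""
    api = f"https://api.github.com/repos/{repo}/releases/latest"
    with urllib.request.urlopen(api) as resp:
        return json.load(resp)["tag_name"]

# cu12_download_url("v1.66") reproduces the example link above.
```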
Some Indication of Popularity/Repute
- Over 3.9k stars
- 282 forks
Same regular install as the standard koboldcpp, just a new executable with newer CUDA binaries.

Corresponding package
extras/koboldcpp
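Since the install matches the standard package, a manifest for the new package could mirror the existing `extras/koboldcpp` one with the CUDA 12 asset swapped in. The sketch below is illustrative only: the version comes from the example link above, the hash is a placeholder to be computed from the actual release asset, and the exact fields should follow the existing manifest.

```json
{
    "version": "1.66",
    "description": "A simple one-file way to run various GGML models like LLAMA, ALPACA, VICUNA. This version uses the newer CUDA 12 binaries.",
    "homepage": "https://github.com/LostRuins/koboldcpp",
    "url": "https://github.com/LostRuins/koboldcpp/releases/download/v1.66/koboldcpp_cu12.exe",
    "hash": "<sha256 of koboldcpp_cu12.exe — placeholder>",
    "bin": "koboldcpp_cu12.exe",
    "checkver": "github",
    "autoupdate": {
        "url": "https://github.com/LostRuins/koboldcpp/releases/download/v$version/koboldcpp_cu12.exe"
    }
}
```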