Closed
Description
Describe the solution you'd like
I would like to be able to install llama-cpp-python without building llama.cpp, and instead simply set a variable pointing to the folder of an already built llama.cpp.
Describe alternatives you've considered
I tried building llama.cpp the way I want (on Windows and with cuBLAS), but it fails when done through llama-cpp-python, although it works fine when building llama.cpp standalone.
Additional context
Even installing llama-cpp-python the basic CPU-only way would be OK if it were easy to swap in a different llama.cpp build afterwards.
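To illustrate the request, here is a minimal sketch of what "swap the build afterwards" could look like. This is not llama-cpp-python's actual mechanism: the environment variable name `LLAMA_CPP_LIB` and the helper function are assumptions for illustration only.

```python
import ctypes
import os

def load_prebuilt_llama(lib_path: str) -> ctypes.CDLL:
    """Hypothetical helper: load an already-built llama.cpp shared library
    (e.g. build/libllama.so on Linux, build/bin/llama.dll on Windows)
    instead of having pip compile it from source."""
    # Assumed environment variable name for overriding the bundled library.
    path = os.environ.get("LLAMA_CPP_LIB", lib_path)
    if not os.path.exists(path):
        raise FileNotFoundError(f"prebuilt llama.cpp library not found: {path}")
    return ctypes.CDLL(path)
```

With something like this, a CPU-only pip install could be repointed at a standalone cuBLAS build without reinstalling the Python package.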