Description
Expected Behavior
llama-cpp-python should update and build.
Current Behavior
I cannot build any version except 0.1.59 (which I only tried due to suggestions on a similar apparent bug in 0.1.60). I tried 0.2.26 down to 0.2.10 manually, one at a time, and none of them build. All fail at step 9 of 23.
For example, to install/update llama-cpp-python I use:
```
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python==0.2.17 --upgrade --force-reinstall --no-cache-dir
```
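As an aside: the failing log line below shows the host `/usr/bin/c++` compiling with `-DGGML_USE_HIPBLAS`, whereas llama.cpp's own HIPBLAS instructions build with ROCm's bundled clang. I have not verified that it fixes this build, but forcing those compilers would look like this (paths assume a default ROCm install under `/opt/rocm`):

```
# Untested variant: point the build at ROCm's clang/clang++ instead of the host g++.
CC=/opt/rocm/llvm/bin/clang CXX=/opt/rocm/llvm/bin/clang++ \
  CMAKE_ARGS="-DLLAMA_HIPBLAS=on" \
  pip install llama-cpp-python==0.2.17 --upgrade --force-reinstall --no-cache-dir
```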
I have tried several versions, including the current one. All fail on step 9 of 23:
```
[9/23] /usr/bin/c++ -DGGML_USE_CUBLAS -DGGML_USE_HIPBLAS -DLLAMA_BUILD -DLLAMA_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -D__HIP_PLATFORM_AMD__=1 -D__HIP_PLATFORM_HCC__=1 -Dllama_EXPORTS -I/tmp/pip-install-sbxror78/llama-cpp-python_fd8adbe6d6ce4076a81f5669a940df3d/vendor/llama.cpp/. -isystem /opt/rocm/include -isystem /opt/rocm-5.7.0/include -O3 -DNDEBUG -std=gnu++11 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wno-format-truncation -Wextra-semi -march=native -MD -MT vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -MF vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o.d -o vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -c /tmp/pip-install-sbxror78/llama-cpp-python_fd8adbe6d6ce4076a81f5669a940df3d/vendor/llama.cpp/llama.cpp
ninja: build stopped: subcommand failed.
*** CMake build failed
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
```
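The pip output above is truncated, so the actual compiler diagnostic from step 9 is missing. Rerunning with pip's verbose flag should capture it:

```
# Same install as above, but keep the full build log for the failing step.
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" \
  pip install llama-cpp-python --upgrade --force-reinstall --no-cache-dir -v 2>&1 | tee build.log
```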
Environment and Context
I have an AMD Radeon 6650M GPU with freshly installed AMD 5.7.00.48.50700 drivers (installed with AMD usecases: graphics,rocm) from:
amdgpu-install_5.7.00.48.50700-1_all.deb
The system is Pop!_OS 22.04 with all patches and updates as of 2024-01-05.
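For completeness: to my knowledge the Radeon 6650M is an RDNA2 part that reports as gfx1032, a target ROCm does not officially ship kernels for. The check and override below are assumptions on my part, not something taken from this build:

```
# Check which GPU target ROCm reports (assumption: the 6650M shows as gfx1032).
/opt/rocm/bin/rocminfo | grep -i gfx
# Common runtime workaround for unsupported RDNA2 targets (assumption, untested here):
export HSA_OVERRIDE_GFX_VERSION=10.3.0
```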
- Physical (or virtual) hardware you are using, e.g. for Linux:
```
lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 6800H with Radeon Graphics
CPU family: 25
Model: 68
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
CPU max MHz: 4785.0000
CPU min MHz: 400.0000
BogoMIPS: 6388.01
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
Virtualization features:
Virtualization: AMD-V
Caches (sum of all):
L1d: 256 KiB (8 instances)
L1i: 256 KiB (8 instances)
L2: 4 MiB (8 instances)
L3: 16 MiB (1 instance)
NUMA:
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerabilities:
Gather data sampling: Not affected
Itlb multihit: Not affected
L1tf: Not affected
Mds: Not affected
Meltdown: Not affected
Mmio stale data: Not affected
Retbleed: Not affected
Spec rstack overflow: Vulnerable: Safe RET, no microcode
Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Srbds: Not affected
Tsx async abort: Not affected
```
- Operating System, e.g. for Linux:
```
$ uname -a
Linux pop-os 6.6.6-76060606-generic #202312111032~1702306143~22.04~d28ffec SMP PREEMPT_DYNAMIC Mon D x86_64 x86_64 x86_64 GNU/Linux
```
- SDK version, e.g. for Linux:
```
$ python3 --version
Python 3.10.12
$ make --version
GNU Make 4.3
Built for x86_64-pc-linux-gnu
$ g++ --version
g++ (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
```
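Since the failing step compiles HIP code, the ROCm toolchain versions are probably relevant as well; I did not capture them above, but they would be reported by:

```
# Report the ROCm HIP compiler and its underlying clang version.
/opt/rocm/bin/hipcc --version
/opt/rocm/llvm/bin/clang++ --version
```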
Failure Information (for bugs)
Steps to Reproduce
- `CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python --upgrade --force-reinstall --no-cache-dir`
  This failed with the error above, at step 9.
- `CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python==0.1.59 --upgrade --force-reinstall --no-cache-dir`
  This builds, BUT it cannot use any current models in GGUF format due to incompatibility. I only tried this per another report of a similar issue with 0.1.60.
- Backed off versions from 0.2.26 to 0.2.10 using variations of `CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python==0.2.xx --upgrade --force-reinstall --no-cache-dir`
  These failed with the error above, at step 9. (A sketch of this sweep is shown below.)
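For reference, the manual version sweep could be scripted roughly as follows; I am assuming every patch release in that range exists on PyPI, which I have not checked:

```
# Try each 0.2.x release from newest to oldest until one builds.
for v in $(seq 26 -1 10); do
  echo "=== trying llama-cpp-python==0.2.$v ==="
  CMAKE_ARGS="-DLLAMA_HIPBLAS=on" \
    pip install "llama-cpp-python==0.2.$v" --upgrade --force-reinstall --no-cache-dir && break
done
```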
I also tried the following, which FAILS:
```
git clone https://github.com/abetlen/llama-cpp-python
cd llama-cpp-python
rm -rf _skbuild/  # delete any old builds
python -m pip install .
```
This generates the following error:
```
python -m pip install .
Defaulting to user installation because normal site-packages is not writeable
Processing /media/wind/llama-cpp-python
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: typing-extensions>=4.5.0 in /home/shannon/.local/lib/python3.10/site-packages (from llama_cpp_python==0.2.27) (4.9.0)
Requirement already satisfied: diskcache>=5.6.1 in /home/shannon/.local/lib/python3.10/site-packages (from llama_cpp_python==0.2.27) (5.6.3)
Requirement already satisfied: numpy>=1.20.0 in /home/shannon/.local/lib/python3.10/site-packages (from llama_cpp_python==0.2.27) (1.26.3)
Building wheels for collected packages: llama_cpp_python
Building wheel for llama_cpp_python (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for llama_cpp_python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [52 lines of output]
*** scikit-build-core 0.7.1 using CMake 3.28.1 (wheel)
*** Configuring CMake...
loading initial cache file /tmp/tmp4k5uypsv/build/CMakeInit.txt
-- The C compiler identification is GNU 11.4.0
-- The CXX compiler identification is GNU 11.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Error at CMakeLists.txt:20 (add_subdirectory):
The source directory
/media/wind/llama-cpp-python/vendor/llama.cpp
does not contain a CMakeLists.txt file.
CMake Error at CMakeLists.txt:21 (install):
install TARGETS given target "llama" which does not exist.
CMake Error at CMakeLists.txt:30 (install):
install TARGETS given target "llama" which does not exist.
CMake Error at CMakeLists.txt:50 (add_subdirectory):
add_subdirectory given source "vendor/llama.cpp/examples/llava" which is
not an existing directory.
CMake Error at CMakeLists.txt:51 (set_target_properties):
set_target_properties Can not find target to add properties to:
llava_shared
CMake Error at CMakeLists.txt:56 (install):
install TARGETS given target "llava_shared" which does not exist.
CMake Error at CMakeLists.txt:65 (install):
install TARGETS given target "llava_shared" which does not exist.
-- Configuring incomplete, errors occurred!
*** CMake configuration failed
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama_cpp_python
Failed to build llama_cpp_python
ERROR: Could not build wheels for llama_cpp_python, which is required to install pyproject.toml-based projects
```
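Looking at the errors, `vendor/llama.cpp` not containing a `CMakeLists.txt` suggests (my guess) that the vendored llama.cpp git submodule was never checked out by the plain `git clone` above. A clone that pulls it in would look like:

```
# Fetch the repository together with the vendored llama.cpp submodule.
git clone --recurse-submodules https://github.com/abetlen/llama-cpp-python
# Or, inside an existing checkout:
git submodule update --init --recursive
```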
- `cd ./vendor/llama.cpp`
- Follow llama.cpp's instructions to `cmake` llama.cpp. I did this and compiled using `make LLAMA_HIPBLAS=1`. The first run produced a missing math-header include error in the C++ sources; I installed libstdc++-12-dev and this error went away.
- Run llama.cpp's `./main` with the same arguments you previously passed to llama-cpp-python and see if you can reproduce the issue. If you can, log an issue with llama.cpp. I am not sure how to do this; if I run `./main` with various arguments I get the help output indicating a syntax error. (A typical invocation is sketched below.)
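For what it's worth, my understanding (the flags here are assumptions on my part, and the model path is a placeholder) is that a typical `./main` run looks like this:

```
# Placeholder model path; -p is the prompt, -n caps generated tokens,
# and -ngl offloads that many layers to the GPU.
./main -m ./models/your-model.gguf -p "Hello, world" -n 128 -ngl 32
```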