ggml : riscv: add xtheadvector support #13720

Merged: 2 commits merged into ggml-org:master on May 27, 2025

Conversation

xctan
Contributor

@xctan xctan commented May 23, 2025

This PR builds upon #12530 to introduce k-quant support for the older RVV v0.7.1 implementation (xtheadvector).

Additionally, it updates zfh extension detection to use the built-in compiler macro, eliminating the need for an extra definition.
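
As a quick sanity check that the toolchain really exposes such a built-in macro, one can dump the compiler's predefined macros for a zfh-enabled target; the cross-compiler name and -march string below are illustrative and depend on the toolchain in use.

```sh
# Print the compiler's built-in macros for a Zfh-enabled target and filter for zfh;
# a capable toolchain should list a RISC-V extension test macro for it.
riscv64-unknown-linux-gnu-gcc -march=rv64gc_zfh -E -dM - </dev/null | grep -i zfh
```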

Evaluation

Build instructions

mkdir build
cd build
cmake .. -DGGML_RVV=1 -DGGML_XTHEADVECTOR=1 -DGGML_RV_ZFH=0
make -j$(nproc)

Verification

Test model: gemma-3-4b-it-GGUF, Q4_K_M quantization. The results of llama-perplexity are:

| #12530 (rvv 1.0)    | this PR (xtheadvector) |
| ------------------- | ---------------------- |
| 16.7120 +/- 0.16651 | 16.7120 +/- 0.16651    |
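
For reference, these values can be reproduced with the llama-perplexity binary produced by the build above; the model and text-corpus paths below are placeholders.

```sh
# Run perplexity evaluation on the Q4_K_M model against a reference corpus
# (paths are illustrative, adjust to your local files).
./bin/llama-perplexity -m gemma-3-4b-it-Q4_K_M.gguf -f wikitext-2-raw/wiki.test.raw -t 32
```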

Performance

Using the same model as above on an SG2042.

| model                   | size     | params | backend | threads | test  | t/s          | note         |
| ----------------------- | -------- | ------ | ------- | ------- | ----- | ------------ | ------------ |
| gemma3 4B Q4_K - Medium | 2.31 GiB | 3.88 B | CPU     | 32      | pp512 | 15.73 ± 0.14 | xtheadvector |
| gemma3 4B Q4_K - Medium | 2.31 GiB | 3.88 B | CPU     | 32      | pp512 | 3.35 ± 0.00  | scalar       |
| gemma3 4B Q4_K - Medium | 2.31 GiB | 3.88 B | CPU     | 32      | tg128 | 5.15 ± 0.00  | xtheadvector |
| gemma3 4B Q4_K - Medium | 2.31 GiB | 3.88 B | CPU     | 32      | tg128 | 2.44 ± 0.00  | scalar       |
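
The table follows the llama-bench output format, so a run along the following lines should reproduce the pp512/tg128 numbers (model path is a placeholder).

```sh
# Benchmark prompt processing (512 tokens) and text generation (128 tokens) with 32 threads.
./bin/llama-bench -m gemma-3-4b-it-Q4_K_M.gguf -t 32 -p 512 -n 128
```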

@github-actions github-actions bot added the ggml label (changes relating to the ggml tensor library for machine learning) on May 23, 2025
@xctan
Contributor Author

xctan commented May 27, 2025

@ggerganov Gentle ping on this for review when you get a chance.

Member

@ggerganov ggerganov left a comment

We should figure out a way to rework these ifdef branches so the code is more readable. Not for this PR, just a general note that it is something we should do.

@xctan
Contributor Author

xctan commented May 27, 2025

> We should figure out a way to rework these ifdef branches so the code is more readable. Not for this PR, just a general note that it is something we should do.

I plan to split the arch-dependent implementations in ggml-cpu-aarch64.cpp and ggml-cpu-quants.c into separate files after this PR is merged.

@ggerganov ggerganov merged commit 05f6ac6 into ggml-org:master May 27, 2025
46 checks passed
@ggerganov
Member

> We should figure out a way to rework these ifdef branches so the code is more readable. Not for this PR, just a general note that it is something we should do.

> I plan to split the arch-dependent implementations in ggml-cpu-aarch64.cpp and ggml-cpu-quants.c into separate files after this PR is merged.

Let me first sync llama.cpp <-> ggml <-> whisper.cpp before starting on this, to avoid conflicts later.

Btw, I wonder if we should first start with renaming the "aarch64" misnomer in the codebase. The code in ggml-cpu-aarch64.cpp started as an optimization specific to AARCH64, but it is no longer the case because the alternative data packings can be used by other architectures. So at some point we should fix the name - better to do it earlier. @slaren Can you confirm this is the case and if it is a good idea to do this now?

@slaren
Member

slaren commented May 27, 2025

Yes, I think it would be a good idea to rename it to something like ggml-cpu-repack.cpp. We should also normalize the names of the files, the ggml-cpu- prefix should probably be removed as well.

@ggerganov
Member

@xctan Sync is complete. We could use some help with reorganizing the source tree, so feel free to help out. About the aarch64 rename - I think it is best to start with this. Note that there are also variables such as GGML_CPU_AARCH64 that would need to be renamed as well.
