Commit 9d2be06

compilade authored and Nexesenex committed

llama : fix qs.n_attention_wv for DeepSeek-V2 (ggml-org#9156)

1 parent: 1a6dabb

File tree: 1 file changed (+2, -1)

src/llama.cpp (2 additions, 1 deletion)

```diff
@@ -17902,7 +17902,8 @@ static void llama_model_quantize_internal(const std::string & fname_inp, const s
         // TODO: avoid hardcoded tensor names - use the TN_* constants
         if (name.find("attn_v.weight") != std::string::npos ||
-            name.find("attn_qkv.weight") != std::string::npos) {
+            name.find("attn_qkv.weight") != std::string::npos ||
+            name.find("attn_kv_b.weight")!= std::string::npos) {
             ++qs.n_attention_wv;
         } else if (name.find("attn_k.weight") != std::string::npos) {
             ++qs.n_attention_wk;
```
