
Commit 2811014

iamlemec and abetlen authored
feat: Switch embed to llama_get_embeddings_seq (#1263)
* Switch to llama_get_embeddings_seq
* Remove duplicate definition of llama_get_embeddings_seq

Co-authored-by: Andrei <abetlen@gmail.com>
1 parent 40c6b54 commit 2811014

File tree

1 file changed

+1
-1
lines changed


llama_cpp/llama.py

Lines changed: 1 addition & 1 deletion
@@ -814,7 +814,7 @@ def decode_batch(n_seq: int):
 
             # store embeddings
             for i in range(n_seq):
-                embedding: List[float] = llama_cpp.llama_get_embeddings_ith(
+                embedding: List[float] = llama_cpp.llama_get_embeddings_seq(
                     self._ctx.ctx, i
                 )[:n_embd]
                 if normalize:
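For context: in the llama.cpp C API, llama_get_embeddings_ith indexes embeddings by output position, while llama_get_embeddings_seq returns the pooled embedding for a sequence id, which is what this batched-embedding loop wants. The `if normalize:` branch that follows the changed lines then typically L2-normalizes each embedding. A minimal standalone sketch of that normalization step (the helper name here is ours, not from the diff):

```python
import math
from typing import List

def normalize_embedding(embedding: List[float]) -> List[float]:
    # L2-normalize: divide each component by the vector's Euclidean norm.
    norm = math.sqrt(sum(x * x for x in embedding))
    if norm == 0.0:
        # Leave an all-zero vector unchanged to avoid division by zero.
        return embedding[:]
    return [x / norm for x in embedding]
```

Normalized embeddings have unit length, so the dot product of two of them equals their cosine similarity, which is the usual reason an `embed` API offers a `normalize` flag.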

0 commit comments
