
Commit 86c526a

llama : update docs about llama_decode

ggml-ci

1 parent 6390125 commit 86c526a

File tree

1 file changed (+6 −3 lines)


include/llama.h

Lines changed: 6 additions & 3 deletions
@@ -943,9 +943,12 @@ extern "C" {
     // Requires KV cache.
     // For encode-decoder contexts, processes the batch using the decoder.
     // Positive return values does not mean a fatal error, but rather a warning.
-    //   0 - success
-    //   1 - could not find a KV slot for the batch (try reducing the size of the batch or increase the context)
-    // < 0 - error. the KV cache state is restored to the state before this call
+    // Upon non-zero return values, the KV cache state is restored to the state before this call
+    //    0 - success
+    //    1 - could not find a KV slot for the batch (try reducing the size of the batch or increase the context)
+    //    2 - aborted
+    //   -1 - invalid input batch
+    // < -1 - error
     LLAMA_API int32_t llama_decode(
             struct llama_context * ctx,
             struct llama_batch batch);
