Commit cb13d73

server : docs fix default values and add n_probs (#3506)

1 parent 9ca79d5 commit cb13d73

File tree

1 file changed: +4 -2 lines changed

examples/server/README.md

Lines changed: 4 additions & 2 deletions
@@ -114,9 +114,9 @@ node index.js
 
     `top_k`: Limit the next token selection to the K most probable tokens (default: 40).
 
-    `top_p`: Limit the next token selection to a subset of tokens with a cumulative probability above a threshold P (default: 0.9).
+    `top_p`: Limit the next token selection to a subset of tokens with a cumulative probability above a threshold P (default: 0.95).
 
-    `n_predict`: Set the number of tokens to predict when generating text. **Note:** May exceed the set limit slightly if the last token is a partial multibyte character. When 0, no tokens will be generated but the prompt is evaluated into the cache. (default: 128, -1 = infinity).
+    `n_predict`: Set the number of tokens to predict when generating text. **Note:** May exceed the set limit slightly if the last token is a partial multibyte character. When 0, no tokens will be generated but the prompt is evaluated into the cache. (default: -1, -1 = infinity).
 
     `n_keep`: Specify the number of tokens from the initial prompt to retain when the model resets its internal context.
     By default, this value is set to 0 (meaning no tokens are kept). Use `-1` to retain all tokens from the initial prompt.
@@ -156,6 +156,8 @@ node index.js
 
     `logit_bias`: Modify the likelihood of a token appearing in the generated text completion. For example, use `"logit_bias": [[15043,1.0]]` to increase the likelihood of the token 'Hello', or `"logit_bias": [[15043,-1.0]]` to decrease its likelihood. Setting the value to false, `"logit_bias": [[15043,false]]` ensures that the token `Hello` is never produced (default: []).
 
+    `n_probs`: If greater than 0, the response also contains the probabilities of top N tokens for each generated token (default: 0)
+
 - **POST** `/tokenize`: Tokenize a given text.
 
     *Options:*
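The parameters touched by this commit can be combined in a single request body. Below is a minimal sketch of such a body for the server's `/completion` endpoint; the prompt text and the specific values chosen (64 tokens, top-5 probabilities, the bias on token 15043) are illustrative assumptions, not part of the commit.

```python
import json

# Sketch of a /completion request body using the parameters documented above.
# Sending it would require a running llama.cpp server; here we only build
# and serialize the payload.
payload = {
    "prompt": "Building a website can be done in 10 simple steps:",
    "n_predict": 64,                # cap generation; the corrected default is -1 (infinity)
    "top_p": 0.95,                  # matches the corrected default
    "n_probs": 5,                   # request top-5 token probabilities per generated token
    "logit_bias": [[15043, -1.0]],  # make token 15043 ('Hello') less likely
}

body = json.dumps(payload)
print(body)
```

With `n_probs` left at its default of 0, the probability information is simply omitted from the response, so existing clients are unaffected.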

0 commit comments

Comments
 (0)
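To illustrate what setting `n_probs` greater than 0 adds to the response, here is a sketch that reads per-token probabilities from a completion reply. The sample response and its field names (`completion_probabilities`, `probs`, `tok_str`, `prob`) are assumptions for illustration only; they are not specified by this commit.

```python
import json

# Hypothetical /completion response snippet with n_probs > 0.
# The field layout below is an assumption, not taken from the commit.
response_text = json.dumps({
    "content": " Hello",
    "completion_probabilities": [
        {
            "content": " Hello",
            "probs": [
                {"tok_str": " Hello", "prob": 0.61},
                {"tok_str": " Hi", "prob": 0.22},
            ],
        }
    ],
})

response = json.loads(response_text)
# For each generated token, report the most probable candidate.
for token in response.get("completion_probabilities", []):
    top = max(token["probs"], key=lambda p: p["prob"])
    print(token["content"], "->", top["tok_str"], top["prob"])
```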