Commit 42b5781

Auto-generated API code (#2714)
1 parent: 8174ba5

12 files changed, with 88 additions and 279 deletions.
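
Every doc example in this commit follows the same migration: the generic `client.inference.inference()` and `client.inference.streamInference()` calls, which selected the operation through a `task_type` parameter, are replaced by task-specific methods (`sparseEmbedding`, `textEmbedding`, `rerank`, `completion`, `streamCompletion`, `chatCompletionUnified`, `update`). A minimal before/after sketch of the pattern; the endpoint name `my-endpoint` and the inputs are illustrative placeholders, not taken from this commit:

```js
// Before: one generic method; the task is picked by a parameter.
const before = await client.inference.inference({
  task_type: "rerank",
  inference_id: "my-endpoint", // hypothetical endpoint name
  input: ["first document", "second document"],
  query: "example query",
});

// After: the task type is encoded in the method name,
// so the task_type parameter disappears.
const after = await client.inference.rerank({
  inference_id: "my-endpoint",
  input: ["first document", "second document"],
  query: "example query",
});
```

For chat completion, the request body additionally moves under a `chat_completion_request` key, as the diffs below show.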

docs/doc_examples/120fcf9f55128d6a81d5e87a9c235bbd.asciidoc

Lines changed: 10 additions & 9 deletions
@@ -3,16 +3,17 @@
 
 [source, js]
 ----
-const response = await client.inference.streamInference({
-  task_type: "chat_completion",
+const response = await client.inference.chatCompletionUnified({
   inference_id: "openai-completion",
-  model: "gpt-4o",
-  messages: [
-    {
-      role: "user",
-      content: "What is Elastic?",
-    },
-  ],
+  chat_completion_request: {
+    model: "gpt-4o",
+    messages: [
+      {
+        role: "user",
+        content: "What is Elastic?",
+      },
+    ],
+  },
 });
 console.log(response);
 ----

docs/doc_examples/13ecdf99114098c76b050397d9c3d4e6.asciidoc

Lines changed: 1 addition & 2 deletions
@@ -3,8 +3,7 @@
 
 [source, js]
 ----
-const response = await client.inference.inference({
-  task_type: "sparse_embedding",
+const response = await client.inference.sparseEmbedding({
   inference_id: "my-elser-model",
   input:
     "The sky above the port was the color of television tuned to a dead channel.",

docs/doc_examples/141ef0ebaa3b0772892b79b9bb85efb0.asciidoc

Lines changed: 2 additions & 3 deletions
@@ -3,9 +3,8 @@
 
 [source, js]
 ----
-const response = await client.inference.put({
-  task_type: "my-inference-endpoint",
-  inference_id: "_update",
+const response = await client.inference.update({
+  inference_id: "my-inference-endpoint",
   inference_config: {
     service_settings: {
       api_key: "<API_KEY>",

docs/doc_examples/45954b8aaedfed57012be8b6538b0a24.asciidoc

Lines changed: 30 additions & 29 deletions
@@ -3,42 +3,43 @@
 
 [source, js]
 ----
-const response = await client.inference.streamInference({
-  task_type: "chat_completion",
+const response = await client.inference.chatCompletionUnified({
   inference_id: "openai-completion",
-  messages: [
-    {
-      role: "user",
-      content: [
-        {
-          type: "text",
-          text: "What's the price of a scarf?",
+  chat_completion_request: {
+    messages: [
+      {
+        role: "user",
+        content: [
+          {
+            type: "text",
+            text: "What's the price of a scarf?",
+          },
+        ],
+      },
+    ],
+    tools: [
+      {
+        type: "function",
+        function: {
+          name: "get_current_price",
+          description: "Get the current price of a item",
+          parameters: {
+            type: "object",
+            properties: {
+              item: {
+                id: "123",
+              },
+            },
+          },
         },
-      ],
-    },
-  ],
-  tools: [
-    {
+      },
+    ],
+    tool_choice: {
       type: "function",
       function: {
         name: "get_current_price",
-        description: "Get the current price of a item",
-        parameters: {
-          type: "object",
-          properties: {
-            item: {
-              id: "123",
-            },
-          },
-        },
       },
     },
-  ],
-  tool_choice: {
-    type: "function",
-    function: {
-      name: "get_current_price",
-    },
   },
 });
 console.log(response);

docs/doc_examples/4b91ad7c9b44e07db4a4e81390f19ad3.asciidoc

Lines changed: 1 addition & 2 deletions
@@ -3,8 +3,7 @@
 
 [source, js]
 ----
-const response = await client.inference.streamInference({
-  task_type: "completion",
+const response = await client.inference.streamCompletion({
   inference_id: "openai-completion",
   input: "What is Elastic?",
 });

docs/doc_examples/7429b16221fe741fd31b0584786dd0b0.asciidoc

Lines changed: 1 addition & 2 deletions
@@ -3,8 +3,7 @@
 
 [source, js]
 ----
-const response = await client.inference.inference({
-  task_type: "text_embedding",
+const response = await client.inference.textEmbedding({
   inference_id: "my-cohere-endpoint",
   input:
     "The sky above the port was the color of television tuned to a dead channel.",

docs/doc_examples/82bb6c61dab959f4446dc5ecab7ecbdf.asciidoc

Lines changed: 23 additions & 22 deletions
@@ -3,30 +3,31 @@
 
 [source, js]
 ----
-const response = await client.inference.streamInference({
-  task_type: "chat_completion",
+const response = await client.inference.chatCompletionUnified({
   inference_id: "openai-completion",
-  messages: [
-    {
-      role: "assistant",
-      content: "Let's find out what the weather is",
-      tool_calls: [
-        {
-          id: "call_KcAjWtAww20AihPHphUh46Gd",
-          type: "function",
-          function: {
-            name: "get_current_weather",
-            arguments: '{"location":"Boston, MA"}',
+  chat_completion_request: {
+    messages: [
+      {
+        role: "assistant",
+        content: "Let's find out what the weather is",
+        tool_calls: [
+          {
+            id: "call_KcAjWtAww20AihPHphUh46Gd",
+            type: "function",
+            function: {
+              name: "get_current_weather",
+              arguments: '{"location":"Boston, MA"}',
+            },
           },
-        },
-      ],
-    },
-    {
-      role: "tool",
-      content: "The weather is cold",
-      tool_call_id: "call_KcAjWtAww20AihPHphUh46Gd",
-    },
-  ],
+        ],
+      },
+      {
+        role: "tool",
+        content: "The weather is cold",
+        tool_call_id: "call_KcAjWtAww20AihPHphUh46Gd",
+      },
+    ],
+  },
 });
 console.log(response);
 ----

docs/doc_examples/b45a8c6fc746e9c90fd181e69a605fad.asciidoc

Lines changed: 1 addition & 2 deletions
@@ -3,8 +3,7 @@
 
 [source, js]
 ----
-const response = await client.inference.inference({
-  task_type: "completion",
+const response = await client.inference.completion({
   inference_id: "openai_chat_completions",
   input: "What is Elastic?",
 });

docs/doc_examples/f1b24217b1d9ba6ea5e4fa6e6f412022.asciidoc

Lines changed: 1 addition & 2 deletions
@@ -3,8 +3,7 @@
 
 [source, js]
 ----
-const response = await client.inference.inference({
-  task_type: "rerank",
+const response = await client.inference.rerank({
   inference_id: "cohere_rerank",
   input: ["luke", "like", "leia", "chewy", "r2d2", "star", "wars"],
   query: "star wars main character",

docs/reference/api-reference.md

Lines changed: 0 additions & 37 deletions
@@ -7553,23 +7553,6 @@ client.inference.get({ ... })
 - **`task_type` (Optional, Enum("sparse_embedding" | "text_embedding" | "rerank" | "completion" | "chat_completion"))**: The task type
 - **`inference_id` (Optional, string)**: The inference Id
 
-## client.inference.postEisChatCompletion [_inference.post_eis_chat_completion]
-Perform a chat completion task through the Elastic Inference Service (EIS).
-
-Perform a chat completion inference task with the `elastic` service.
-
-[Endpoint documentation](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-post-eis-chat-completion)
-
-```ts
-client.inference.postEisChatCompletion({ eis_inference_id })
-```
-
-### Arguments [_arguments_inference.post_eis_chat_completion]
-
-#### Request (object) [_request_inference.post_eis_chat_completion]
-- **`eis_inference_id` (string)**: The unique identifier of the inference endpoint.
-- **`chat_completion_request` (Optional, { messages, model, max_completion_tokens, stop, temperature, tool_choice, tools, top_p })**
-
 ## client.inference.put [_inference.put]
 Create an inference endpoint.
 When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running.
@@ -7776,26 +7759,6 @@ These settings are specific to the `cohere` service.
 - **`task_settings` (Optional, { input_type, return_documents, top_n, truncate })**: Settings to configure the inference task.
 These settings are specific to the task type you specified.
 
-## client.inference.putEis [_inference.put_eis]
-Create an Elastic Inference Service (EIS) inference endpoint.
-
-Create an inference endpoint to perform an inference task through the Elastic Inference Service (EIS).
-
-[Endpoint documentation](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-eis)
-
-```ts
-client.inference.putEis({ task_type, eis_inference_id, service, service_settings })
-```
-
-### Arguments [_arguments_inference.put_eis]
-
-#### Request (object) [_request_inference.put_eis]
-- **`task_type` (Enum("chat_completion"))**: The type of the inference task that the model will perform.
-NOTE: The `chat_completion` task type only supports streaming and only through the _stream API.
-- **`eis_inference_id` (string)**: The unique identifier of the inference endpoint.
-- **`service` (Enum("elastic"))**: The type of service supported for the specified task type. In this case, `elastic`.
-- **`service_settings` ({ model_id, rate_limit })**: Settings used to install the inference model. These settings are specific to the `elastic` service.
-
 ## client.inference.putElasticsearch [_inference.put_elasticsearch]
 Create an Elasticsearch inference endpoint.

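The removed `putEis` documentation described endpoint creation for one specific service; the generic `client.inference.put`, which the reference keeps, covers endpoint creation in general. A rough sketch of that surviving method; the endpoint id, service choice, and settings below are illustrative assumptions, not taken from this commit:

```js
// Sketch only: endpoint id, service, and settings are placeholders.
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "my-e5-endpoint", // hypothetical endpoint id
  inference_config: {
    service: "elasticsearch",
    service_settings: {
      num_allocations: 1,
      num_threads: 1,
      model_id: ".multilingual-e5-small",
    },
  },
});
```

Note that, as in the `inference.update` diff above, the request body sits under an `inference_config` key in the JS client.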