Commit 00dcda1

Auto-generated API code (#3090)
1 parent 1e1b0ca commit 00dcda1

2 files changed: +3 additions, -3 deletions

docs/reference/api-reference.md

Lines changed: 2 additions & 2 deletions
@@ -7621,12 +7621,12 @@ Supports a list of values, such as `open,hidden`.
  Perform chat completion inference.

  The chat completion inference API enables real-time responses for chat completion tasks by delivering answers incrementally, reducing response times during computation.
- It only works with the `chat_completion` task type for `openai` and `elastic` inference services.
+ It only works with the `chat_completion` task type.

  NOTE: The `chat_completion` task type is only available within the _stream API and only supports streaming.
  The Chat completion inference API and the Stream inference API differ in their response structure and capabilities.
  The Chat completion inference API provides more comprehensive customization options through more fields and function calling support.
- If you use the `openai`, `hugging_face` or the `elastic` service, use the Chat completion inference API.
+ To determine whether a given inference service supports this task type, please see the page for that service.

  [Endpoint documentation](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-unified-inference)
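The new wording drops the per-service list but keeps the claims about streaming and function calling. For orientation, here is a minimal TypeScript sketch of what a chat completion request body with an OpenAI-style tool definition might look like. The field names (`messages`, `tools`, `tool_choice`) follow the OpenAI chat schema this endpoint mirrors and are assumptions here; the linked endpoint documentation is authoritative.

```ts
// Illustrative request body for the streaming chat completion endpoint described
// in the NOTE above. Field names follow the OpenAI-style chat schema and are
// assumptions; confirm them against the linked endpoint documentation.
const chatCompletionRequest = {
  messages: [
    { role: 'system', content: 'You are a concise assistant.' },
    { role: 'user', content: 'What is the weather in Berlin?' }
  ],
  // Function calling: the model may answer with a tool call instead of plain text.
  tools: [
    {
      type: 'function',
      function: {
        name: 'get_current_weather', // hypothetical tool name
        description: 'Get the current weather for a city',
        parameters: {
          type: 'object',
          properties: { city: { type: 'string' } },
          required: ['city']
        }
      }
    }
  ],
  tool_choice: 'auto'
}
```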

src/api/api/inference.ts

Lines changed: 1 addition & 1 deletion
@@ -446,7 +446,7 @@ export default class Inference {
  }

  /**
-  * Perform chat completion inference. The chat completion inference API enables real-time responses for chat completion tasks by delivering answers incrementally, reducing response times during computation. It only works with the `chat_completion` task type for `openai` and `elastic` inference services. NOTE: The `chat_completion` task type is only available within the _stream API and only supports streaming. The Chat completion inference API and the Stream inference API differ in their response structure and capabilities. The Chat completion inference API provides more comprehensive customization options through more fields and function calling support. If you use the `openai`, `hugging_face` or the `elastic` service, use the Chat completion inference API.
+  * Perform chat completion inference. The chat completion inference API enables real-time responses for chat completion tasks by delivering answers incrementally, reducing response times during computation. It only works with the `chat_completion` task type. NOTE: The `chat_completion` task type is only available within the _stream API and only supports streaming. The Chat completion inference API and the Stream inference API differ in their response structure and capabilities. The Chat completion inference API provides more comprehensive customization options through more fields and function calling support. To determine whether a given inference service supports this task type, please see the page for that service.
   * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-unified-inference | Elasticsearch API documentation}
   */
  async chatCompletionUnified (this: That, params: T.InferenceChatCompletionUnifiedRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.InferenceChatCompletionUnifiedResponse>
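
For context, the following is a minimal sketch of calling the generated `chatCompletionUnified` method from the client. The endpoint id and the body field names (`chat_completion_request`, `messages`) are assumptions for illustration; the generated `T.InferenceChatCompletionUnifiedRequest` type is the source of truth for the request shape.

```ts
import { Client } from '@elastic/elasticsearch'

const client = new Client({ node: 'http://localhost:9200' })

async function run (): Promise<void> {
  // `my-chat-endpoint` is a hypothetical inference endpoint created with the
  // `chat_completion` task type on a service that supports it.
  const result = await client.inference.chatCompletionUnified({
    inference_id: 'my-chat-endpoint',
    // Body field names are assumptions; see T.InferenceChatCompletionUnifiedRequest.
    chat_completion_request: {
      messages: [
        { role: 'user', content: 'Summarize the plot of Hamlet in one sentence.' }
      ]
    }
  })
  // The `chat_completion` task type only supports streaming, so the result is a
  // stream of server-sent events rather than a single JSON document.
  console.log(result)
}

run().catch(console.error)
```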
