Inference
- class elasticsearch.client.InferenceClient
To use this client, access client.inference from an Elasticsearch client. For example:

```python
from elasticsearch import Elasticsearch

# Create the client instance
client = Elasticsearch(...)

# Use the inference client
client.inference.<method>(...)
```
- completion(*, inference_id, input=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, timeout=None, body=None)
Perform completion inference on the service.
Get responses for completion tasks. This API works only with the completion task type.
IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
This API requires the monitor_inference cluster privilege (the built-in inference_admin and inference_user roles grant this privilege).

https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-inference
- Parameters:
inference_id (str) – The inference ID.
input (str | Sequence[str] | None) – Inference input. Either a string or an array of strings.
task_settings (Any | None) – Task settings for the individual inference request. These settings are specific to the <task_type> you specified and override the task settings specified when initializing the service.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference request to complete.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
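A minimal sketch of a completion request; the endpoint ID, connection details, and input text below are placeholders rather than values from this reference:

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")  # placeholder connection details

# "my-completion-endpoint" is a hypothetical endpoint created earlier with the put API
resp = client.inference.completion(
    inference_id="my-completion-endpoint",
    input="Summarize the benefits of semantic search in one sentence.",
    timeout="30s",
)
print(resp)
```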
- delete(*, inference_id, task_type=None, dry_run=None, error_trace=None, filter_path=None, force=None, human=None, pretty=None)
Delete an inference endpoint.
This API requires the manage_inference cluster privilege (the built-in inference_admin role grants this privilege).

https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-delete
- Parameters:
inference_id (str) – The inference identifier.
task_type (str | Literal['chat_completion', 'completion', 'rerank', 'sparse_embedding', 'text_embedding'] | None) – The task type
dry_run (bool | None) – When true, checks the semantic_text fields and inference processors that reference the endpoint and returns them in a list, but does not delete the endpoint.
force (bool | None) – When true, the inference endpoint is forcefully deleted even if it is still being used by ingest processors or semantic text fields.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
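For example, a hypothetical cleanup flow might first run a dry run to see what still references the endpoint, then force the deletion (the endpoint ID is a placeholder):

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")  # placeholder connection details

# List the semantic_text fields and processors that still reference the endpoint
check = client.inference.delete(inference_id="my-endpoint", dry_run=True)
print(check)

# Delete the endpoint even if it is still referenced
client.inference.delete(inference_id="my-endpoint", force=True)
```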
- get(*, task_type=None, inference_id=None, error_trace=None, filter_path=None, human=None, pretty=None)
Get an inference endpoint.
This API requires the monitor_inference cluster privilege (the built-in inference_admin and inference_user roles grant this privilege).

https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-get
- Parameters:
- Return type:
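A short sketch of retrieving endpoint configurations; the endpoint ID below is a placeholder:

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")  # placeholder connection details

# Get the configuration of a single endpoint
resp = client.inference.get(inference_id="my-endpoint")
print(resp)

# Omit the arguments to list every inference endpoint in the cluster
all_endpoints = client.inference.get()
```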
- inference(*, inference_id, input=None, task_type=None, error_trace=None, filter_path=None, human=None, input_type=None, pretty=None, query=None, task_settings=None, timeout=None, body=None)
Perform inference on the service.
This API enables you to use machine learning models to perform specific tasks on data that you provide as an input. It returns a response with the results of the tasks. The inference endpoint you use can perform one specific task that has been defined when the endpoint was created with the create inference API.
For details about using this API with a service, such as Amazon Bedrock, Anthropic, or HuggingFace, refer to the service-specific documentation.
info The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-inference
- Parameters:
inference_id (str) – The unique identifier for the inference endpoint.
input (str | Sequence[str] | None) – The text on which you want to perform the inference task. It can be a single string or an array. > info > Inference endpoints for the completion task type currently only support a single string as input.
task_type (str | Literal['chat_completion', 'completion', 'rerank', 'sparse_embedding', 'text_embedding'] | None) – The type of inference task that the model performs.
input_type (str | None) – Specifies the input data type for the text embedding model. The input_type parameter only applies to Inference Endpoints with the text_embedding task type. Possible values include: * SEARCH * INGEST * CLASSIFICATION * CLUSTERING Not all services support all values. Unsupported values will trigger a validation exception. Accepted values depend on the configured inference service, refer to the relevant service-specific documentation for more info. > info > The input_type parameter specified on the root level of the request body will take precedence over the input_type parameter specified in task_settings.
query (str | None) – The query input, which is required only for the rerank task. It is not required for other tasks.
task_settings (Any | None) – Task settings for the individual inference request. These settings are specific to the task type you specified and override the task settings specified when initializing the service.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – The amount of time to wait for the inference request to complete.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
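A minimal sketch of calling the generic inference method against a hypothetical sparse_embedding endpoint (the endpoint ID and inputs are placeholders):

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")  # placeholder connection details

# The endpoint must already exist and must have been created with a matching task type
resp = client.inference.inference(
    inference_id="my-sparse-endpoint",
    task_type="sparse_embedding",
    input=["first document to embed", "second document to embed"],
)
print(resp)
```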
- put(*, inference_id, inference_config=None, body=None, task_type=None, error_trace=None, filter_path=None, human=None, pretty=None, timeout=None)
Create an inference endpoint.
IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
The following integrations are available through the inference API. You can find the available task types next to the integration name:
- AI21 (chat_completion, completion)
- AlibabaCloud AI Search (completion, rerank, sparse_embedding, text_embedding)
- Amazon Bedrock (completion, text_embedding)
- Amazon SageMaker (chat_completion, completion, rerank, sparse_embedding, text_embedding)
- Anthropic (completion)
- Azure AI Studio (completion, rerank, text_embedding)
- Azure OpenAI (chat_completion, completion, text_embedding)
- Cohere (completion, rerank, text_embedding)
- DeepSeek (chat_completion, completion)
- Elasticsearch (rerank, sparse_embedding, text_embedding - this service is for built-in models and models uploaded through Eland)
- ELSER (sparse_embedding)
- Google AI Studio (completion, text_embedding)
- Google Vertex AI (chat_completion, completion, rerank, text_embedding)
- Groq (chat_completion)
- Hugging Face (chat_completion, completion, rerank, text_embedding)
- JinaAI (rerank, text_embedding)
- Llama (chat_completion, completion, text_embedding)
- Mistral (chat_completion, completion, text_embedding)
- Nvidia (chat_completion, completion, text_embedding, rerank)
- OpenAI (chat_completion, completion, text_embedding)
- OpenShift AI (chat_completion, completion, rerank, text_embedding)
- VoyageAI (rerank, text_embedding)
- Watsonx inference integration (text_embedding)
https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put
- Parameters:
inference_id (str) – The inference ID.
task_type (str | Literal['chat_completion', 'completion', 'rerank', 'sparse_embedding', 'text_embedding'] | None) – The task type. Refer to the integration list in the API description for the available task types.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference endpoint to be created.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
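As a sketch, creating a hypothetical Cohere text_embedding endpoint might look like the following; the inference ID, API key, and model name are placeholders, and the service_settings keys shown are assumptions based on the Cohere integration rather than values defined in this reference:

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")  # placeholder connection details

resp = client.inference.put(
    inference_id="my-cohere-embeddings",
    task_type="text_embedding",
    inference_config={
        "service": "cohere",
        "service_settings": {
            "api_key": "<cohere-api-key>",      # placeholder secret
            "model_id": "embed-english-v3.0",   # assumed model name
        },
    },
)
print(resp)
```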
- put_ai21(*, task_type, ai21_inference_id, service=None, service_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, timeout=None, body=None)
Create an AI21 inference endpoint.
Create an inference endpoint to perform an inference task with the ai21 service.

https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-ai21
- Parameters:
task_type (str | Literal['chat_completion', 'completion']) – The type of the inference task that the model will perform.
ai21_inference_id (str) – The unique identifier of the inference endpoint.
service (str | Literal['ai21'] | None) – The type of service supported for the specified task type. In this case, ai21.
service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the ai21 service.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference endpoint to be created.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
- put_alibabacloud(*, task_type, alibabacloud_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, timeout=None, body=None)
Create an AlibabaCloud AI Search inference endpoint.
Create an inference endpoint to perform an inference task with the alibabacloud-ai-search service.

https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-alibabacloud
- Parameters:
task_type (str | Literal['completion', 'rerank', 'sparse_embedding', 'text_embedding']) – The type of the inference task that the model will perform.
alibabacloud_inference_id (str) – The unique identifier of the inference endpoint.
service (str | Literal['alibabacloud-ai-search'] | None) – The type of service supported for the specified task type. In this case, alibabacloud-ai-search.
service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the alibabacloud-ai-search service.
chunking_settings (Mapping[str, Any] | None) – The chunking configuration object. Applies only to the sparse_embedding or text_embedding task types. Not applicable to the rerank or completion task types.
task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. These settings are specific to the task type you specified.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference endpoint to be created.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
- put_amazonbedrock(*, task_type, amazonbedrock_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, timeout=None, body=None)
Create an Amazon Bedrock inference endpoint.
Create an inference endpoint to perform an inference task with the amazonbedrock service.

info You need to provide the access and secret keys only once, during the inference model creation. The get inference API does not retrieve your access or secret keys. After creating the inference model, you cannot change the associated key pairs. If you want to use a different access and secret key pair, delete the inference model and recreate it with the same name and the updated keys.
https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-amazonbedrock
- Parameters:
task_type (str | Literal['completion', 'text_embedding']) – The type of the inference task that the model will perform.
amazonbedrock_inference_id (str) – The unique identifier of the inference endpoint.
service (str | Literal['amazonbedrock'] | None) – The type of service supported for the specified task type. In this case, amazonbedrock.
service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the amazonbedrock service.
chunking_settings (Mapping[str, Any] | None) – The chunking configuration object. Applies only to the text_embedding task type. Not applicable to the completion task type.
task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. These settings are specific to the task type you specified.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference endpoint to be created.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
- put_amazonsagemaker(*, task_type, amazonsagemaker_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, timeout=None, body=None)
Create an Amazon SageMaker inference endpoint.
Create an inference endpoint to perform an inference task with the amazon_sagemaker service.

https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-amazonsagemaker
- Parameters:
task_type (str | Literal['chat_completion', 'completion', 'rerank', 'sparse_embedding', 'text_embedding']) – The type of the inference task that the model will perform.
amazonsagemaker_inference_id (str) – The unique identifier of the inference endpoint.
service (str | Literal['amazon_sagemaker'] | None) – The type of service supported for the specified task type. In this case, amazon_sagemaker.
service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the amazon_sagemaker service and service_settings.api you specified.
chunking_settings (Mapping[str, Any] | None) – The chunking configuration object. Applies only to the sparse_embedding or text_embedding task types. Not applicable to the rerank, completion, or chat_completion task types.
task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. These settings are specific to the task type and service_settings.api you specified.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference endpoint to be created.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
- put_anthropic(*, task_type, anthropic_inference_id, service=None, service_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, timeout=None, body=None)
Create an Anthropic inference endpoint.
Create an inference endpoint to perform an inference task with the anthropic service.

https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-anthropic
- Parameters:
task_type (str | Literal['completion']) – The task type. The only valid task type for the model to perform is completion.
anthropic_inference_id (str) – The unique identifier of the inference endpoint.
service (str | Literal['anthropic'] | None) – The type of service supported for the specified task type. In this case, anthropic.
service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the anthropic service.
task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. These settings are specific to the task type you specified.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference endpoint to be created.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
- put_azureaistudio(*, task_type, azureaistudio_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, timeout=None, body=None)
Create an Azure AI Studio inference endpoint.
Create an inference endpoint to perform an inference task with the azureaistudio service.

https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-azureaistudio
- Parameters:
task_type (str | Literal['completion', 'rerank', 'text_embedding']) – The type of the inference task that the model will perform.
azureaistudio_inference_id (str) – The unique identifier of the inference endpoint.
service (str | Literal['azureaistudio'] | None) – The type of service supported for the specified task type. In this case, azureaistudio.
service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the azureaistudio service.
chunking_settings (Mapping[str, Any] | None) – The chunking configuration object. Applies only to the text_embedding task type. Not applicable to the rerank or completion task types.
task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. These settings are specific to the task type you specified.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference endpoint to be created.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
- put_azureopenai(*, task_type, azureopenai_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, timeout=None, body=None)
Create an Azure OpenAI inference endpoint.
Create an inference endpoint to perform an inference task with the azureopenai service.
The lists of chat completion and embeddings models that you can choose from in your Azure OpenAI deployment can be found in the Azure models documentation.
https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-azureopenai
- Parameters:
task_type (str | Literal['chat_completion', 'completion', 'text_embedding']) – The type of the inference task that the model will perform. NOTE: The chat_completion task type only supports streaming and only through the _stream API.
azureopenai_inference_id (str) – The unique identifier of the inference endpoint.
service (str | Literal['azureopenai'] | None) – The type of service supported for the specified task type. In this case, azureopenai.
service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the azureopenai service.
chunking_settings (Mapping[str, Any] | None) – The chunking configuration object. Applies only to the text_embedding task type. Not applicable to the completion and chat_completion task types.
task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. These settings are specific to the task type you specified.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference endpoint to be created.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
- put_cohere(*, task_type, cohere_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, timeout=None, body=None)
Create a Cohere inference endpoint.
Create an inference endpoint to perform an inference task with the cohere service.

https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-cohere
- Parameters:
task_type (str | Literal['completion', 'rerank', 'text_embedding']) – The type of the inference task that the model will perform.
cohere_inference_id (str) – The unique identifier of the inference endpoint.
service (str | Literal['cohere'] | None) – The type of service supported for the specified task type. In this case, cohere.
service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the cohere service.
chunking_settings (Mapping[str, Any] | None) – The chunking configuration object. Applies only to the text_embedding task type. Not applicable to the rerank or completion task type.
task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. These settings are specific to the task type you specified.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference endpoint to be created.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
- put_contextualai(*, task_type, contextualai_inference_id, service=None, service_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, timeout=None, body=None)
Create a Contextual AI inference endpoint.
Create an inference endpoint to perform an inference task with the contextualai service.
To review the available rerank models, refer to https://docs.contextual.ai/api-reference/rerank/rerank#body-model.

https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-contextualai
- Parameters:
task_type (str | Literal['rerank']) – The type of the inference task that the model will perform.
contextualai_inference_id (str) – The unique identifier of the inference endpoint.
service (str | Literal['contextualai'] | None) – The type of service supported for the specified task type. In this case, contextualai.
service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the contextualai service.
task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. These settings are specific to the task type you specified.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference endpoint to be created.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
- put_custom(*, task_type, custom_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, body=None)
Create a custom inference endpoint.
The custom service gives more control over how to interact with external inference services that aren't explicitly supported through dedicated integrations. The custom service gives you the ability to define the headers, url, query parameters, request body, and secrets. The custom service supports the template replacement functionality, which enables you to define a template that can be replaced with the value associated with that key. Templates are portions of a string that start with ${ and end with }. The parameters secret_parameters and task_settings are checked for keys for template replacement. Template replacement is supported in the request, headers, url, and query_parameters. If the definition (key) is not found for a template, an error message is returned. In the case of an endpoint definition like the following:

```
PUT _inference/text_embedding/test-text-embedding
{
  "service": "custom",
  "service_settings": {
    "secret_parameters": {
      "api_key": "<some api key>"
    },
    "url": "...endpoints.huggingface.cloud/v1/embeddings",
    "headers": {
      "Authorization": "Bearer ${api_key}",
      "Content-Type": "application/json"
    },
    "request": "{\"input\": ${input}}",
    "response": {
      "json_parser": {
        "text_embeddings": "$.data[*].embedding[*]"
      }
    }
  }
}
```

To replace ${api_key}, the secret_parameters and task_settings are checked for a key named api_key.

info Templates should not be surrounded by quotes.

Pre-defined templates:
- ${input} refers to the array of input strings that comes from the input field of the subsequent inference requests.
- ${input_type} refers to the input type translation values.
- ${query} refers to the query field used specifically for reranking tasks.
- ${top_n} refers to the top_n field available when performing rerank requests.
- ${return_documents} refers to the return_documents field available when performing rerank requests.
https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-custom
- Parameters:
task_type (str | Literal['completion', 'rerank', 'sparse_embedding', 'text_embedding']) – The type of the inference task that the model will perform.
custom_inference_id (str) – The unique identifier of the inference endpoint.
service (str | Literal['custom'] | None) – The type of service supported for the specified task type. In this case, custom.
service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the custom service.
chunking_settings (Mapping[str, Any] | None) – The chunking configuration object. Applies only to the sparse_embedding or text_embedding task types. Not applicable to the rerank or completion task types.
task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. These settings are specific to the task type you specified.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
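The console request above can also be sent through this method; a sketch of the equivalent Python call (the URL and API key remain placeholders from the example above):

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")  # placeholder connection details

resp = client.inference.put_custom(
    task_type="text_embedding",
    custom_inference_id="test-text-embedding",
    service="custom",
    service_settings={
        "secret_parameters": {"api_key": "<some api key>"},
        "url": "...endpoints.huggingface.cloud/v1/embeddings",
        "headers": {
            "Authorization": "Bearer ${api_key}",
            "Content-Type": "application/json",
        },
        "request": '{"input": ${input}}',
        "response": {
            "json_parser": {"text_embeddings": "$.data[*].embedding[*]"}
        },
    },
)
```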
- put_deepseek(*, task_type, deepseek_inference_id, service=None, service_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, timeout=None, body=None)
Create a DeepSeek inference endpoint.
Create an inference endpoint to perform an inference task with the deepseek service.

https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-deepseek
- Parameters:
task_type (str | Literal['chat_completion', 'completion']) – The type of the inference task that the model will perform.
deepseek_inference_id (str) – The unique identifier of the inference endpoint.
service (str | Literal['deepseek'] | None) – The type of service supported for the specified task type. In this case, deepseek.
service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the deepseek service.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference endpoint to be created.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
- put_elasticsearch(*, task_type, elasticsearch_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, timeout=None, body=None)
Create an Elasticsearch inference endpoint.
Create an inference endpoint to perform an inference task with the elasticsearch service.

info Your Elasticsearch deployment contains preconfigured ELSER and E5 inference endpoints; you only need to create endpoints using the API if you want to customize the settings.

If you use the ELSER or the E5 model through the elasticsearch service, the API request will automatically download and deploy the model if it isn't downloaded yet.

info You might see a 502 bad gateway error in the response when using the Kibana Console. This error usually just reflects a timeout while the model downloads in the background. You can check the download progress in the Machine Learning UI. If using the Python client, you can set the timeout parameter to a higher value.

After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-elasticsearch
- Parameters:
task_type (str | Literal['rerank', 'sparse_embedding', 'text_embedding']) – The type of the inference task that the model will perform.
elasticsearch_inference_id (str) – The unique identifier of the inference endpoint. It must not match the model_id.
service (str | Literal['elasticsearch'] | None) – The type of service supported for the specified task type. In this case, elasticsearch.
service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the elasticsearch service.
chunking_settings (Mapping[str, Any] | None) – The chunking configuration object. Applies only to the sparse_embedding and text_embedding task types. Not applicable to the rerank task type.
task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. These settings are specific to the task type you specified.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference endpoint to be created.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
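A sketch of deploying the built-in E5 model through this method; the inference ID is a placeholder and the allocation values are illustrative, not defaults from this reference:

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")  # placeholder connection details

resp = client.inference.put_elasticsearch(
    task_type="text_embedding",
    elasticsearch_inference_id="my-e5-endpoint",
    service="elasticsearch",
    service_settings={
        "model_id": ".multilingual-e5-small",  # built-in E5 model
        "num_allocations": 1,                  # illustrative values
        "num_threads": 1,
    },
    timeout="5m",  # allow time for the model download described above
)
```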
- put_elser(*, task_type, elser_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, timeout=None, body=None)
Create an ELSER inference endpoint.
Create an inference endpoint to perform an inference task with the elser service. You can also deploy ELSER by using the Elasticsearch inference integration.

info Your Elasticsearch deployment contains a preconfigured ELSER inference endpoint; you only need to create the endpoint using the API if you want to customize the settings.
The API request will automatically download and deploy the ELSER model if it isn't already downloaded.
info You might see a 502 bad gateway error in the response when using the Kibana Console. This error usually just reflects a timeout, while the model downloads in the background. You can check the download progress in the Machine Learning UI. If using the Python client, you can set the timeout parameter to a higher value.
After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-elser
- Parameters:
task_type (str | Literal['sparse_embedding']) – The type of the inference task that the model will perform.
elser_inference_id (str) – The unique identifier of the inference endpoint.
service (str | Literal['elser'] | None) – The type of service supported for the specified task type. In this case, elser.
service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the elser service.
chunking_settings (Mapping[str, Any] | None) – The chunking configuration object. Note that for ELSER endpoints, the max_chunk_size may not exceed 300.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference endpoint to be created.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
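A sketch of creating an ELSER endpoint with explicit allocation settings; the inference ID and the allocation values are placeholders:

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")  # placeholder connection details

resp = client.inference.put_elser(
    task_type="sparse_embedding",
    elser_inference_id="my-elser-endpoint",
    service="elser",
    service_settings={
        "num_allocations": 1,  # illustrative values
        "num_threads": 1,
    },
    timeout="5m",  # the ELSER model may still be downloading, as noted above
)
```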
- put_googleaistudio(*, task_type, googleaistudio_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, timeout=None, body=None)
Create a Google AI Studio inference endpoint.
Create an inference endpoint to perform an inference task with the googleaistudio service.

https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-googleaistudio
- Parameters:
task_type (str | Literal['completion', 'text_embedding']) – The type of the inference task that the model will perform.
googleaistudio_inference_id (str) – The unique identifier of the inference endpoint.
service (str | Literal['googleaistudio'] | None) – The type of service supported for the specified task type. In this case, googleaistudio.
service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the googleaistudio service.
chunking_settings (Mapping[str, Any] | None) – The chunking configuration object. Applies only to the text_embedding task type. Not applicable to the completion task type.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference endpoint to be created.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
- put_googlevertexai(*, task_type, googlevertexai_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, timeout=None, body=None)
Create a Google Vertex AI inference endpoint.
Create an inference endpoint to perform an inference task with the googlevertexai service.

https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-googlevertexai
- Parameters:
task_type (str | Literal['chat_completion', 'completion', 'rerank', 'text_embedding']) – The type of the inference task that the model will perform.
googlevertexai_inference_id (str) – The unique identifier of the inference endpoint.
service (str | Literal['googlevertexai'] | None) – The type of service supported for the specified task type. In this case, googlevertexai.
service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the googlevertexai service.
chunking_settings (Mapping[str, Any] | None) – The chunking configuration object. Applies only to the text_embedding task type. Not applicable to the rerank, completion, or chat_completion task types.
task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. These settings are specific to the task type you specified.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference endpoint to be created.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
- put_groq(*, task_type, groq_inference_id, service=None, service_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, timeout=None, body=None)
Create a Groq inference endpoint.
Create an inference endpoint to perform an inference task with the groq service.

https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-groq
- Parameters:
task_type (str | Literal['chat_completion']) – The type of the inference task that the model will perform.
groq_inference_id (str) – The unique identifier of the inference endpoint.
service (str | Literal['groq'] | None) – The type of service supported for the specified task type. In this case, groq.
service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the groq service.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference endpoint to be created.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
- put_hugging_face(*, task_type, huggingface_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, timeout=None, body=None)
Create a Hugging Face inference endpoint.
Create an inference endpoint to perform an inference task with the hugging_face service. Supported tasks include: text_embedding, completion, and chat_completion.

To configure the endpoint, first visit the Hugging Face Inference Endpoints page and create a new endpoint. Select a model that supports the task you intend to use.

For Elastic's text_embedding task: The selected model must support the Sentence Embeddings task. On the new endpoint creation page, select the Sentence Embeddings task under the Advanced Configuration section. After the endpoint has initialized, copy the generated endpoint URL. Recommended models for the text_embedding task:
- all-MiniLM-L6-v2
- all-MiniLM-L12-v2
- all-mpnet-base-v2
- e5-base-v2
- e5-small-v2
- multilingual-e5-base
- multilingual-e5-small

For Elastic's chat_completion and completion tasks: The selected model must support the Text Generation task and expose the OpenAI API. Hugging Face supports both serverless and dedicated endpoints for Text Generation. When creating a dedicated endpoint, select the Text Generation task. After the endpoint is initialized (for dedicated) or ready (for serverless), ensure it supports the OpenAI API and includes the /v1/chat/completions part in the URL. Then, copy the full endpoint URL for use. Recommended models for the chat_completion and completion tasks:
- Mistral-7B-Instruct-v0.2
- QwQ-32B
- Phi-3-mini-128k-instruct

For Elastic's rerank task: The selected model must support the sentence-ranking task and expose the OpenAI API. Hugging Face supports only dedicated (not serverless) endpoints for rerank so far. After the endpoint is initialized, copy the full endpoint URL for use. Tested models for the rerank task:
- bge-reranker-base
- jina-reranker-v1-turbo-en-GGUF
https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-hugging-face
- Parameters:
task_type (str | Literal['chat_completion', 'completion', 'rerank', 'text_embedding']) – The type of the inference task that the model will perform.
huggingface_inference_id (str) – The unique identifier of the inference endpoint.
service (str | Literal['hugging_face'] | None) – The type of service supported for the specified task type. In this case, hugging_face.
service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the hugging_face service.
chunking_settings (Mapping[str, Any] | None) – The chunking configuration object. Applies only to the text_embedding task type. Not applicable to the rerank, completion, or chat_completion task types.
task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. These settings are specific to the task type you specified.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference endpoint to be created.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
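A sketch of registering a Hugging Face endpoint for text_embedding; the endpoint URL and access token are placeholders you obtain from the Hugging Face Inference Endpoints page described above:

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")  # placeholder connection details

resp = client.inference.put_hugging_face(
    task_type="text_embedding",
    huggingface_inference_id="my-hf-embeddings",
    service="hugging_face",
    service_settings={
        "api_key": "<huggingface-access-token>",  # placeholder secret
        "url": "<endpoint-url>",                  # URL copied from the Hugging Face endpoint
    },
)
```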
- put_jinaai(*, task_type, jinaai_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, timeout=None, body=None)
Create a JinaAI inference endpoint.
Create an inference endpoint to perform an inference task with the jinaai service.
To review the available rerank models, refer to https://jina.ai/reranker. To review the available text_embedding models, refer to https://jina.ai/embeddings/.

https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-jinaai
- Parameters:
task_type (str | Literal['rerank', 'text_embedding']) – The type of the inference task that the model will perform.
jinaai_inference_id (str) – The unique identifier of the inference endpoint.
service (str | Literal['jinaai'] | None) – The type of service supported for the specified task type. In this case, jinaai.
service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the jinaai service.
chunking_settings (Mapping[str, Any] | None) – The chunking configuration object. Applies only to the text_embedding task type. Not applicable to the rerank task type.
task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. These settings are specific to the task type you specified.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference endpoint to be created.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
- put_llama(*, task_type, llama_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, timeout=None, body=None)
Create a Llama inference endpoint.
Create an inference endpoint to perform an inference task with the llama service.

https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-llama
- Parameters:
task_type (str | Literal['chat_completion', 'completion', 'text_embedding']) – The type of the inference task that the model will perform.
llama_inference_id (str) – The unique identifier of the inference endpoint.
service (str | Literal['llama'] | None) – The type of service supported for the specified task type. In this case, llama.
service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the llama service.
chunking_settings (Mapping[str, Any] | None) – The chunking configuration object. Applies only to the text_embedding task type. Not applicable to the completion or chat_completion task types.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference endpoint to be created.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
- put_mistral(*, task_type, mistral_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, timeout=None, body=None)
Create a Mistral inference endpoint.
Create an inference endpoint to perform an inference task with the mistral service.

https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-mistral
- Parameters:
task_type (str | Literal['chat_completion', 'completion', 'text_embedding']) – The type of the inference task that the model will perform.
mistral_inference_id (str) – The unique identifier of the inference endpoint.
service (str | Literal['mistral'] | None) – The type of service supported for the specified task type. In this case, mistral.
service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the mistral service.
chunking_settings (Mapping[str, Any] | None) – The chunking configuration object. Applies only to the text_embedding task type. Not applicable to the completion or chat_completion task types.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference endpoint to be created.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
- put_nvidia(*, task_type, nvidia_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, timeout=None, body=None)
Create an Nvidia inference endpoint.
Create an inference endpoint to perform an inference task with the nvidia service.

https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-nvidia
- Parameters:
task_type (str | Literal['chat_completion', 'completion', 'rerank', 'text_embedding']) – The type of the inference task that the model will perform. NOTE: The chat_completion task type only supports streaming and only through the _stream API.
nvidia_inference_id (str) – The unique identifier of the inference endpoint.
service (str | Literal['nvidia'] | None) – The type of service supported for the specified task type. In this case, nvidia.
service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the nvidia service.
chunking_settings (Mapping[str, Any] | None) – The chunking configuration object. Applies only to the text_embedding task type. Not applicable to the rerank, completion, or chat_completion task types.
task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. Applies only to the text_embedding task type. Not applicable to the rerank, completion, or chat_completion task types. These settings are specific to the task type you specified.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference endpoint to be created.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
- put_openai(*, task_type, openai_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, timeout=None, body=None)
Create an OpenAI inference endpoint.
Create an inference endpoint to perform an inference task with the openai service or openai-compatible APIs.

https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-openai
- Parameters:
task_type (str | Literal['chat_completion', 'completion', 'text_embedding']) – The type of the inference task that the model will perform. NOTE: The chat_completion task type only supports streaming and only through the _stream API.
openai_inference_id (str) – The unique identifier of the inference endpoint.
service (str | Literal['openai'] | None) – The type of service supported for the specified task type. In this case, openai.
service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the openai service.
chunking_settings (Mapping[str, Any] | None) – The chunking configuration object. Applies only to the text_embedding task type. Not applicable to the completion or chat_completion task types.
task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. These settings are specific to the task type you specified.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference endpoint to be created.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
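A sketch of creating an OpenAI completion endpoint; the inference ID, API key, and model name are placeholders, and the service_settings keys are assumptions based on the OpenAI integration rather than values defined in this reference:

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")  # placeholder connection details

resp = client.inference.put_openai(
    task_type="completion",
    openai_inference_id="my-openai-completion",
    service="openai",
    service_settings={
        "api_key": "<openai-api-key>",  # placeholder secret
        "model_id": "gpt-4o-mini",      # assumed model name
    },
)
```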
- put_openshift_ai(*, task_type, openshiftai_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, timeout=None, body=None)
Create an OpenShift AI inference endpoint.
Create an inference endpoint to perform an inference task with the openshift_ai service.

https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-openshift-ai
- Parameters:
task_type (str | Literal['chat_completion', 'completion', 'rerank', 'text_embedding']) – The type of the inference task that the model will perform. NOTE: The chat_completion task type only supports streaming and only through the _stream API.
openshiftai_inference_id (str) – The unique identifier of the inference endpoint.
service (str | Literal['openshift_ai'] | None) – The type of service supported for the specified task type. In this case, openshift_ai.
service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the openshift_ai service.
chunking_settings (Mapping[str, Any] | None) – The chunking configuration object. Applies only to the text_embedding task type. Not applicable to the rerank, completion, or chat_completion task types.
task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. Applies only to the rerank task type. Not applicable to the text_embedding, completion, or chat_completion task types. These settings are specific to the task type you specified.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference endpoint to be created.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
- put_voyageai(*, task_type, voyageai_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, timeout=None, body=None)
Create a VoyageAI inference endpoint.
Create an inference endpoint to perform an inference task with the voyageai service.
Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-voyageai
- Parameters:
task_type (str | Literal['rerank', 'text_embedding']) – The type of the inference task that the model will perform.
voyageai_inference_id (str) – The unique identifier of the inference endpoint.
service (str | Literal['voyageai'] | None) – The type of service supported for the specified task type. In this case, voyageai.
service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the voyageai service.
chunking_settings (Mapping[str, Any] | None) – The chunking configuration object. Applies only to the text_embedding task type. Not applicable to the rerank task type.
task_settings (Mapping[str, Any] | None) – Settings to configure the inference task. These settings are specific to the task type you specified.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference endpoint to be created.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
- put_watsonx(*, task_type, watsonx_inference_id, service=None, service_settings=None, chunking_settings=None, error_trace=None, filter_path=None, human=None, pretty=None, timeout=None, body=None)
Create a Watsonx inference endpoint.
Create an inference endpoint to perform an inference task with the watsonxai service. You need an IBM Cloud Databases for Elasticsearch deployment to use the watsonxai inference service. You can provision one through the IBM catalog, the Cloud Databases CLI plug-in, the Cloud Databases API, or Terraform.

https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-watsonx
- Parameters:
task_type (str | Literal['chat_completion', 'completion', 'text_embedding']) – The type of the inference task that the model will perform.
watsonx_inference_id (str) – The unique identifier of the inference endpoint.
service (str | Literal['watsonxai'] | None) – The type of service supported for the specified task type. In this case, watsonxai.
service_settings (Mapping[str, Any] | None) – Settings used to install the inference model. These settings are specific to the watsonxai service.
chunking_settings (Mapping[str, Any] | None) – The chunking configuration object. Applies only to the text_embedding task type. Not applicable to the completion or chat_completion task types.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference endpoint to be created.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
- rerank(*, inference_id, input=None, query=None, error_trace=None, filter_path=None, human=None, pretty=None, return_documents=None, task_settings=None, timeout=None, top_n=None, body=None)
Perform reranking inference on the service.
https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-inference
- Parameters:
inference_id (str) – The unique identifier for the inference endpoint.
query (str | None) – Query input.
return_documents (bool | None) – Include the document text in the response.
task_settings (Any | None) – Task settings for the individual inference request. These settings are specific to the task type you specified and override the task settings specified when initializing the service.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – The amount of time to wait for the inference request to complete.
top_n (int | None) – Limit the response to the top N documents.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
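A minimal sketch of a rerank request against a hypothetical rerank endpoint (the endpoint ID, query, and documents are placeholders):

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")  # placeholder connection details

resp = client.inference.rerank(
    inference_id="my-rerank-endpoint",
    query="where can I find the best pizza?",
    input=[
        "The best pizza in town is at Luigi's.",
        "Our office is closed on public holidays.",
    ],
    return_documents=True,  # include the document text in the response
    top_n=1,                # return only the highest-ranked document
)
print(resp)
```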
- sparse_embedding(*, inference_id, input=None, error_trace=None, filter_path=None, human=None, pretty=None, task_settings=None, timeout=None, body=None)
Perform sparse embedding inference on the service.
https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-inference
- Parameters:
inference_id (str) – The inference ID.
input (str | Sequence[str] | None) – Inference input. Either a string or an array of strings.
task_settings (Any | None) – Task settings for the individual inference request. These settings are specific to the <task_type> you specified and override the task settings specified when initializing the service.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference request to complete.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
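A minimal sketch, assuming a sparse_embedding endpoint named "my-elser-endpoint" already exists:

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")  # placeholder connection details

resp = client.inference.sparse_embedding(
    inference_id="my-elser-endpoint",
    input="The quick brown fox jumps over the lazy dog",
)
print(resp)
```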
- text_embedding(*, inference_id, input=None, error_trace=None, filter_path=None, human=None, input_type=None, pretty=None, task_settings=None, timeout=None, body=None)
Perform text embedding inference on the service.
https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-inference
- Parameters:
inference_id (str) – The inference ID.
input (str | Sequence[str] | None) – Inference input. Either a string or an array of strings.
input_type (str | None) – The input data type for the text embedding model. Possible values include: * SEARCH * INGEST * CLASSIFICATION * CLUSTERING Not all services support all values. Unsupported values will trigger a validation exception. Accepted values depend on the configured inference service, refer to the relevant service-specific documentation for more info. > info > The input_type parameter specified on the root level of the request body will take precedence over the input_type parameter specified in task_settings.
task_settings (Any | None) – Task settings for the individual inference request. These settings are specific to the <task_type> you specified and override the task settings specified when initializing the service.
timeout (str | Literal[-1] | ~typing.Literal[0] | None) – Specifies the amount of time to wait for the inference request to complete.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
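A minimal sketch, assuming a text_embedding endpoint named "my-e5-endpoint" already exists; whether input_type is honoured depends on the configured service:

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")  # placeholder connection details

resp = client.inference.text_embedding(
    inference_id="my-e5-endpoint",
    input=["first passage to embed", "second passage to embed"],
    input_type="INGEST",  # optional; not all services support all values
)
print(resp)
```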
- update(*, inference_id, inference_config=None, body=None, task_type=None, error_trace=None, filter_path=None, human=None, pretty=None)
Update an inference endpoint.
Modify task_settings, secrets (within service_settings), or num_allocations for an inference endpoint, depending on the specific endpoint service and task_type.

IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-update
- Parameters:
inference_id (str) – The unique identifier of the inference endpoint.
task_type (str | Literal['chat_completion', 'completion', 'rerank', 'sparse_embedding', 'text_embedding'] | None) – The type of inference task that the model performs.
error_trace (bool | None)
human (bool | None)
pretty (bool | None)
- Return type:
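As a sketch, updating the allocations of a hypothetical elasticsearch-service endpoint might look like the following; which fields can actually be modified, and the exact body shape, depend on the service and task type, as noted above:

```python
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")  # placeholder connection details

resp = client.inference.update(
    inference_id="my-e5-endpoint",
    task_type="text_embedding",
    inference_config={
        "service_settings": {"num_allocations": 2},  # assumed update body shape
    },
)
print(resp)
```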