ChatCompletion
This plugin is currently in beta. While it is considered safe for use, be aware that its API may change in backward-incompatible ways in future releases, or the plugin may become unsupported.
Create a Retrieval Augmented Generation (RAG) pipeline.
```yaml
type: "io.kestra.plugin.langchain4j.rag.ChatCompletion"
```
Chat with your data using Retrieval Augmented Generation (RAG). This flow indexes documents and uses the RAG Chat task to interact with your data through natural-language prompts. The flow contrasts prompts to the LLM with and without RAG: the RAG-enabled chat retrieves embeddings stored in the KV Store and produces a response grounded in your data rather than hallucinating. WARNING: the KV embedding store is for quick prototyping only, as it stores the embedding vectors in Kestra's KV store and loads them all into memory.

```yaml
id: rag
namespace: company.team

tasks:
  - id: ingest
    type: io.kestra.plugin.langchain4j.rag.IngestDocument
    provider:
      type: io.kestra.plugin.langchain4j.provider.GoogleGemini
      modelName: gemini-embedding-exp-03-07
      apiKey: "{{ secret('GEMINI_API_KEY') }}"
    embeddings:
      type: io.kestra.plugin.langchain4j.embeddings.KestraKVStore
    drop: true
    fromExternalURLs:
      - https://raw.githubusercontent.com/kestra-io/docs/refs/heads/main/content/blogs/release-0-22.md

  - id: chat_without_rag
    type: io.kestra.plugin.langchain4j.ChatCompletion
    provider:
      type: io.kestra.plugin.langchain4j.provider.GoogleGemini
      modelName: gemini-2.0-flash
      apiKey: "{{ secret('GEMINI_API_KEY') }}"
    messages:
      - type: user
        content: Which features were released in Kestra 0.22?

  - id: chat_with_rag
    type: io.kestra.plugin.langchain4j.rag.ChatCompletion
    chatProvider:
      type: io.kestra.plugin.langchain4j.provider.GoogleGemini
      modelName: gemini-2.0-flash
      apiKey: "{{ secret('GEMINI_API_KEY') }}"
    embeddingProvider:
      type: io.kestra.plugin.langchain4j.provider.GoogleGemini
      modelName: gemini-embedding-exp-03-07
      apiKey: "{{ secret('GEMINI_API_KEY') }}"
    embeddings:
      type: io.kestra.plugin.langchain4j.embeddings.KestraKVStore
    prompt: Which features were released in Kestra 0.22?
```
Chat with your data using Retrieval Augmented Generation (RAG) and a WebSearch content retriever. The RAG-enabled chat retrieves content from a WebSearch client and produces a response grounded in data rather than hallucinating.

```yaml
id: rag
namespace: company.team

tasks:
  - id: chat_with_rag_and_websearch_content_retriever
    type: io.kestra.plugin.langchain4j.rag.ChatCompletion
    chatProvider:
      type: io.kestra.plugin.langchain4j.provider.GoogleGemini
      modelName: gemini-2.0-flash
      apiKey: "{{ secret('GEMINI_API_KEY') }}"
    contentRetrievers:
      - type: io.kestra.plugin.langchain4j.retriever.GoogleCustomWebSearch
        apiKey: "{{ secret('GOOGLE_SEARCH_API_KEY') }}"
        csi: "{{ secret('GOOGLE_SEARCH_CSI') }}"
    prompt: What is the latest release of Kestra?
```
Chat with your data using Retrieval Augmented Generation (RAG) and an additional WebSearch tool. This flow indexes documents and uses the RAG Chat task to interact with your data through natural-language prompts. The flow contrasts prompts to the LLM with and without RAG: the RAG-enabled chat retrieves embeddings stored in the KV Store and produces a response grounded in your data rather than hallucinating. It may also include results from a web search engine when the LLM uses the provided tool. WARNING: the KV embedding store is for quick prototyping only, as it stores the embedding vectors in Kestra's KV store and loads them all into memory.

```yaml
id: rag
namespace: company.team

tasks:
  - id: ingest
    type: io.kestra.plugin.langchain4j.rag.IngestDocument
    provider:
      type: io.kestra.plugin.langchain4j.provider.GoogleGemini
      modelName: gemini-embedding-exp-03-07
      apiKey: "{{ secret('GEMINI_API_KEY') }}"
    embeddings:
      type: io.kestra.plugin.langchain4j.embeddings.KestraKVStore
    drop: true
    fromExternalURLs:
      - https://raw.githubusercontent.com/kestra-io/docs/refs/heads/main/content/blogs/release-0-22.md

  - id: chat_with_rag_and_tool
    type: io.kestra.plugin.langchain4j.rag.ChatCompletion
    chatProvider:
      type: io.kestra.plugin.langchain4j.provider.GoogleGemini
      modelName: gemini-2.0-flash
      apiKey: "{{ secret('GEMINI_API_KEY') }}"
    embeddingProvider:
      type: io.kestra.plugin.langchain4j.provider.GoogleGemini
      modelName: gemini-embedding-exp-03-07
      apiKey: "{{ secret('GEMINI_API_KEY') }}"
    embeddings:
      type: io.kestra.plugin.langchain4j.embeddings.KestraKVStore
    tools:
      - type: io.kestra.plugin.langchain4j.tool.GoogleCustomWebSearch
        apiKey: "{{ secret('GOOGLE_SEARCH_API_KEY') }}"
        csi: "{{ secret('GOOGLE_SEARCH_CSI') }}"
    prompt: What is the latest release of Kestra?
```
- Chat Model Provider (Dynamic: NO)
- Content Retriever Configuration (Dynamic: NO, default: `{ "maxResults": 3, "minScore": 0 }`)
- Chat configuration (Dynamic: NO, default: `{}`)
- Additional content retrievers (Dynamic: YES) — Some content retrievers, such as WebSearch, can also be used as tools; however, a content retriever is always invoked, whereas a tool is only used when the LLM decides to call it.
- Embedding Store Model Provider (Dynamic: NO) — Optional; if not set, the embedding model will be created by the `chatModelProvider`. In that case, make sure the `chatModelProvider` supports embeddings.
- Embedding Store Provider (Dynamic: NO) — Optional if at least one of `contentRetrievers` is provided.
- Text prompt (Dynamic: YES) — The input prompt for the language model.
- Tools that the LLM may use to augment its response (Dynamic: YES)
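To illustrate the retriever-versus-tool distinction, the same Google Custom WebSearch client can be wired either way. The sketch below is assembled from the examples on this page; a content retriever is consulted on every prompt, while a tool is invoked only when the model chooses to call it. The second task omits `embeddingProvider`, so the embedding model falls back to the chat provider, per the property notes above.

```yaml
# Retriever variant: the web search runs on every prompt.
- id: always_searches
  type: io.kestra.plugin.langchain4j.rag.ChatCompletion
  chatProvider:
    type: io.kestra.plugin.langchain4j.provider.GoogleGemini
    modelName: gemini-2.0-flash
    apiKey: "{{ secret('GEMINI_API_KEY') }}"
  contentRetrievers:
    - type: io.kestra.plugin.langchain4j.retriever.GoogleCustomWebSearch
      apiKey: "{{ secret('GOOGLE_SEARCH_API_KEY') }}"
      csi: "{{ secret('GOOGLE_SEARCH_CSI') }}"
  prompt: What is the latest release of Kestra?

# Tool variant: the web search runs only if the LLM decides to call it.
- id: searches_on_demand
  type: io.kestra.plugin.langchain4j.rag.ChatCompletion
  chatProvider:
    type: io.kestra.plugin.langchain4j.provider.GoogleGemini
    modelName: gemini-2.0-flash
    apiKey: "{{ secret('GEMINI_API_KEY') }}"
  embeddings:
    type: io.kestra.plugin.langchain4j.embeddings.KestraKVStore
  tools:
    - type: io.kestra.plugin.langchain4j.tool.GoogleCustomWebSearch
      apiKey: "{{ secret('GOOGLE_SEARCH_API_KEY') }}"
      csi: "{{ secret('GOOGLE_SEARCH_CSI') }}"
  prompt: What is the latest release of Kestra?
```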
- Generated text completion — The result of the text completion.
- Finish reason — One of `STOP`, `LENGTH`, `TOOL_EXECUTION`, `CONTENT_FILTER`, `OTHER`.
- Token usage
- The maximum number of results from the embedding store (Dynamic: NO, default: `3`)
- The minimum score, ranging from 0 to 1 inclusive (Dynamic: NO, default: `0`) — Only embeddings with a score >= `minScore` will be returned.
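A sketch of tuning these retrieval settings on the task; the `contentRetrieverConfiguration` key name is an assumption derived from the "Content Retriever Configuration" property title, and the surrounding providers are abbreviated:

```yaml
- id: chat_with_tuned_retrieval
  type: io.kestra.plugin.langchain4j.rag.ChatCompletion
  # chatProvider, embeddingProvider, and embeddings as in the examples above
  contentRetrieverConfiguration:   # key name assumed from the property title
    maxResults: 5                  # default is 3
    minScore: 0.5                  # default is 0; embeddings scoring below 0.5 are discarded
  prompt: Which features were released in Kestra 0.22?
```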
- Endpoint URL (Dynamic: YES)
- Project location (Dynamic: YES)
- Model name (Dynamic: YES)
- Project ID (Dynamic: YES)
- API endpoint (Dynamic: YES) — The Azure OpenAI endpoint, in the format `https://{resource}.openai.azure.com/`.
- Model name (Dynamic: YES)
- API Key (Dynamic: YES)
- Client ID (Dynamic: YES)
- Client secret (Dynamic: YES)
- API version (Dynamic: YES)
- Tenant ID (Dynamic: YES)
- API Key (Dynamic: YES)
- Model name (Dynamic: YES)
- API base URL (Dynamic: YES, default: `https://api.deepseek.com/v1`)
- API Key (Dynamic: YES)
- Maximum number of results to return (Dynamic: NO, default: `3`)
- List of HTTP Elasticsearch servers (Dynamic: YES) — Must be a URI like `https://elasticsearch.com:9200`, with scheme and port.
- Basic auth configuration (Dynamic: NO)
- List of HTTP headers to be sent on every request (Dynamic: YES) — Must be a string with the key and value separated by `:`, e.g. `Authorization: Token XYZ`.
- Sets the path's prefix for every request used by the HTTP client (Dynamic: YES) — For example, if this is set to `/my/path`, then any client request becomes `/my/path/` + endpoint. In essence, every request's endpoint is prefixed by this `pathPrefix`. The path prefix is useful when Elasticsearch is behind a proxy that provides a base path or that requires all paths to start with `/`; it is not intended for other purposes and should not be supplied in other scenarios.
- Whether the REST client should return any response containing at least one warning header as a failure (Dynamic: NO)
- Trust all SSL CA certificates (Dynamic: NO) — Use this if the server is using a self-signed SSL certificate.
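The Elasticsearch connection settings above might be combined as follows. This is a hedged sketch only: the `type` value and the `connection`, `hosts`, `pathPrefix`, and `trustAllSsl` key names are assumptions inferred from the property descriptions, not confirmed API:

```yaml
embeddings:
  type: io.kestra.plugin.langchain4j.embeddings.Elasticsearch   # type name assumed
  connection:                       # all key names below are assumptions
    hosts:
      - https://elasticsearch.example.com:9200   # scheme and port are mandatory
    headers:
      - "Authorization: Token XYZ"  # key and value separated by ':'
    pathPrefix: /my/path            # only when Elasticsearch sits behind a base-path proxy
    trustAllSsl: true               # only for self-signed certificates
```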
- API Key (Dynamic: YES)
- Model name (Dynamic: YES)
- API Key (Dynamic: YES)
- Model name (Dynamic: YES)
- API base URL (Dynamic: YES)
- API Key (Dynamic: YES)
- Model endpoint (Dynamic: YES)
- Model name (Dynamic: YES)
- Basic auth password (Dynamic: YES)
- Basic auth username (Dynamic: YES)
- seed (Dynamic: NO)
- Temperature (Dynamic: NO)
- topK (Dynamic: NO)
- topP (Dynamic: NO)
- The name of the K/V entry to use (Dynamic: YES, default: `{{flow.id}}-embedding-store`)
- API Key (Dynamic: YES)
- Model name (Dynamic: YES)
- AWS Access Key ID (Dynamic: YES)
- Model name (Dynamic: YES)
- AWS Secret Access Key (Dynamic: YES)
- Amazon Bedrock Embedding Model Type (Dynamic: YES, default: `COHERE`; possible values: `COHERE`, `TITAN`)
- The database name (Dynamic: YES)
- The database server host (Dynamic: YES)
- The database password (Dynamic: YES)
- The database server port (Dynamic: NO)
- The table to store embeddings in (Dynamic: YES)
- The database user (Dynamic: YES)
- Whether to use an IVFFlat index (Dynamic: NO, default: `false`) — An IVFFlat index divides vectors into lists, then searches a subset of those lists closest to the query vector. It has faster build times and uses less memory than HNSW, but has lower query performance (in terms of the speed-recall tradeoff).
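The PGVector properties above might be assembled like this. Treat it as a hedged sketch: the `type` value and the flat key names (`host`, `port`, `database`, `user`, `password`, `table`, `useIVFFlatIndex`) are assumptions based on the property descriptions, not confirmed schema:

```yaml
embeddings:
  type: io.kestra.plugin.langchain4j.embeddings.PGVector   # type name assumed
  host: localhost               # the database server host
  port: 5432                    # the database server port
  database: kestra              # the database name
  user: postgres                # the database user
  password: "{{ secret('PG_PASSWORD') }}"
  table: embeddings             # the table to store embeddings in
  useIVFFlatIndex: true         # key name assumed; faster builds and less memory than HNSW,
                                # at the cost of query-time recall
```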
- API Key (Dynamic: YES)
- Model name (Dynamic: YES)
- API base URL (Dynamic: YES)
- API Key (Dynamic: YES)
- API Key (Dynamic: YES)
- Maximum number of results to return (Dynamic: NO, default: `3`)
- The MCP client command, as a list of command parts (Dynamic: YES)
- Environment variables (Dynamic: YES)
- API Key (Dynamic: YES)
- API Key (Dynamic: YES)
- SSE URL to the MCP server (Dynamic: YES)
- Connection timeout (Dynamic: YES, format: `duration`)
- The name of the index to store embeddings (Dynamic: YES)