Handling large content
In an LLM component, under “Advanced options”, you will find a section called “How to handle too much content”. All variables included in the prompt are listed in this section, each with an Edit button in front of it.

- Summarize the content
- Select the most relevant content to the query/goal (vector search)

By clicking Edit, you will access the options.

Screenshot: the modal for handling too much content
Summarize
This option reads the provided content and produces a summary. Under the advanced settings, you can provide a prompt for better, more guided summarization; for example, you can state the objective or goal of the analysis. You can also specify which large language model to use for summarization.
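As a rough illustration of what such a guidance prompt does, the sketch below composes a summarization instruction from the content and the objective you provide. The `build_summary_prompt` helper and its wording are hypothetical and are not the product's actual implementation; the resulting prompt would be sent to whichever summarization model you selected.

```python
# Minimal sketch (assumed behavior, not the product's API): combine the content to be
# condensed with a guidance prompt describing the goal of the analysis.
def build_summary_prompt(content: str, guidance: str) -> str:
    """Compose the instruction a summarization model would receive."""
    return (
        "Summarize the following content.\n"
        f"Keep the summary focused on this objective: {guidance}\n\n"
        f"Content:\n{content}"
    )

prompt = build_summary_prompt(
    content="<long retrieved document>",
    guidance="Extract pricing terms and renewal dates relevant to the contract review.",
)
print(prompt)  # this text would then be passed to the selected summarization model
```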
Most relevant data
This option runs a semantic search through the data and extracts the parts most relevant to the provided query. The following parameters control the search; a conceptual sketch follows the list.
- Query: By default, the query is extracted from the original prompt. However, it is highly recommended to set a custom value for the search query. You can type in a query or use {{ }} to include a variable, for example {{ question }} or {{ search_query }}.
- Query type: You can select between vector and keyword search.
- Columns: By default, all columns from the row of data are included in the results. However, you can specify a subset of columns to exclude unnecessary data.
- Page_size: The number of matching entries to fetch as search results. It is set to 100 by default.
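The sketch below shows, conceptually, how query, columns, and page_size interact in a vector search over rows of data. It is an assumption-laden illustration, not the product's implementation: the `embed` function is a stand-in that produces fake fixed-size vectors, and the `vector_search` helper is hypothetical.

```python
# Illustrative only: rank rows by cosine similarity to the query, keep the top
# page_size matches, and optionally restrict the returned columns.
import hashlib
import numpy as np

def embed(text: str, dim: int = 8) -> np.ndarray:
    """Stand-in embedding: a deterministic pseudo-random unit vector per text."""
    seed = int(hashlib.md5(text.encode("utf-8")).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def vector_search(rows, query, columns=None, page_size=100):
    """Return the rows most similar to the query, limited to page_size entries."""
    q = embed(query)
    scored = []
    for row in rows:
        text = " ".join(str(v) for v in row.values())
        score = float(np.dot(embed(text), q))  # cosine similarity of unit vectors
        scored.append((score, row))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    top = [row for _, row in scored[:page_size]]
    if columns:  # keep only the requested subset of columns
        top = [{c: row[c] for c in columns if c in row} for row in top]
    return top

rows = [
    {"id": 1, "title": "Refund policy", "body": "Refunds are issued within 30 days."},
    {"id": 2, "title": "Shipping", "body": "Orders ship within 2 business days."},
]
print(vector_search(rows, "How do I get my money back?", columns=["title", "body"], page_size=1))
```

Restricting the columns and lowering the page size both reduce how much retrieved text is injected back into the prompt, which is the point of this feature.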