Advanced Knowledge Search
The Advanced Knowledge Search Tool step lets you search knowledge tables with a higher degree of accuracy and precision.
Add the ‘Advanced Knowledge Search’ Tool step to your Tool
You can add the ‘Advanced Knowledge Search (2m context)’ Tool step to your Tool by:
- Create a new Tool, then search for the Advanced Knowledge Search (2m context) Tool step.
- Click ‘Expand’ to see the full Tool step.
- Enter your search query in the Query field.
- Select the search type (see the sketch after this list):
- Hybrid (recommended): Combines semantic and keyword matching for the most accurate and well-rounded results.
- Vector: Finds results based on meaning (semantic similarity). Great when you want answers even if the exact words don’t match.
- Keyword: Finds results based on exact word match. Useful when looking for specific terms or phrases.
- Select the Number of results to return.
- Select the Knowledge set to use.
- Select the fields to vectorize: choose which columns from your data should be used for retrieval. By default, all fields are included.
- Select the retrieval post-processing option to apply after your answer is generated:
- None: No extra processing. You’ll get the raw retrieved text as-is.
- Basic: Applies light formatting or cleaning to make the results easier to read.
- Summary: Automatically summarizes the retrieved content to make it shorter and more digestible.
- Markdown: Formats the retrieved text in Markdown (e.g., with bold, headings, lists) to improve readability or prep it for markdown-compatible interfaces.
- Entity Extraction: Pulls out key information from the retrieved content, such as names, places, and dates.
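The three search types differ in how they score matches. The following is a minimal Python sketch of the idea only, not the tool's actual implementation: `vector_score` compares embeddings, `keyword_score` counts term overlap, and `hybrid_score` blends the two with an assumed weighting parameter `alpha`.

```python
# Minimal sketch of the three search types; not the tool's actual implementation.
# Embeddings are assumed to come from whichever embedding model is configured.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def vector_score(query_vec, doc_vec):
    # Vector search: semantic similarity between query and document embeddings.
    return cosine(query_vec, doc_vec)

def keyword_score(query, doc):
    # Keyword search: fraction of query terms that literally appear in the document.
    query_terms = set(query.lower().split())
    doc_terms = set(doc.lower().split())
    return len(query_terms & doc_terms) / max(len(query_terms), 1)

def hybrid_score(query, doc, query_vec, doc_vec, alpha=0.5):
    # Hybrid search: a weighted blend of semantic and keyword relevance.
    return alpha * vector_score(query_vec, doc_vec) + (1 - alpha) * keyword_score(query, doc)
```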
Advanced settings:
System prompt
The system prompt is the very first message in a conversation and instructs the model on its role or the personality of the AI. Examples include expert researcher, analyst, or tutor.
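For illustration only, a system prompt typically sits as the first message in a chat-style request, ahead of any user messages; the exact message format used internally may differ.

```python
# Illustrative chat-style payload: the system prompt sets the model's role before any user turns.
messages = [
    {
        "role": "system",
        "content": "You are an expert researcher. Answer strictly from the retrieved knowledge.",
    },
    {"role": "user", "content": "What does our refund policy say about digital products?"},
]
```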
Temperature
The temperature setting controls how “creative” or “random” the model’s responses will be. Lower temperatures make the model more focused and deterministic, while higher temperatures make it more creative and unpredictable.
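Conceptually, temperature rescales the model's token probabilities before sampling. The sketch below is a standard softmax-with-temperature illustration, not the provider's exact sampling code: a low temperature sharpens the distribution, a high temperature flattens it.

```python
# Conceptual illustration of temperature: dividing logits by the temperature before
# applying softmax sharpens (low T) or flattens (high T) the probability distribution.
import math

def softmax_with_temperature(logits, temperature=1.0):
    scaled = [l / temperature for l in logits]
    exps = [math.exp(s - max(scaled)) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
print(softmax_with_temperature(logits, temperature=0.2))  # sharper: near-deterministic
print(softmax_with_temperature(logits, temperature=1.5))  # flatter: more varied output
```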
LLM model
The model that will be used to generate the response. By default, this model is command-r-plus.
Embedding Model
This is the model used to turn your text into a format the AI can understand and search through effectively. By default, this model is embed-english-v3.0; for multilingual content, you can switch to embed-multilingual-v3.0.
Reranker Model
Improves your search results by reordering them so the most relevant information appears first. This is off by default, but enabling it will boost performance in most cases.
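As a rough sketch of what a reranker does: it scores every (query, result) pair with a dedicated relevance model and reorders the results by that score. `rerank_score` below is a hypothetical stand-in for such a model.

```python
# Conceptual sketch: a reranker scores each (query, passage) pair with a dedicated
# relevance model and returns the passages sorted by that score.
# `rerank_score` is a hypothetical stand-in for a real reranker model.
def rerank(query, passages, rerank_score):
    scored = [(rerank_score(query, passage), passage) for passage in passages]
    return [passage for score, passage in sorted(scored, key=lambda pair: pair[0], reverse=True)]
```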
Chunk Size
Chunk size decides how much text is grouped together at once when the system processes your content. It’s measured in tokens, which are like small pieces of words.
Common values are 128, 256, 512, and 1024 tokens.
- Larger chunks mean more context and a better grasp of the bigger picture, which is great for complex content.
- Smaller chunks mean less context but more precise results, which is ideal for pinpointing specific information or short facts.
Chunking Strategy
Base
This is the standard chunking method. It simply splits your text into fixed-sized chunks without adding any extra context. It’s fast and works well for straightforward content.
Window
This strategy adds some overlap between chunks, meaning parts of the text are repeated across neighboring chunks. This helps preserve context between sections.
Contextual
This method intelligently adjusts how chunks are created based on the structure and meaning of the content. It uses more advanced logic to split text at natural breaks (like headings or sentences).
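A simplified sketch of how Base and Window chunking differ at a given chunk size is shown below. Tokens are approximated here by whitespace splitting, whereas the tool uses a real tokenizer, so counts will differ; Contextual chunking (splitting at natural breaks) is not shown.

```python
# Simplified sketch of Base vs Window chunking; tokens approximated by whitespace splitting.
def base_chunks(text, chunk_size=256):
    # Base: fixed-size chunks with no overlap.
    tokens = text.split()
    return [" ".join(tokens[i:i + chunk_size]) for i in range(0, len(tokens), chunk_size)]

def window_chunks(text, chunk_size=256, overlap=64):
    # Window: neighbouring chunks share `overlap` tokens to preserve context across sections.
    tokens = text.split()
    step = chunk_size - overlap
    return [" ".join(tokens[i:i + chunk_size]) for i in range(0, len(tokens), step)]
```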
Use Raw Files
Select this if you want to run retrieval directly on the source file.
Use Vision
Select this if you want to use vision models (OCR) to parse raw PDF files.
Update Vectors
This option lets you force an update of the stored representations of your data (called embeddings) using your current settings, such as chunk size, embedding model, or chunking strategy.
Generate Citations
Turn this on to include direct references from your knowledge set during retrieval. This makes responses more accurate and grounded.
Frequently asked questions (FAQs)
What is the difference between Advanced Knowledge Search and [Advanced Knowledge Search (2m context)](/tool/tool-steps/knowledge/advanced-knowledge-search-2m)?
When should I use Advanced Knowledge Search (2m) instead of the regular one?
Which one gives better quality results?