LLM Tool step
Use an LLM as a step in your Tools
The LLM Tool Step is a versatile Tool step in Relevance AI that allows you to inject the power of large language models into your tools. It supports complex reasoning, classification, summarization, and text generation tasks — tailored to your use case.
Add the LLM Tool step to your Tool
You can add the LLM Tool step to your Tool by:
- Creating a new Tool, then clicking ‘LLM’ under Steps
- Clicking ‘Expand’ to see the full Tool step
- Adding your Prompt to the Prompt field
- Choosing the model you want to use
- Clicking the Settings icon if you want to access Advanced Settings
Advanced Settings
If you’re a pro at using LLMs, you can also access these Advanced Settings.
Fallback Model
You can select a Fallback Model to be used if the model you’ve selected fails for any reason. Enabling this helps make sure your Tool keeps working!
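The fallback behaviour amounts to a try-then-retry pattern. This is a minimal sketch with a stand-in `call_model` function and placeholder model names — it is not Relevance AI’s actual implementation:

```python
def call_with_fallback(prompt, call_model, primary, fallback):
    """Try the primary model first; on any failure, retry with the fallback."""
    try:
        return call_model(primary, prompt)
    except Exception:
        return call_model(fallback, prompt)

# Stand-in model caller: the primary model always fails here,
# so the fallback model answers instead.
def fake_call(model, prompt):
    if model == "primary-model":
        raise RuntimeError("model unavailable")
    return f"{model}: ok"

print(call_with_fallback("hi", fake_call, "primary-model", "fallback-model"))
```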
System prompt
A system prompt is composed of notes, instructions, and guidelines, typically directing the AI to assume a certain role, follow a specific format, or respect certain limitations.
An example of a System prompt would be: You are an expert on the solar system. Answer the following questions in a concise and informative manner.
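Under the hood, chat-style LLM APIs typically receive the system prompt as the first message in a list of role-tagged messages. A generic sketch of that structure (the message format here is illustrative, not Relevance AI’s internal format):

```python
system_prompt = (
    "You are an expert on the solar system. "
    "Answer the following questions in a concise and informative manner."
)

# Chat-style APIs generally take a list of role-tagged messages,
# with the system prompt placed first.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "How many moons does Mars have?"},
]
```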
Temperature
Temperature is a hyperparameter, ranging from 0 to 1, that controls the randomness (sometimes referred to as creativity) of the LLM’s response.
Higher temperature values produce more random, creative, and diverse output.
However, responses may also lose the right context.
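Temperature works by rescaling the model’s token probabilities before sampling. This toy sketch shows the effect on a made-up score distribution (the numbers are illustrative, not from any real model):

```python
import math

def apply_temperature(logits, temperature):
    """Rescale logits by temperature, then softmax into probabilities."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # made-up scores for three candidate tokens

low = apply_temperature(logits, 0.2)   # near-deterministic: top token dominates
high = apply_temperature(logits, 0.9)  # flatter: more randomness/diversity
print(low[0] > high[0])  # True: low temperature concentrates probability
```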
Conversation history
Click on + Add row to add lines of conversation taking place between a “user” and “ai”.
Conversation history is useful in conversational agents and helps the AI know more about the situation.
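Each row you add maps to one turn in an alternating list of messages that is sent alongside the new prompt. A generic sketch (the role labels follow the “user”/“ai” naming above and are illustrative):

```python
# Two rows of prior conversation, as added via "+ Add row".
conversation_history = [
    {"role": "user", "content": "What's the largest planet?"},
    {"role": "ai", "content": "Jupiter is the largest planet in the solar system."},
]

# The new prompt is appended after the history, so the model
# can use the earlier turns as context (e.g. to resolve "it").
conversation_history.append(
    {"role": "user", "content": "How many moons does it have?"}
)
```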
Thinking / Reasoning Configuration
For OpenAI, Claude and Gemini models, you can select a thinking configuration. Enabling this will make the model ‘think’ more before answering, which is useful for complex problems.
Reasoning / thinking tokens are charged as output tokens and will be added to the cost of your LLM Tool step.
Force response format
For certain model providers, such as OpenAI, the response format (e.g. valid JSON) can be forced using this field.
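Forcing the format is useful when a later step needs to parse the output programmatically. This sketch checks a hypothetical model reply with Python’s `json` module — the reply text is made up, and this is not Relevance AI’s validation code:

```python
import json

def parse_forced_json(reply):
    """Parse a model reply that was produced with a forced JSON response format."""
    return json.loads(reply)

# A hypothetical reply from a model whose response format was forced to JSON.
reply = '{"answer": "Jupiter", "confidence": "high"}'
data = parse_forced_json(reply)
print(data["answer"])  # Jupiter
```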
Common errors
Prompt is too long. Please reduce prompt in length.
This error message indicates that the provided prompt includes more tokens than the chosen model allows. To resolve the issue, you can use a model that supports a higher number of tokens.
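A rough way to anticipate this error is the common ~4-characters-per-token heuristic for English text. A sketch, where the token limit is a placeholder and not a Relevance AI value:

```python
def estimate_tokens(text):
    """Very rough token estimate: ~4 characters per token for English text."""
    return len(text) // 4

def fits_in_context(prompt, max_tokens=8192):  # placeholder limit, varies by model
    return estimate_tokens(prompt) <= max_tokens

print(fits_in_context("short prompt"))   # True
print(fits_in_context("word " * 40000))  # False: roughly 50,000 tokens
```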
Too large data
If your input data is too large, you need to upload it as a dataset and use it as knowledge in your Tool; Relevance then handles the large input by selecting the most relevant entries. The maximum size for non-knowledge data is 131,072 tokens (~90kb).
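Using the same rough characters-to-tokens estimate as above, you can check whether data is small enough to pass inline or should be uploaded as knowledge. A sketch, assuming ~4 characters per token:

```python
MAX_NON_KNOWLEDGE_TOKENS = 131_072  # limit quoted in the docs

def needs_knowledge_upload(data_text):
    """Return True if the data is too large to pass inline and should be
    uploaded as a dataset and used as knowledge instead."""
    estimated_tokens = len(data_text) // 4  # rough heuristic, not exact
    return estimated_tokens > MAX_NON_KNOWLEDGE_TOKENS

print(needs_knowledge_upload("a small snippet"))  # False
print(needs_knowledge_upload("x" * 1_000_000))    # True: roughly 250,000 tokens
```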
Rate limit
This error happens when the API key in use is set to a different rate limit than what Relevance uses by default. Trying again with pauses of increasing length helps with this issue.
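“Trying again with pauses” is usually implemented as exponential backoff: double the wait after each failed attempt. A generic sketch with a stand-in request function, not Relevance AI’s retry logic:

```python
import time

def call_with_backoff(request, max_retries=4, base_delay=1.0):
    """Retry a rate-limited call, doubling the pause after each failure."""
    for attempt in range(max_retries):
        try:
            return request()
        except RuntimeError:  # stand-in for a rate-limit error
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Stand-in request that is rate-limited twice, then succeeds.
attempts = {"count": 0}
def flaky_request():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise RuntimeError("rate limit exceeded")
    return "ok"

print(call_with_backoff(flaky_request, base_delay=0.01))  # ok
```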
Temperature errors
There is a temperature parameter under the LLM Advanced Settings. This error occurs if the entered value is outside the accepted (0,1) range.
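The accepted range can be enforced with a simple validation check before the model is called. A sketch, assuming the (0,1) range stated above (the error wording is illustrative):

```python
def validate_temperature(value):
    """Raise if the temperature is outside the accepted (0, 1) range."""
    if not (0 < value < 1):
        raise ValueError(
            f"temperature {value} is outside the accepted (0, 1) range"
        )
    return value

print(validate_temperature(0.7))  # 0.7: accepted
# validate_temperature(1.5)      # would raise ValueError
```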