Advanced Settings LLM Step


Each step in your tool can be fine-tuned using advanced settings. To access these, click on the gear icon associated with any step in your tool's build interface.

To see how this works, let’s break down the most important settings for the LLM Step:

Key settings:

Set a fallback model: This serves as a backup in case the primary model fails, ensuring your tool's reliability.
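
Conceptually, a fallback works like a try/retry around the model call. The sketch below is illustrative only (it is not the Tool Builder's internal code), and `call_llm` is a hypothetical placeholder for whatever LLM client your models use:

```python
def call_llm(model: str, prompt: str) -> str:
    # Hypothetical placeholder: swap in your real LLM client call here.
    raise NotImplementedError("replace with a real LLM client call")

def run_with_fallback(prompt: str, primary: str, fallback: str) -> str:
    """Try the primary model first; if it errors out, retry once on the fallback."""
    try:
        return call_llm(primary, prompt)
    except Exception:
        return call_llm(fallback, prompt)
```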

System prompt: While the main prompt in the LLM step defines the specific task or query, the system prompt sets the overall context and behavior of the model. Use the system prompt to define the AI's role, set constraints, or provide high-level instructions that apply to all interactions within this step.
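
One common way to picture the split (assuming a chat-completions-style message format, which the Tool Builder manages for you behind its UI) is that the system prompt becomes the `system` message and the step's main prompt becomes the `user` message:

```python
# Illustrative only: the system message carries the role and constraints,
# while the user message carries the step's actual task.
messages = [
    {
        "role": "system",
        "content": (
            "You are a support-ticket classifier. Respond only with JSON "
            "containing 'category' and 'urgency'. Never add commentary."
        ),
    },
    {
        "role": "user",
        "content": "Classify this ticket: 'My invoice total is wrong for March.'",
    },
]
```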

LLM Validators: Validators are powerful tools to ensure the output from your LLM steps meets specific criteria.

Here are the key validator options:

  • Is Valid JSON:
    • This validator checks if the LLM's output is properly formatted JSON. This is useful when you expect structured data from the LLM and want to ensure it can be parsed correctly.
  • Matches JSONSchema:
    • This goes a step further than the valid JSON check by verifying that the JSON structure matches a predefined schema. It’s ideal for ensuring the LLM output contains all required fields and data types.
  • Matches a Regex:
    • This allows you to input a regular expression (regex) pattern. The validator checks if the LLM output matches this pattern. It’s useful for ensuring specific formats (e.g., email addresses, phone numbers) or for identifying particular phrases or structures in the output.

Using these validators can significantly improve the reliability and consistency of your AI tools. They act as a quality control mechanism, ensuring that the output from each step meets your specified criteria before proceeding to the next step or returning results.
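To make the three checks concrete, here is a minimal Python sketch of what each validator verifies. This is a conceptual equivalent, not the Tool Builder's implementation; it assumes the third-party `jsonschema` package for the schema check:

```python
import json
import re

from jsonschema import ValidationError, validate  # pip install jsonschema


def is_valid_json(output: str) -> bool:
    """Is Valid JSON: the output must parse as JSON."""
    try:
        json.loads(output)
        return True
    except json.JSONDecodeError:
        return False


def matches_json_schema(output: str, schema: dict) -> bool:
    """Matches JSONSchema: the parsed output must conform to a predefined schema."""
    try:
        validate(instance=json.loads(output), schema=schema)
        return True
    except (json.JSONDecodeError, ValidationError):
        return False


def matches_regex(output: str, pattern: str) -> bool:
    """Matches a Regex: the output must contain a match for the given pattern."""
    return re.search(pattern, output) is not None


# Example: require an object with an "email" string field that looks like an address.
schema = {
    "type": "object",
    "properties": {"email": {"type": "string"}},
    "required": ["email"],
}
output = '{"email": "jane@example.com"}'
assert is_valid_json(output)
assert matches_json_schema(output, schema)
assert matches_regex(output, r"[\w.+-]+@[\w-]+\.[\w.]+")
```

If a validator fails, the step's output is rejected rather than passed along, which is what makes these checks an effective quality gate between steps.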
