Changelog

June 5, 2025

Control AI reasoning in the LLM tool step with Configurable Thinking

Configurable Thinking for LLM Tool Step: Enhance Complex Problem Solving

You can now optimize how your AI models approach complex problems with configurable thinking settings in the LLM tool step!

The new advanced thinking configuration option gives you precise control over how thoroughly models process your prompts before responding. This is especially valuable for complex reasoning tasks or any situation where you need more deliberate analysis.

➡️ Choose thinking levels for OpenAI o-series models – Select "low," "medium," or "high" reasoning intensity (default is "medium")

➡️ Set thinking budgets for Claude 3.7+ models – Specify the maximum number of thinking tokens for Claude 3.7 Sonnet, Sonnet 4, and Opus 4

➡️ Configure thinking for Gemini 2.5 Flash – Enable and control thinking by setting a maximum token budget (see the sketch below for how each option maps to its provider)
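
These options correspond to the reasoning controls each model provider exposes. As an illustration only, the sketch below uses the public OpenAI, Anthropic, and Google Gen AI Python SDKs to show roughly which knob each setting turns; it is not Relevance AI's internal implementation, and in the platform you simply pick the value in the LLM tool step UI.

```python
# Illustrative sketch only: these calls use the public provider SDKs
# (openai, anthropic, google-genai), not Relevance AI internals.

from openai import OpenAI
from anthropic import Anthropic
from google import genai
from google.genai import types

prompt = "Plan a rollout strategy for a new feature."

# OpenAI o-series: choose a reasoning effort level ("low", "medium", "high").
openai_resp = OpenAI().chat.completions.create(
    model="o3-mini",
    reasoning_effort="medium",          # the "thinking level" setting
    messages=[{"role": "user", "content": prompt}],
)

# Claude 3.7+ (Sonnet, Sonnet 4, Opus 4): cap thinking with a token budget.
claude_resp = Anthropic().messages.create(
    model="claude-3-7-sonnet-latest",
    max_tokens=8192,                    # must exceed the thinking budget
    thinking={"type": "enabled", "budget_tokens": 4096},
    messages=[{"role": "user", "content": prompt}],
)

# Gemini 2.5 Flash: enable thinking and set a maximum token budget.
gemini_resp = genai.Client().models.generate_content(
    model="gemini-2.5-flash",
    contents=prompt,
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(thinking_budget=1024)
    ),
)
```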

Note: Mismatched settings are ignored. If you select a thinking option that doesn't match your model (e.g., OpenAI reasoning for Gemini), no thinking will occur.

Also, thinking tokens are billed as output tokens, so factor this into your thinking budgets when managing cost.
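
For budgeting, a quick back-of-the-envelope check helps. The per-token rate below is a placeholder for illustration, not actual provider or Relevance AI pricing:

```python
# Hypothetical worst-case cost check; the rate is a placeholder, not real pricing.
output_rate_per_1k = 0.015      # placeholder: $ per 1,000 output tokens
visible_output_tokens = 800     # tokens in the model's final answer
thinking_budget_tokens = 4096   # maximum thinking tokens you allow

# Thinking tokens bill at the output rate, so the worst case for this step is:
max_cost = (visible_output_tokens + thinking_budget_tokens) / 1000 * output_rate_per_1k
print(f"Worst-case cost for this step: ${max_cost:.4f}")
```

The model may use fewer thinking tokens than the budget allows, so this is an upper bound rather than an exact charge.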

With configurable thinking, you can fine-tune how your AI models approach different types of problems, balancing thoroughness against speed based on your specific needs.

To access this feature, go to any LLM tool step in your agent workflow, and look for the “Thinking / Reasoning Configuration” settings.

Start optimizing your AI's reasoning capabilities today!

Other improvements

General fixes and UI improvements.

