Changelog

September 8, 2025

Optimize LLM costs with split credit pricing

Split Credit Pricing: More Accurate LLM Costs Based on Your Usage

You can now see and pay for exactly what you use with our new split credit pricing for large language models!

We've introduced tiered pricing that reflects how some LLMs charge differently based on token usage. For example, Gemini 2.5 Pro costs more when a request includes more than 200K input tokens. Currently this applies to Gemini 2.5 Pro, and Claude Sonnet 4 with 1M context is coming soon with a similar pricing structure.
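The threshold-based pricing described above can be sketched as a small cost estimator. The rates below are hypothetical placeholders, not actual Relevance AI credit pricing; the real rates are shown in the model selector.

```python
# Illustrative sketch of split (tiered) credit pricing.
# The 200K input-token threshold matches the Gemini 2.5 Pro example;
# the per-token rates here are made up for illustration only.

THRESHOLD = 200_000  # input tokens above which the higher tier applies

# Hypothetical rates in credits per 1K tokens -- NOT real pricing.
LOW_TIER = {"input": 1.25, "output": 10.0}
HIGH_TIER = {"input": 2.50, "output": 15.0}

def estimate_credits(input_tokens: int, output_tokens: int) -> float:
    """Pick the rate tier from total input tokens, then price the call."""
    tier = HIGH_TIER if input_tokens > THRESHOLD else LOW_TIER
    return (input_tokens / 1000) * tier["input"] \
         + (output_tokens / 1000) * tier["output"]
```

With these placeholder rates, a call under the threshold (e.g. 100K input tokens) is billed entirely at the lower tier, while a call over 200K input tokens uses the higher tier.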

➡️ Enjoy optimized pricing for Gemini 2.5 Pro – Lower rates automatically applied when using <200K input tokens

➡️ See transparent pricing – Clear display of input/output token costs in the model selector

➡️ Understand your credit usage – Detailed breakdown of how credits are calculated at different usage levels

➡️ Make informed model choices – Compare actual costs based on your specific usage patterns

➡️ Prepare for upcoming models – Claude Sonnet 4 with 1M context will use similar split pricing soon

With Split Credit Pricing, you get more accurate billing that reflects your actual usage patterns, helping you optimize costs while still accessing the most powerful AI models.

To view the new pricing details, simply open the LLM model selector when creating or editing an agent.

Start making smarter model choices with transparent, usage-based pricing today!

Other improvements

Fixes

General fixes and UI improvements
