Common causes of agent failures
1. Insufficient credits or actions
If your agent is failing, the most common cause is running out of credits or actions.
Understanding credits and actions
Relevance AI uses two types of consumption:
- Actions: Each time a tool runs, it counts as one action, whether it’s a simple task like sending one email or a complex workflow with many steps
- Vendor Credits: The cost of running the AI model (LLM costs) and the tools you use
If you’re on the new billing model, you’ll see both Actions and Vendor Credits. If you’re on the old billing model, you’ll only see credits.
How to check your usage
Learn how to monitor your credit and action usage at both the organization level and the individual agent level.
Solutions for insufficient credits/actions
- Upgrade your plan: Consider upgrading to a higher plan with more credits/actions
- Purchase additional credits: If you have a paid plan, you can purchase extra credits to use before your next renewal
- Bring your own API keys: Use your own API keys to bypass Vendor Credits entirely (available on paid plans only)
2. Tool failures
When agents fail, it’s often because one of their tools is not working properly. Here’s how to troubleshoot tool issues:
Step 1: Test the tool independently
1. Go to the Tools page
2. Find the tool your agent is using
3. Click on the tool and go to the “Use” tab
4. Run the tool with test inputs to see if it works on its own
5. Go to the “Build” tab of the tool
6. Run each tool step individually by clicking the play icon next to each step
7. Identify which specific step is failing
8. Check the step configuration and fix any issues:
   - Verify API keys are correct
   - Check input formats and data types
   - Review step settings and parameters
   - Remove or reconfigure problematic steps
Step 2: Check agent-to-tool communication
If the tool works independently but fails when used by the agent:
- Verify input data types: Ensure the agent is sending the tool the correct data types (string, array, number, etc.)
- Check input format: Make sure the agent is providing inputs in the expected format
- Review tool input descriptions: Ensure each tool input has a clear description explaining what the agent should provide
- Review tool configuration: Go to your agent’s tools section and check:
- Input configuration mode (Let agent decide, Set manually, or Tool output)
- Whether the agent has the right context to use the tool
- If approval settings are preventing tool execution
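One way to audit the “tool input descriptions” point above is to represent each tool input as a small schema and flag any input whose description is missing or empty. The schema shape and field names here are illustrative for this sketch, not Relevance AI’s tool format:

```python
# Illustrative tool-input schema; the field names ("name", "type",
# "description") are assumptions for this sketch, not the Relevance AI
# tool format.
TOOL_INPUTS = [
    {"name": "recipient", "type": "string",
     "description": "Email address of the person to contact"},
    {"name": "tags", "type": "array", "description": ""},
]

def audit_input_descriptions(inputs):
    """Return the names of tool inputs whose description is missing or empty."""
    return [
        spec["name"]
        for spec in inputs
        if not spec.get("description", "").strip()
    ]

# "tags" has an empty description, so the agent has no guidance on what
# to pass for it.
print(audit_input_descriptions(TOOL_INPUTS))  # → ['tags']
```

An input that fails this audit is a common reason an agent “guesses” the wrong value: without a description, the model has nothing to anchor its choice on.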
Common tool input issues
Data type mismatches:
- Tool expects a string but receives an array
- Tool expects a number but receives text
- Tool expects JSON but receives plain text
Structure issues:
- Missing required fields
- Incorrect field names
- Wrong data structure
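The mismatches above can be caught early with a small validation pass over the payload the agent sends. This is a minimal sketch; the expected-types mapping and field names are hypothetical examples, not part of the Relevance AI API:

```python
import json

# Hypothetical expected types for one tool's inputs (illustrative only).
EXPECTED = {"email": str, "retries": int, "tags": list}

def check_inputs(payload, expected=EXPECTED):
    """Return a list of mismatches between a payload and the expected types."""
    problems = []
    for field, typ in expected.items():
        if field not in payload:
            problems.append(f"missing required field: {field}")
        elif not isinstance(payload[field], typ):
            problems.append(
                f"{field}: expected {typ.__name__}, got {type(payload[field]).__name__}"
            )
    return problems

def parse_json_input(text):
    """Catch the 'expects JSON but receives plain text' case explicitly."""
    try:
        return json.loads(text), None
    except json.JSONDecodeError as e:
        return None, f"expected JSON, got plain text: {e.msg}"

# A string where an int is expected is the classic agent-to-tool mismatch:
print(check_inputs({"email": "a@b.c", "retries": "3", "tags": []}))
# → ['retries: expected int, got str']
```

Running a check like this on the failing step’s input usually pinpoints whether the problem is the agent’s output or the tool’s configuration.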
3. Agent configuration issues
Check agent settings
- Review agent prompt: Ensure your agent has clear, specific instructions about when and how to use tools
- Verify tool approval settings: Check if tools are set to “Auto Run”, “Approval Required”, or “Let Agent Decide”
- Check escalation settings: Review retry settings and error handling behavior
Common configuration problems
- Vague prompt: Agent doesn’t understand when to use tools
- Wrong approval mode: Tools require approval but agent doesn’t ask
- Missing context: Agent lacks information needed to use tools effectively
- Conflicting settings: Multiple tools with overlapping purposes
4. Integration and API issues
Check integrations
- Go to Integrations & API Keys in Relevance AI
- Verify all required integrations are connected
- Check if API keys are valid and have proper permissions
- Test integration connections
Common integration problems
- Expired API keys: Update or refresh your API keys
- Insufficient permissions: Ensure API keys have the required scopes
- Rate limiting: Check if you’ve hit API rate limits
- Service outages: Verify the external service is operational
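The four problems above map loosely onto HTTP status codes, so a quick diagnosis-and-retry helper can tell them apart. The status-to-cause mapping follows common API conventions and exact codes vary by vendor; the function names are illustrative:

```python
import random
import time

def diagnose(status):
    """Map an HTTP status code to a likely integration problem (conventional codes)."""
    if status == 401:
        return "expired or invalid API key"
    if status == 403:
        return "insufficient permissions (check key scopes)"
    if status == 429:
        return "rate limited"
    if status >= 500:
        return "service outage"
    return "ok"

def call_with_backoff(call, max_attempts=4, base_delay=1.0):
    """Retry a callable returning (status, body), backing off on 429/5xx.

    Exponential backoff with a little jitter; 4xx errors other than 429
    are not retried, since retrying a bad key or missing scope won't help.
    """
    for attempt in range(max_attempts):
        status, body = call()
        if status != 429 and status < 500:
            return status, body
        time.sleep(base_delay * (2 ** attempt) + random.random() * 0.1)
    return status, body
```

Rate limits and outages are worth retrying; an expired key or a missing scope needs a fix in Integrations & API Keys, not a retry.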
Agent not working as expected
If your agent is running but not producing the results you want, this is often a prompt engineering issue.
Understanding agent behavior
Agents are designed to make their own decisions and aren’t end-to-end workflows. They use reasoning to determine the best approach to complete tasks, which means they may not always follow the exact path you expect.
Improving your agent prompt
To get better results from your agent:
- Be as clear and specific as possible in your agent prompt
- Provide detailed instructions about what you want the agent to do
- Include examples of good responses or behaviors
- Specify the format you want outputs in
- Set clear boundaries about what the agent should and shouldn’t do
Key prompt engineering principles
- Be explicit: Don’t assume the agent will understand implicit requirements
- Use clear language: Avoid ambiguous terms and provide specific criteria
- Provide context: Give the agent relevant background information
- Set expectations: Clearly define what success looks like
- Iterate and test: Refine your prompt based on the agent’s performance
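The principles above can be sketched as a simple prompt template that keeps the explicit instructions, context, format, and boundaries in separate sections. The structure is a suggestion, not a Relevance AI requirement, and the section wording is illustrative:

```python
# Illustrative prompt builder applying the principles above: explicit
# role, context, numbered steps, an output format, and clear boundaries.
def build_agent_prompt(role, context, steps, output_format, boundaries):
    """Assemble an agent prompt from clearly separated sections."""
    sections = [
        f"You are {role}.",
        f"Context: {context}",
        "Follow these steps:\n"
        + "\n".join(f"{i + 1}. {s}" for i, s in enumerate(steps)),
        f"Respond in this format: {output_format}",
        "Do NOT:\n" + "\n".join(f"- {b}" for b in boundaries),
    ]
    return "\n\n".join(sections)

prompt = build_agent_prompt(
    role="a support agent for Acme Co",
    context="Acme's returns policy allows refunds within 30 days.",
    steps=["Greet the customer", "Look up the order", "Explain next steps"],
    output_format="a short bulleted summary",
    boundaries=["promise refunds outside policy", "share internal notes"],
)
```

Keeping each concern in its own section makes it easier to iterate: when the agent misbehaves, you can usually trace the failure to one section and tighten just that part.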
For a deeper understanding of when to use AI agents vs workflows, read our co-founder’s comprehensive guide on LinkedIn.
Model performance issues
If your agent isn’t performing well, consider upgrading to a more capable language model.
When to upgrade your model
- Complex reasoning tasks: Advanced models handle multi-step reasoning better
- Tool usage: Some models are better at understanding when and how to use tools
- Large context: If you need to process large amounts of information
- Specialized tasks: Some models excel at specific types of work
Available models and their strengths
OpenAI models:
- Advanced conversational abilities and creative writing
- Broad general knowledge and versatility
- Best for: Versatile agents, customer support, brainstorming
- Learn more: OpenAI LLM models
Google models:
- Strong coding ability and complex task handling
- Excellent at processing multiple file types (PDF, images, audio, video)
- Best for: Software development agents, complex task execution
- Learn more: Google’s Gemini LLM models
Anthropic models:
- Focused on safe, reliable, and ethical AI responses
- Excellent at reasoning and thoughtful tasks
- Best for: Detailed explanations, structured outputs, sensitive industries
- Learn more: Anthropic LLM models
More advanced models are more expensive but often provide significantly better results. Consider your use case and budget when choosing a model.
If you’re experiencing configuration issues where your agent isn’t working as expected, our support team has limited ability to provide guidance to customers on the Team plan or below, as this is considered implementation support, which we only offer to Enterprise customers. If you’re interested in an Enterprise subscription with dedicated implementation support to build agents for your use cases, you can book a demo. You can also reach out to our Partners for implementation support.

