Programmatic GTM is a new way to build GTM agents in Relevance AI. Instead of clicking through a UI, you build, test, and iterate on agents, tools, and workforces directly from your coding environment, using natural language.

What you can do

Once connected, your AI client gets full access to your Relevance AI project. This goes far beyond running existing tools — you can build and manage your entire GTM infrastructure from clients like Claude Code.
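As a sketch, connecting a client such as Claude Code typically means registering the Relevance AI MCP server. The exact command flags and server URL depend on your setup; the URL below is a placeholder, not the real endpoint (see the MCP Server page for the actual one):

```shell
# Sketch only: register a remote MCP server with Claude Code.
# The server URL is a placeholder -- replace it with the endpoint
# from the Relevance AI MCP Server page.
claude mcp add --transport http relevance-ai https://example.com/mcp

# Confirm the server is registered
claude mcp list
```

After registering, the client will prompt you to authenticate against your Relevance AI project before any tools become available.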

Create agents

Design and configure new agents, set their instructions, assign tools, and configure triggers.

Build tools

Create new tools with custom steps, inputs, and outputs.

Set up workforces

Build multi-agent workflows with triggers, conditions, and agent-to-agent handoffs.

Trigger agents

Start conversations with your agents and get responses.

Execute tools

Run any of your Relevance AI tools directly from your AI client.

Troubleshoot agents

Diagnose issues with your agents by reviewing conversation logs and tool outputs.

Refine agents

Iterate on agent instructions, tool configurations, and behaviour based on real results.

Evaluate runs

Review previous agent runs, identify failures, and improve performance over time.

Update configurations

Modify agent instructions, tool settings, and workflow logic.

Use cases

Use Programmatic GTM to create and configure agents end-to-end from your AI client. Describe what you want in natural language and let your AI client handle the setup.

Example prompts:
“Create a new agent called ‘Customer Support Bot’ that answers questions using our FAQ knowledge base. Give it a friendly tone and make sure it escalates to a human when it can’t answer.”
“Build me a BDR agent that qualifies inbound leads from HubSpot. It should check the company size and industry, then send a personalised follow-up email via Gmail.”
“Set up an agent that monitors our Slack support channel, categorises messages by urgency, and assigns them to the right team member.”
“Create an agent with a scheduled trigger that runs every morning, pulls yesterday’s sales data from Google Sheets, and posts a summary to Slack.”

Best practices

1. Plan before you build

Before asking your AI client to create or modify anything, start by having it plan the work first. In Claude Code, you can type /plan to enter plan mode — this lets you and Claude align on the approach before any changes are made.
Instead of jumping straight to “Build me a support agent”, start with “Let’s plan a support agent that handles inbound Slack messages. What tools will it need? What should the escalation flow look like?” — then review the plan and tell Claude to execute it.
2. Review before you approve

When your AI client proposes changes — like updating an agent’s instructions or modifying a tool — read through what it’s about to do before confirming. This is especially important for agents that are already live and handling real conversations.
3. Use conversation history for context

When troubleshooting or refining an agent, ask your AI client to pull recent conversation logs first. This gives it real context to work with rather than guessing.
Prompts like “Look at the last 10 conversations and tell me what’s going wrong” are far more effective than “My agent isn’t working well, fix it”.
4. Test with real scenarios

After building or updating an agent, trigger a test conversation to see how it actually behaves. Don’t just review the configuration — run it. Ask your AI client to “Send a test message to my Support Agent asking about refund policies” and review the response.
5. Work across multiple projects deliberately

If you have separate projects for development and production, connect to both via separate MCP entries. Build and test in your dev project, then once you’re happy, recreate or promote the agent in production. This keeps your live agents safe while you experiment.
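One way to keep the two connections separate is to register the dev and production projects as distinct MCP entries. The sketch below assumes Claude Code's project-level `.mcp.json` format, with placeholder URLs standing in for the per-project endpoints:

```json
{
  "mcpServers": {
    "relevance-dev": {
      "type": "http",
      "url": "https://example.com/mcp/dev-project"
    },
    "relevance-prod": {
      "type": "http",
      "url": "https://example.com/mcp/prod-project"
    }
  }
}
```

With named entries like these, you can tell your AI client explicitly which project to act on ("build this in relevance-dev"), which reduces the chance of accidental changes to live agents.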

Frequently asked questions (FAQs)

What is Programmatic GTM?

Programmatic GTM lets you build, manage, and iterate on your Relevance AI agents and tools directly from AI-powered coding environments like Claude Code, Cursor, or VS Code, instead of using the Relevance AI web interface. You describe what you want in natural language and your AI client handles the rest.
What is the Model Context Protocol (MCP)?

The Model Context Protocol (MCP) is an open standard that allows AI clients to connect to external tools and data sources. It provides a standardized way for AI assistants to access your Relevance AI workspace. Programmatic GTM is built on top of MCP.
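Under the hood, MCP clients and servers exchange JSON-RPC 2.0 messages. A minimal sketch of what a tool-invocation request looks like on the wire (the tool name and arguments here are made up for illustration; the tools actually exposed depend on your project):

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP `tools/call` request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool name and arguments -- your client constructs
# messages like this for you; you never write them by hand.
msg = make_tool_call(1, "trigger_agent", {"agent_id": "support-bot", "message": "Hi"})
print(msg)
```

This is all handled by the MCP client; it is shown here only to make the "open standard" claim concrete.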
Do I need to use Claude Code?

No. Claude Code with the Relevance AI plugin provides the richest experience, but you can use any MCP-compatible client — Claude Desktop, ChatGPT, Cursor, VS Code, Windsurf, and more. See the MCP Server page for all supported clients.
Does Programmatic GTM cost anything?

The MCP server and Claude Code plugin are free. You will be billed for any Relevance AI usage (agent runs, tool executions, etc.) according to your plan.
Can I connect more than one client at the same time?

Yes. You can connect to the Relevance AI MCP server from as many clients as you like simultaneously. Each client authenticates independently.
Why am I being asked to re-authenticate?

Authentication tokens may expire after a period of inactivity. If you are prompted to re-authenticate, simply follow the login flow again.
Which tools and agents can a connected client access?

The MCP server exposes the tools and agents available in the project you authenticated against. To control access, organize your tools across different projects and authenticate each connection to the appropriate project.