When you think of Autodesk, you don't think "marketing automation" first. You think buildings, products, films—the tools used to design and make the physical and digital infrastructure of modern life.
This is a company whose entire brand is built on engineering rigour, precision, and deep institutional expertise. So when Allen Roh, who leads lifecycle and marketing automation at Autodesk, started looking at AI agents, he wasn't just choosing a new tool. He was introducing a new kind of "workforce" into a motion that touches millions of users and significant revenue.
His challenge was simple to describe and hard to execute: How do you get the speed of AI agents without compromising the standard that defines Autodesk?
The Challenge: Value at Scale, Not Spam at Scale
Allen's team owns lifecycle and marketing automation at Autodesk. In practice, that means tracking intent signals across email, chatbots, and webinars; deciding the right message and timing for each person; and collaborating closely with GTM, product marketing, ops, BDR, and sales—all while being measured against shared KPIs.
Their mandate: turn intent into action and deliver value through true personalisation at scale.
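To make that mandate concrete, here is a deliberately simplified sketch of the underlying idea: intent signals flow in from multiple channels, get scored, and the score decides the next message. Every name, threshold, and rule below is hypothetical, not a description of Autodesk's actual system.

```python
from dataclasses import dataclass

# Hypothetical illustration only: none of these names come from Autodesk's stack.
@dataclass
class IntentSignal:
    user_id: str
    source: str      # e.g. "email", "chatbot", "webinar"
    action: str      # e.g. "clicked_pricing", "attended_live"
    weight: float    # how strongly this action signals purchase intent

def next_best_message(signals: list[IntentSignal]) -> str:
    """Toy routing rule: score intent, then pick a message tier."""
    score = sum(s.weight for s in signals)
    if score >= 3.0:
        return "sales_handoff"           # hot: route to a BDR conversation
    if score >= 1.5:
        return "trial_activation_nudge"  # warm: push toward conversion
    return "educational_nurture"         # cold: keep delivering value

signals = [
    IntentSignal("u42", "webinar", "attended_live", 1.0),
    IntentSignal("u42", "email", "clicked_pricing", 2.0),
]
print(next_best_message(signals))  # -> "sales_handoff"
```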
When Allen first encountered AI agents, his reaction was a mix of genuine excitement and deliberate caution. The potential was obvious—work that would have taken hours could now be done in minutes. For a lifecycle team obsessed with timing and relevance, that kind of speed is a real competitive lever.
But the caution was just as strong. Institutional knowledge and industry expertise are core assets at Autodesk—not marketing slogans, but the moat. Anything AI helped generate had to meet those standards.
The rule they landed on: data first, confidence second. Trust in the agents would come only once the results had proved themselves.
The Approach: Start Narrow, Build Deliberately
Autodesk didn't try to automate everything. They started like a good engineering team: with a focused experiment anchored not in a vague objective like "drive more pipeline," but in a specific point in the lifecycle that was both close to revenue and defined by a clear, repeatable job to be done. Think trial-to-conversion journeys, where success is measurable and impact shows up quickly.
From day one, they treated the AI workforce like a real workforce: set clear goals and a North Star, define the job not just the tool, expect a learning period, and iterate based on performance.
What surprised them was how powerful the agents were, and how much process it takes to use that power responsibly. They discovered quickly that agents need to be trained on Autodesk's North Star, tone, and standards, much like a new hire. You can't assume even a good agent "gets it" after one configuration. And as markets and products evolve, prompts age: what worked months ago won't automatically be right today.
Agents weren't "set and forget." They needed to be treated like a product or team member: designed, onboarded, monitored, and updated as the environment changes.
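One way to picture "treat agents like a product" is versioned prompts behind a quality gate: a new version ships only if it clears the standard, and a falling score on fresh test data is the signal to revise. This is a minimal sketch; the class names, fields, and threshold are all assumptions for illustration.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch: versioned prompts plus a quality gate.
# Names, thresholds, and checks are illustrative only.
@dataclass
class PromptVersion:
    version: int
    text: str
    deployed_on: date
    eval_score: float | None = None  # filled in by the review process

@dataclass
class ManagedAgent:
    name: str
    quality_bar: float               # the standard every version must meet
    history: list[PromptVersion] = field(default_factory=list)

    def propose(self, text: str, score: float) -> bool:
        """Accept a new prompt version only if it clears the quality bar."""
        if score < self.quality_bar:
            return False             # below standard: keep the current version
        self.history.append(
            PromptVersion(len(self.history) + 1, text, date.today(), score)
        )
        return True

agent = ManagedAgent(name="trial_nurture_writer", quality_bar=0.8)
agent.propose("v1 prompt with brand voice and North Star...", score=0.85)
# Months later the same prompt may score lower against fresh test data,
# which is the signal to revise it rather than assume it still works.
```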
Turning Institutional Knowledge Into a Modular Knowledge Base
One of the strongest parts of Autodesk's approach is how seriously they treat institutional knowledge. Internal playbooks and industry experience aren't just "background context"—they're the core of the company's value.
Their answer was to modularise it. They broke industry knowledge and best-practice playbooks down into components, turned those into a shared knowledge base agents could draw from, and built it with other teams:
- Product marketing brought research and understanding of user behaviour
- BD and Sales brought frontline feedback on which messages land
- Data and operations teams brought visibility into system health and quality
- Legal ensured everything stayed compliant
The result wasn't a single "magic prompt." It was a structured, shared knowledge layer that agents could use as a reference—and that humans across teams could improve over time.
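A minimal sketch of what such a shared knowledge layer could look like, assuming nothing about Autodesk's actual tooling: each module has an owning team, and an agent assembles only the modules its job needs, so every team improves its own pieces and every agent benefits.

```python
# Hypothetical modular knowledge base: each module has an owning team,
# and an agent composes only what its specific job requires.
KNOWLEDGE_BASE = {
    "persona_architect": {
        "owner": "product_marketing",
        "content": "How architects evaluate tools; common objections...",
    },
    "objection_handling": {
        "owner": "sales",
        "content": "Messages that land in live conversations...",
    },
    "compliance_rules": {
        "owner": "legal",
        "content": "Claims we can and cannot make in outbound copy...",
    },
}

def build_context(module_ids: list[str]) -> str:
    """Compose the modules one agent needs into a single reference context."""
    parts = [KNOWLEDGE_BASE[m]["content"] for m in module_ids]
    return "\n\n".join(parts)

# An email-drafting agent pulls persona + compliance; a BDR-prep agent
# might pull objection handling instead. Teams update their own modules,
# and every agent that references them improves at once.
context = build_context(["persona_architect", "compliance_rules"])
```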
Allen is very clear on this: There is no one prompt that fixes everything. Sometimes the right move is to break a single flow into multiple agents that work together, each handling a specific part of the job so that brand voice, tone, and outcomes stay aligned.
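To illustrate that decomposition (again as a sketch, with a stand-in for whatever model API you use), a single flow can be split into narrow agents that hand off to each other, with a dedicated stage keeping brand voice consistent:

```python
# Hypothetical sketch of splitting one flow into cooperating agents.
# Each stage has one narrow job; call_llm is a stand-in for any model API.
def call_llm(instructions: str, payload: str) -> str:
    # Placeholder so the sketch runs; wire this to your model provider.
    return f"[{instructions}] {payload}"

def research_agent(prospect: str) -> str:
    return call_llm("Summarise this prospect's intent signals.", prospect)

def drafting_agent(summary: str) -> str:
    return call_llm("Draft a 1:1 email from this summary.", summary)

def brand_voice_agent(draft: str) -> str:
    return call_llm("Rewrite to match our tone and standards.", draft)

def run_flow(prospect: str) -> str:
    # A linear hand-off; real flows may branch, retry, or add review gates.
    return brand_voice_agent(drafting_agent(research_agent(prospect)))

print(run_flow("Opened pricing page twice; attended the product webinar"))
```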
The Results: When Data Showed the Lift
So how did they know it was working?
On the demand gen and growth marketing side, once agents began personalising content 1:1 at scale, they started seeing:
- Rising engagement metrics
- Pipeline uplift linked directly to journeys the agents were influencing
- Qualitative feedback from BDR and sales teams that conversations were smoother because prospects arrived better educated and nurtured
On the operations side, agents drove efficiency gains that freed people up for more meaningful work rather than repetitive admin.
That combination mattered: the data showed the lift, and the humans closest to customers felt the difference in real conversations.
But even then, they didn't declare themselves "done." As Allen points out, prompts and patterns age. They reached a point where blind test results met their standards, but over time those same prompts would start to lag. That was the moment they leaned into a permanent mindset: treat agents like a product. You never really "finish" them.
From One Working Model to Many Agents
Once they had a working model, they moved to the next phase: scaling what works. Autodesk has multiple product lines and domains—different kinds of design and making across architecture, engineering, construction, and operations.
The question became: how do we extract the successful parts of this first model and apply them to other products, without diluting quality?
Today, the experimentation hasn't stopped—if anything, there's more of it. The difference is that they now scale the working models and experiment deliberately on the edges, rather than treating the entire landscape as one big test.
Collaboration Was Non-Negotiable
No single team could have done this alone. To make agents part of the GTM motion, Autodesk had to build real interlock across the organization—BD and Sales providing live feedback, product marketing shaping messaging, data and ops ensuring systems held up, and legal keeping everything compliant.
Allen is very clear: agents are only as good as the humans guiding them. Once the data showed real lift, sharing that performance with other teams made collaboration easier. People could see the potential in their own metrics.
Allen's Advice: Start Narrow, Stay Disciplined
Asked what he would say to other leaders considering agents, Allen keeps coming back to core principles:
- Start narrow and close to revenue. Pick a single point in the lifecycle with one clear job to be done
- Define a clear North Star. "Increase pipeline" is too broad. "Increase trial-to-conversion for this specific product via this journey" is something you can design and measure (see the sketch after this list)
- Let data, not enthusiasm, drive the next step. Look at performance and listen to BD, sales, and CS feedback
- Treat agents like employees. Train them, keep them close to humans who can course-correct, and expect to maintain them
- There is no magic prompt. Modularise your knowledge and be prepared to break flows into multiple agents
- Bring other teams in early. Brand, product marketing, sales, data, ops, legal—they all need to be part of the journey
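As a worked example of the North Star point above: a goal like trial-to-conversion for one product through one journey reduces to a number you can track week over week. The data, product name, and journey name below are all invented for illustration.

```python
# Hypothetical illustration of a measurable North Star: trial-to-conversion
# for one product through one journey, rather than "more pipeline".
trials = [
    {"product": "product_x", "journey": "trial_nurture_v2", "converted": True},
    {"product": "product_x", "journey": "trial_nurture_v2", "converted": False},
    {"product": "product_x", "journey": "trial_nurture_v2", "converted": True},
]

cohort = [t for t in trials
          if t["product"] == "product_x" and t["journey"] == "trial_nurture_v2"]
rate = sum(t["converted"] for t in cohort) / len(cohort)
print(f"trial-to-conversion: {rate:.0%}")  # -> 67%
```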
His final point is simple: the potential is massive, and the work is exciting, not scary—as long as you start focused, stay disciplined, and let the data speak.
Why This Matters
Autodesk didn't adopt AI agents because it was fashionable. They did it because, used correctly, agents help them personalise at a level that would be impossible manually, move faster without diluting their standards, and free humans to do more meaningful, higher-value work.
In a world where many enterprises still see AI as either a risk to be contained or a toy to be demoed, Autodesk is treating it as what it will inevitably become for serious companies: a governed, evolving workforce—built on the same institutional knowledge that made them successful in the first place.
That's the future of work being built right now.
By Daniel Vassilev, Co-Founder & Co-CEO, Relevance AI
Watch the full interview premiering at Agents@Work, December 10, 2025.