In this interview, Allen Roh, Senior Marketing Manager, Growth and Activation at Autodesk, explains how his team adopted AI agents across Autodesk’s go-to-market engine. His experience offers a practical look at how large, complex organizations can begin using AI agents in a thoughtful, measurable, and scalable way.
This guide summarises Allen’s most useful frameworks and lessons — designed to help other teams understand where to start, how to assess value, and what structures are needed to adopt AI agents effectively.
1. Start With Curiosity, Balance It With Caution
Allen’s initial reaction to AI agents was enthusiasm for their speed, paired with caution around quality and brand consistency.
Key principles Autodesk maintained from the start
- AI’s speed is genuinely transformative
AI agents can produce high-quality, personalised content in seconds — work that traditionally required hours of manual effort.
- But speed cannot override quality
Autodesk’s credibility depends on precise, expert-backed information. Any AI-generated content must uphold the same standards as human-created messaging.
- Validate before trusting
The team did not assume AI outputs were correct. They reviewed early outputs closely, used results to guide next steps, and scaled only when data showed the agent met Autodesk’s expectations.
2. Begin With Experimentation — But With Structure
Autodesk didn’t aim for broad adoption at the start. Instead, the team used experimentation to learn quickly and identify where agents could have the strongest impact.
Their approach
- Start with a narrow, high-value customer moment.
- Identify a single task that AI can meaningfully accelerate.
- Run controlled experiments.
- Use real results — not assumptions — to guide next steps.
3. Define a Clear North Star Metric
A precise, actionable metric ensured the team remained focused. Instead of broad goals like “increase pipeline,” Allen chose specific conversion moments tied to revenue.
Effective examples
- “Increase trial-to-conversion for Product X”
- “Improve engagement for Segment Y”
This clarity made it easier to measure lift and evaluate whether the agent was succeeding.
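To make this concrete, here is a minimal sketch of what a metric like “trial-to-conversion for Product X” can look like once it is pinned to data. The product and segment names, the record structure, and the sample data are hypothetical placeholders, not Autodesk’s actual instrumentation.

```python
# Minimal sketch: a North Star metric tied to one conversion moment,
# here "trial-to-paid conversion for Product X, Segment Y".
# Product, segment, and event records are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class TrialRecord:
    product: str
    segment: str
    converted: bool   # did the trial become a paid seat?

def north_star(records: list[TrialRecord], product: str, segment: str) -> float:
    """Trial-to-conversion rate for one product/segment slice."""
    cohort = [r for r in records if r.product == product and r.segment == segment]
    if not cohort:
        return 0.0
    return sum(r.converted for r in cohort) / len(cohort)

# Usage: a narrow, revenue-adjacent target rather than a broad "increase pipeline" goal.
records = [
    TrialRecord("Product X", "Segment Y", True),
    TrialRecord("Product X", "Segment Y", False),
    TrialRecord("Product X", "Segment Z", True),
]
print(f"Trial-to-conversion (Product X / Segment Y): "
      f"{north_star(records, 'Product X', 'Segment Y'):.0%}")
```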
4. Treat AI Agents Like a Workforce — Not a Tool
Allen emphasises that agents must be trained, managed, and evaluated much like human employees.
This includes
- Teaching agents Autodesk’s standards and tone.
- Reviewing and improving outputs during early phases.
- Updating logic and content as products and markets evolve.
5. Modularise Institutional Knowledge
To ensure agents operate consistently, Autodesk broke its knowledge base into reusable components.
What this involved
- Documenting best practices and playbooks.
- Turning them into modular building blocks.
- Creating a shared knowledge base accessible to all agents.
- Incorporating insights from product marketing, BD, sales, and operations.
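One way to picture these modular building blocks is a shared store of named knowledge components that any agent assembles into its working context. The module names, contents, and helper function below are hypothetical illustrations of that pattern, not Autodesk’s actual system.

```python
# Minimal sketch: institutional knowledge broken into named, reusable modules
# that every agent draws from, so messaging stays consistent across use cases.
# Module names and contents are hypothetical placeholders.

KNOWLEDGE_BASE = {
    "brand_tone": "Confident, precise, expert-backed; no unverified claims.",
    "trial_playbook": "Lead with the user's workflow; tie value to time saved.",
    "segment_y_insights": "Segment Y cares most about collaboration features.",
    "compliance_rules": "No pricing promises; route legal questions to sales.",
}

def build_agent_context(module_names: list[str]) -> str:
    """Assemble an agent's prompt context from shared knowledge modules."""
    missing = [m for m in module_names if m not in KNOWLEDGE_BASE]
    if missing:
        raise KeyError(f"Unknown knowledge modules: {missing}")
    return "\n\n".join(f"[{name}]\n{KNOWLEDGE_BASE[name]}" for name in module_names)

# A trial-nurture agent and an onboarding agent can reuse the same blocks,
# which is what keeps their outputs aligned.
print(build_agent_context(["brand_tone", "trial_playbook", "segment_y_insights"]))
```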
6. Let Data Drive Every Next Step
Allen repeatedly highlights the principle: “data first, confidence second.”
What they monitored
- Engagement lift following agent-personalised content
- BD and sales feedback
- Pipeline and readiness signals
- Operational efficiency metrics
If data showed lift, they scaled. If not, they iterated.
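That scale-or-iterate loop can be expressed as a simple rule over experiment data. The sketch below compares an agent-assisted cohort against a control cohort, checks the lift with a one-sided two-proportion z-test, and recommends scaling only when both a minimum-lift threshold and a significance threshold are met; the thresholds and counts are illustrative assumptions, not Autodesk figures.

```python
# Minimal sketch of the "data first" loop: scale only when the agent-assisted
# cohort beats control by a meaningful, statistically solid margin; otherwise iterate.
# Thresholds and counts are illustrative assumptions.
from math import sqrt
from statistics import NormalDist

def scale_or_iterate(control_trials: int, control_conv: int,
                     agent_trials: int, agent_conv: int,
                     min_lift: float = 0.05, alpha: float = 0.05) -> str:
    p_control = control_conv / control_trials
    p_agent = agent_conv / agent_trials
    lift = (p_agent - p_control) / p_control

    # One-sided two-proportion z-test: is the agent cohort genuinely better?
    p_pool = (control_conv + agent_conv) / (control_trials + agent_trials)
    se = sqrt(p_pool * (1 - p_pool) * (1 / control_trials + 1 / agent_trials))
    p_value = 1 - NormalDist().cdf((p_agent - p_control) / se)

    if lift >= min_lift and p_value < alpha:
        return f"scale (lift {lift:+.1%}, p={p_value:.3f})"
    return f"iterate (lift {lift:+.1%}, p={p_value:.3f})"

# Hypothetical readout from one controlled experiment.
print(scale_or_iterate(control_trials=4_000, control_conv=320,
                       agent_trials=4_000, agent_conv=368))
```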
7. Embrace Continuous Training and Iteration
There is no final version of an AI agent. Prompts age, markets change, and models need ongoing refinement.
Autodesk’s view
- Agents must evolve just as GTM playbooks evolve.
- Feedback loops should be built into the process.
- Regular updates are required to maintain quality and relevance.
8. Maintain Tight Cross-Functional Alignment
Autodesk’s success depended heavily on collaboration across the organization.
Key contributors
- Product Marketing
- Business Development and Sales
- Data teams
- Legal and Compliance
- GTM leadership
Shared goals and shared analytics made it easier for teams to support and trust the AI program.
9. Scaling Beyond the First Use Case
Once a single agent delivered measurable value, Autodesk expanded horizontally.
How they scaled
- Reused successful components across new products.
- Conducted targeted experiments only where needed.
- Continued validating each new use case with data.
- Avoided expanding until a working model was proven.
10. Advice for Leaders Beginning Their AI Journey
Allen’s guidance for organizations just getting started:
- Begin with a small, revenue-adjacent use case.
- Let data guide adoption and next steps.
- Involve cross-functional partners early.
- Modularise key knowledge to ensure consistency.
- Stay focused—avoid trying to automate everything at once.
Autodesk’s experience shows that effective AI adoption comes from clear goals, disciplined experimentation, cross-functional alignment, and a commitment to ongoing refinement. With a focused starting point and a data-driven approach, any organization can begin building an AI workforce that delivers meaningful impact.