Over the past few weeks, I’ve been tasked with developing a strategy to track and measure AI usage within my engineering team. As an AI-curious leader, I expected to focus on tools and dashboards—but what I quickly realized is that successful measurement starts with asking the right questions.
Along the way, I leaned on a few resources that helped shape my thinking:
- Pete Hodgson's blog post, Leading Your Engineers to an AI-Assisted Future: a practical framework for rolling out AI in phases, starting with experimentation, then adoption, then impact.
- The Pragmatic Engineer blog post, A New Way to Measure Developer Productivity: an interview with the creators of the DevEx framework.
- The Pragmatic Engineer YouTube interview, Measuring the Impact of AI on Software Engineering with DX CTO Laura Tacho: insight into measuring developer productivity and how AI fits into the mix.
These perspectives, combined with what I’ve observed within my own team, have led me to a few early takeaways about measuring and leading AI adoption in engineering.
1. Know What to Measure and What Not to Measure
Before tracking anything, it’s crucial to define why you are measuring in the first place. My goal is for AI adoption to accelerate my team’s impact, not to create metrics for their own sake or add unnecessary pressure.
Some tempting but misleading metrics include:
- Lines of code generated: Quantity doesn’t equal quality, and AI can produce a lot of code that never makes it to production.
- PR acceptance rates: Engineers using AI may iterate differently, and rejection or rework doesn’t necessarily indicate a negative outcome.
These metrics raise the question: if output increases but maintainability, reliability, and quality decline, what did we really gain? They may have some value for tracking adoption, but they fall short when it comes to capturing AI’s overall impact on your team and company. Instead, focus on metrics tied to actual outcomes: efficiency, quality, and developer experience.
2. Use a Combination of Self-Reported and System Metrics
No single metric tells the full story of AI’s impact. From my discussions and reading, I’ve gathered that combining:
- Self-reported metrics: Time saved, confidence in AI-assisted work, and perceived productivity gains.
- System or workflow metrics: Cycle time, bug rates, time to review, time-to-merge, and time to market.
…creates a much more accurate picture.
Self-reports capture perception and adoption sentiment, while workflow metrics show actual impact. Together, they help teams understand whether AI is really moving the needle; a rough sketch of what combining the two might look like follows below. DX covers this breakdown in more detail in one of the free guides on their website.
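To make this concrete, here is a minimal sketch of pulling one system metric and one self-reported metric into the same place. It is not any particular vendor's approach, just an illustration: it assumes a GitHub repository ("my-org/my-repo"), a `GITHUB_TOKEN` environment variable, and a hypothetical survey export `ai_survey.csv` with a `self_reported_hours_saved_per_week` column. Your own tooling, repo names, and survey fields will differ.

```python
# Sketch only: combine a workflow metric (time-to-merge) with a self-reported
# metric (hours saved per week). Requires the third-party `requests` package.
# The repo name, token, and survey file/column are assumptions for illustration.
import csv
import os
import statistics
from datetime import datetime

import requests

API = "https://api.github.com/repos/my-org/my-repo/pulls"  # hypothetical repo
headers = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

# System metric: median time-to-merge (in hours) over recently closed PRs.
prs = requests.get(API, params={"state": "closed", "per_page": 100}, headers=headers).json()
hours_to_merge = []
for pr in prs:
    if pr.get("merged_at"):  # skip PRs that were closed without merging
        created = datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
        merged = datetime.fromisoformat(pr["merged_at"].replace("Z", "+00:00"))
        hours_to_merge.append((merged - created).total_seconds() / 3600)

# Self-reported metric: average hours saved per week from a team survey export.
with open("ai_survey.csv", newline="") as f:
    saved = [float(row["self_reported_hours_saved_per_week"]) for row in csv.DictReader(f)]

# Assumes at least one merged PR and one survey response; a real version would guard for empties.
print(f"Median time-to-merge: {statistics.median(hours_to_merge):.1f}h across {len(hours_to_merge)} PRs")
print(f"Avg self-reported hours saved/week: {statistics.mean(saved):.1f} across {len(saved)} responses")
```

The specific script matters far less than the habit: trend both numbers side by side over time, so you can see whether adoption sentiment and delivery metrics are actually moving together.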
3. Guide Teams Through Phased AI Adoption
Introducing AI to your engineers isn’t just a tooling change—it’s a culture shift. I see it happening in three phases:
- Experimentation: Encourage safe, low-stakes exploration of AI tools.
- Adoption: Incorporate AI into workflows where it clearly adds value.
- Impact: Systematically measure and optimize how AI accelerates delivery and improves outcomes.
Skipping phases often leads to frustration or misuse. Teams need time to adapt, and leaders need to create space for that learning curve.
4. Build a Culture of Learning and Experimentation
AI will continue to evolve, and so will its role on engineering teams. Organizational support—through time, training, and encouragement—is crucial.
Engineers are more likely to embrace AI if:
- They can experiment without fear of failure
- They see clear examples of success
- Leadership recognizes and champions creative adoption
This culture not only drives adoption but also helps prevent misuse, like overproducing features or bloating codebases just because “AI made it easy.”
5. Focus on Real Impact, Not Just Output
AI is great at increasing raw output, but more code or more features doesn’t automatically mean a better product.
Two things I’m keeping in mind:
- Time to market matters: AI should help teams deliver value faster.
- Feature bloat is a risk: Just because you can ship more features doesn’t mean you should.
The goal is better outcomes, not just more artifacts.
6. Recognize Where AI Excels
Even in the early days, some patterns are emerging. After discussing with my team how AI is adding value to their workflows, I’ve found it shines in areas like:
- Refactoring and simplifying existing code
- Generating tests
- Writing or improving documentation
- Debugging and troubleshooting
- Composing complex queries
- Learning new programming techniques on the fly
By aligning AI use with these strengths, teams can capture immediate value while building confidence for deeper adoption.
Final Thoughts
Measuring AI usage isn’t about counting outputs; it’s about understanding how AI changes the way we work and whether it delivers meaningful outcomes. For my team, the journey is just beginning, but these early insights are shaping how we’ll track success, foster adoption, and avoid common pitfalls.
If you’re leading your team into an AI-assisted future, start with clear intentions, enable thoughtful experimentation, and focus on real impact over vanity metrics.