Measuring AI Code Drift: Utilizing GitHub Metrics for LLM Impact Assessment

In my exploration of AI coding tools, I've found that while they promise productivity gains, they can also introduce challenges in the software delivery lifecycle. Measuring code quality, batch size, and delivery, rather than just adoption rates, gives far more useful insight. I share five key metrics derived from GitHub's API that can help teams identify problematic development patterns early on.

Technology · AI · Software Development

The Pitfalls of Unstructured Code Generation

Unstructured code generation in large applications can lead to a net decrease in developer productivity, even when developers think they're going faster.

It increases the amount of duplicate code: blocks that are largely identical but slightly changed. It reduces the amount of true refactoring, a hallmark of issue-free code. It creates more lines to do the same thing, which means more places for things to go wrong, more areas to maintain and understand, and an ever harder job for the coding agents managing their context windows.
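One crude way to see this kind of copy/paste-with-tweaks duplication is to hash fixed-size windows of normalized lines and look for hashes that appear in more than one place. This is only a sketch of the general clone-detection idea, not how GitClear or any specific tool works, and the helper names are illustrative:

```python
import hashlib
from collections import defaultdict

def normalize(line: str) -> str:
    """Strip all whitespace so trivially reformatted copies still match."""
    return "".join(line.split())

def duplicate_blocks(sources: dict, window: int = 6) -> dict:
    """Hash every run of `window` consecutive non-blank, normalized lines.

    `sources` maps a file name to its text. Any hash that shows up in two
    or more locations marks a likely copy/paste clone.
    """
    seen = defaultdict(list)
    for name, text in sources.items():
        lines = [normalize(l) for l in text.splitlines() if l.strip()]
        for i in range(len(lines) - window + 1):
            digest = hashlib.sha1(
                "\n".join(lines[i:i + window]).encode()
            ).hexdigest()
            seen[digest].append((name, i + 1))  # 1-based window start
    return {h: locs for h, locs in seen.items() if len(locs) > 1}
```

Real clone detectors also normalize identifiers so renamed copies still match; whitespace normalization alone only catches the laziest duplication, which is often the kind code generation produces.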

That’s one reason why I focus my code sessions in tight Plan-Do-Check-Act loops with retrospection.

Thanks to Google, GitClear, and others for surfacing this data.

Technology · Code Quality · Developer Productivity
Illusions of AI Code Generation Productivity

Your team’s AI code generation productivity gains may be an illusion.

Industry studies show that perceived gains from code generation can mask a net slowdown in delivery. I've built GitHub Actions that measure five key metrics:

  • Large Commit Percentage (Target: <20%, Found: 46%)
  • Sprawling Commit Percentage (Target: <10%, Found: 20%)
  • Test-First Discipline Rate (Found: 58%, good given the refactoring)
  • Average Files Changed per Commit (Target: <5, Found: 6.4)
  • Average Lines Changed per Commit (Target: <100, Found: 9,053!!)

The article explains the metrics, shows results from my project, and links to the GitHub Actions code.
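As a rough sketch of how commit-size metrics like these can be computed, the snippet below parses `git log --numstat` output. The function names and the 100-line threshold are illustrative, mirroring the targets above; this is not the actual Actions code from the article:

```python
import subprocess
from dataclasses import dataclass

@dataclass
class CommitStats:
    files: int   # files touched in the commit
    lines: int   # insertions plus deletions

def parse_numstat(log_text: str) -> list:
    """Parse `git log --numstat --format=%H` output into per-commit stats."""
    commits, current = [], None
    for line in log_text.splitlines():
        if not line.strip():
            continue
        parts = line.split("\t")
        if len(parts) == 3 and current is not None:
            # "<added>\t<deleted>\t<path>"; git prints "-" for binary files
            added, deleted, _ = parts
            current.files += 1
            current.lines += (int(added) if added.isdigit() else 0) + \
                             (int(deleted) if deleted.isdigit() else 0)
        else:
            # a commit hash line starts a new commit
            current = CommitStats(0, 0)
            commits.append(current)
    return commits

def large_commit_pct(commits: list, line_threshold: int = 100) -> float:
    """Percentage of commits changing more than `line_threshold` lines."""
    if not commits:
        return 0.0
    return 100.0 * sum(c.lines > line_threshold for c in commits) / len(commits)

def repo_stats(repo_path: str = ".") -> list:
    """Collect numstat for the whole history of a local clone."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--numstat", "--format=%H"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_numstat(out)
```

Average files and lines per commit fall out of the same parse (`sum(c.files for c in commits) / len(commits)`, and likewise for lines); only the test-first metric needs more than numstat, since it depends on whether test files appear before or alongside the code they cover.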

#AI #SoftwareDevelopment #TechnicalDebt #EngineeringLeadership #CodeQuality

Technology · AI · Software Development

AI's Impact on Productivity and Delivery

Key Google DORA 2024 findings relating to AI:

AI-driven increases in individual productivity are not yet leading to improvements in overall delivery.

The researchers surmise this is because of increased batch size, since they measure code quality as improving. But other research indicates code quality issues: less true refactoring and more copy/paste-with-changes code cloning.

AI increases team efficiency but not yet product success.

Researchers point to a correlation between high-performing teams and business success and expect outcomes to improve as delivery efficiency increases. However, they are not measuring how well the increased code production aligns with customer needs. This raises the possibility that the developer-as-bottleneck assumption is flawed: if product discovery and customer validation are the real constraints, improved velocity could just accelerate delivery of the wrong things.

Technology · AI · Productivity

Human Capital and AI Collaboration

AI adoption in IT depends upon “Human Capital”, an awful term for the people who operate the tools. Are they engaged? Are they capable? Are they building fluency?

There just aren’t many use cases yet for fully autonomous agents. Valuable deployments entail human experts collaborating and guiding the AI.

This requires investment in tools and learning time. It requires leadership goals and employee autonomy to deliver them. It requires measuring not just usage but customer outcomes.

Leadership · AI · Human Capital

The Nature of Truth and Deception

Remember what Harry Frankfurt wrote… bull—-t isn’t lying. It is prioritizing the effect of what you say over its truth.

To put it another way, AI doesn’t lie. People do. But both bull—t.

Leadership · Communication · Truth

Navigating AI's Uncertain Future

I get leading with excitement. But, “AI is days from perfect so hop on or get out of the f—ing way!” is exhausting. As is the indifferent glee of celebrating everyone (else’s) jobs replaced by automation.

I know dwelling in uncertainty is unsatisfying. Describing complexity is hard. Contemplating ethical choices is unpopular. But these are where the actual risks and opportunities lie.

The future is n days away. What do we do in the real world with consideration for real people here and now?

Leadership · AI · Automation

Empowering Top Performers in AI Adoption

In the rush to adopt AI, don’t hobble your best performers.

The way we institute code generation solutions in dev orgs could slow meaningful adoption and prevent high-performing engineers from increasing their engagement, performance, and output while retaining quality and accountability.

This could happen from over-prescriptive definitions of “what works” or an attempt to provide guard rails for less capable engineers.

Marcus Buckingham describes this dynamic in “First, Break All the Rules.” Gallup research found that management structures designed to avoid failure from poor performers often undermine top performers. Hospitals rotate nursing shifts to prevent emotional attachment and burnout in struggling nurses, but this prevents the best nurses from forming personal connections that drive exceptional patient care.

Just as shift rotations prevent both poor nurses from burning out and excellent ones from excelling, AI policies and practices risk similar trade-offs: budget caps that prevent iterative refinement, agentic tools that hide real-time learnings from human operators, automated workflows that discourage frequent intervention, or transcription requirements that interfere with fluid problem-solving.

We’re still in early adoption. Please let your most capable people take risks, experiment, and learn. Enable them to master the tooling on their own terms or you will drag them down to the mean.

Leadership · AI Adoption

AI Coding Tools and Recursive Loops

The behavior of AI coding tools changes fast. After a routine update, my IDE ran into context size issues yesterday; I don't know if that was the IDE or the API. Today I saw something new: Claude Sonnet got caught in a recursive loop reviewing its actions against the coding plan.

“You’re absolutely right - I got caught in a recursive planning loop where I kept creating new plans instead of executing the existing solid foundation. This is a fascinating AI behavior pattern where meta-work becomes a substitute for actual work.”

Its own suggested remedy, a guard to identify and break out of the loop, didn't fix it. I started a new thread.
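For illustration, a guard like the one it proposed might track recent outputs and flag near-duplicates, the tell-tale of re-planning instead of acting. This sketch is hypothetical, not the model's actual remedy, and as the anecdote shows, detecting the loop doesn't guarantee escaping it:

```python
from difflib import SequenceMatcher

class LoopGuard:
    """Flag when an agent's last few outputs are near-duplicates of each
    other, suggesting meta-work has replaced actual work."""

    def __init__(self, window: int = 3, similarity: float = 0.9):
        self.window = window          # how many outputs must look alike
        self.similarity = similarity  # ratio above which two outputs "match"
        self.history = []

    def check(self, output: str) -> bool:
        """Record `output`; return True if a loop is suspected."""
        prior = self.history[-(self.window - 1):]
        looping = len(prior) == self.window - 1 and all(
            SequenceMatcher(None, output, prev).ratio() >= self.similarity
            for prev in prior
        )
        self.history.append(output)
        return looping
```

A caller would feed each agent turn through `check()` and interrupt (or, as I did, start a fresh thread) once it returns True.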

Technology · AI · Coding

Optimizing Code Generation Cycles

I'm trying to complete my code gen cycles on a story, or a task within a story, within 1-3 hours. This entails a full PDCA loop of analysis, planning, TDD, completion check, and retro. It works much better for both the model and me.

Agile · Productivity

Navigating Career Uncertainty

It is a scary time to be dependent upon others for a livelihood, particularly if you are early or late in your career. That’s a fact of our moment.

In whatever place you find yourself, you have every right to your feelings about it. Even when people and inner voices gaslight and marginalize you.

My hope is that you stay hopeful. Remain curious and humble. Continue to learn and explore. Find joy in what work you have or are searching to find.

Culture · Career · Hope

Embracing AI as a Partner in Development

Be a software developer they can’t replace:

AI automation changes how we go about our work. It will change who stays and who joins us. But it will not change the essential challenge or whether an expert builder remains relevant.

  • Embrace continuous learning as your primary skill.
  • Take time to question assumptions until you understand and believe in the beneficial outcome you’re trying to achieve.
  • Learn from and advocate for the broadest community of users and stakeholders.
  • Use your hard-won experience of what works, what doesn’t, and what breaks.
  • Take accountability for what you build and its consequences.
  • Do this generously and courageously, even when it’s hard.

The work has always been about more than writing code. It’s about translating human needs into working systems. It’s about understanding what can go wrong and taking accountability when it does.

AI can help us build faster. It can suggest solutions and catch errors. But it can’t replace informed human judgment that asks, “What are we really trying to accomplish? Who might be harmed by this? What happens when this scales? Is this actually solving the right problem?”

The future belongs to developers who embrace both the tools and the responsibility. Those who see AI as a partner in building better systems, not a replacement for human wisdom. Those who understand that our expertise isn’t about knowing syntax—it’s about bridging the gap between human needs and technical possibility.

The question isn’t whether AI will change our industry. It’s whether we’ll rise to meet the challenge of building technology that truly serves humanity.

Technology · AI · Software Development