
Bridging the Gap in AI Adoption

My background in media prepared me for the fact that the people who buy AI automation for tech teams are different from the people who need to use it. Success lies in helping staff in ways they themselves see as useful while delivering the gains the bosses need to justify the spend.

Leadership · AI · automation

Modernizing Legacy .NET with GraphRAG

Here’s what I’m coding on full time (when I’m not consulting with clients): a GraphRAG to support Legacy .NET modernization.

Code — particularly in strongly typed, compilable languages — forms a natural network structure. We analyze those relationships in C# or VB application code and SQL database code, store them in a database as a knowledge graph, then use graph queries to trace dependencies or similarity search to identify patterns. Agent responses get this context via the Model Context Protocol (MCP).
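
To make the extraction step concrete, here is a minimal sketch, not the actual pipeline, of how such relationship edges can be pulled from C# source using the Roslyn package (Microsoft.CodeAnalysis.CSharp); the edge format is an illustrative assumption, and graph storage and MCP wiring are omitted.

    // Minimal sketch: print (containing type) -CALLS-> (invoked expression) edges from one C# file.
    // Requires the Microsoft.CodeAnalysis.CSharp NuGet package; the edge format is illustrative.
    using System;
    using System.Linq;
    using Microsoft.CodeAnalysis.CSharp;
    using Microsoft.CodeAnalysis.CSharp.Syntax;

    class EdgeExtractor
    {
        static void Main(string[] args)
        {
            var source = System.IO.File.ReadAllText(args[0]);
            var root = CSharpSyntaxTree.ParseText(source).GetRoot();

            foreach (var type in root.DescendantNodes().OfType<TypeDeclarationSyntax>())
            {
                foreach (var call in type.DescendantNodes().OfType<InvocationExpressionSyntax>())
                {
                    // Each pair becomes an edge candidate for the knowledge graph.
                    Console.WriteLine($"{type.Identifier.Text} -CALLS-> {call.Expression}");
                }
            }
        }
    }

Stored in the graph database, pairs like these become the nodes and edges that the dependency queries and similarity search operate over.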

All this to provide better context and more precise agent responses for analysts, architects, and engineers looking to understand, explain, and rewrite their legacy .NET applications.

Technology · Legacy Modernization · Graph Technology

Embracing AI with Accountability

People can embrace the societal benefits of AI-assisted software, earn compensation using these tools, and insist on human accountability and the primacy of human expertise, all while remaining skeptical of over-promises, wary of unethical practices, and advocating against harm.

Technology · AI · Accountability

Refactoring and Domain Model Improvement

One sign your refactoring is improving your domain model is when you find your linter flagging a whole bunch of now unused namespace import statements after the changes.
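
For instance, a contrived sketch with invented names: once data access and serialization concerns move out of a domain class during refactoring, the imports they required are left dangling and the linter flags them.

    // Contrived example: after the refactoring, nothing in this file references
    // System.Data or System.Text.Json any more, so the analyzer flags both usings as unused.
    using System.Data;        // was needed for ad-hoc SQL access, now moved to a repository
    using System.Text.Json;   // was needed for serialization, now moved to an adapter

    namespace Shop.Domain
    {
        public class OrderService
        {
            // Only domain logic remains.
            public decimal Total(decimal subtotal, decimal taxRate) => subtotal * (1 + taxRate);
        }
    }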

Technology · refactoring · domain model

Empowering Software Engineers in AI Automation

I believe that as AI automation becomes integral to software development, human accountability is more crucial than ever. I advocate for a collaborative approach where skilled engineers work alongside automation to enhance learning and minimize errors. We must remain vigilant as we innovate, ensuring that software's societal impact is carefully considered. Empowering engineers with authority is essential for responsible progress.

Technology · AI · Software Engineering

Importance of Accurate Domain Models

With Code Generation, an accurate Domain Model and naming scheme are even more important. If you think humans get confused by namespace, object, and method names that don’t accurately describe what they do, try dealing with the wrong paths an AI agent veers down because of them.
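
A contrived C# sketch, with all names invented, of the kind of mismatch in question: the first method’s name promises a read but its body performs a write, which is exactly the sort of wrong turn an agent will follow.

    using System.Collections.Generic;

    public class Customer
    {
        public string Name { get; set; } = "";
    }

    public class CustomerService
    {
        private readonly List<Customer> _customers = new List<Customer>();

        // Misleading: reads like a lookup, but it creates and stores a record.
        // An agent asked to "fetch customers without side effects" will happily call it.
        public Customer GetCustomer(string name)
        {
            var customer = new Customer { Name = name };
            _customers.Add(customer);   // hidden write behind a "Get" name
            return customer;
        }

        // Accurate: the name states the side effect, so neither humans nor agents are misled.
        public Customer RegisterCustomer(string name)
        {
            var customer = new Customer { Name = name };
            _customers.Add(customer);
            return customer;
        }
    }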

Technology · Code Generation · Domain Model

Implementing a Coding Prompt Adventure

I used this coding prompt to build out a new implementation of an existing interface. It worked well, creating a “choose your own adventure” experience where the coding agent frequently stopped to ask me which direction to take (whether to patch the class or change the test expectation, which of two approaches to take to a change, or whether to insert more diagnostic console writes).

“You can run the @ScoredOutputNodeTree.cs and use the failures to drive changes into @ScoredOutputNodeTree.cs. You are only allowed to make changes to those two files. After any change, run the entire fixture to ensure the net number of red tests decreases. If you have any issues with the edit tool, want to make changes beyond the scope of those two files, or are creating regressions, stop and let me know how I can help.”

Technology · coding · software development

Leveraging Tools with AI Support

When existing tools can handle part of a task, use the tools. Employ generative AI to fill practical gaps in existing tools where its capabilities are worth the unpredictability. Do so with clear guardrails and human oversight. Keep expert humans responsible for how the work is performed and accountable for the outcome.

Leadership · AI · Productivity

Empowering Teams Through Generosity

I’ve worked with technical people who are both exceptionally capable and generous givers. Through their own performance and their actions towards others, they help co-workers improve their craft and instill a drive to push for practices and conditions that enable teams to flourish.

Leadership · leadership · teamwork

Keeping AI Focused in Code Generation

I shared my approach to ensuring my code generation agent stays focused and delivers quality results while creating a database adapter. By defining clear interfaces, establishing robust integration tests, and setting boundaries for the agent’s modifications, I was able to maintain control and prevent regressions. This method allowed me to leverage AI effectively while ensuring high-quality outcomes. Have you found similar patterns that work?
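
Roughly, and with every name below invented for illustration, the setup looks like this: a narrow interface the agent must implement, plus xUnit integration tests it is not allowed to edit, which together act as the acceptance gate.

    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Xunit;

    // 1. A clear interface: the agent implements this and nothing else.
    public interface IDocumentStore
    {
        Task SaveAsync(string id, string json);
        Task<string> LoadAsync(string id);
    }

    // 2. Integration tests define "done"; the agent's instructions forbid editing this file.
    public class DocumentStoreTests
    {
        [Fact]
        public async Task Saved_documents_can_be_loaded_back()
        {
            IDocumentStore store = CreateAdapterUnderTest();
            await store.SaveAsync("42", "{\"name\":\"example\"}");
            Assert.Equal("{\"name\":\"example\"}", await store.LoadAsync("42"));
        }

        // Swap this stand-in for the real database adapter the agent is building.
        private static IDocumentStore CreateAdapterUnderTest() => new InMemoryDocumentStore();
    }

    // Stand-in implementation so the sketch runs; the real adapter targets the database.
    public class InMemoryDocumentStore : IDocumentStore
    {
        private readonly Dictionary<string, string> _docs = new Dictionary<string, string>();

        public Task SaveAsync(string id, string json)
        {
            _docs[id] = json;
            return Task.CompletedTask;
        }

        public Task<string> LoadAsync(string id) => Task.FromResult(_docs[id]);
    }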

Technology · AI · Code Generation

Key Insights on Code Generation with AI

To summarize what’s clicked for me this week working with a code gen AI, a good set of instructions likely includes:

Incremental Verification - use tests to coerce the agent into making smaller changes and verifying each one before making additional changes, whether truly test-first or test-after.

Explicit boundaries - which files the agent can change and which files it must not change in order to implement the feature. Prohibit unmanaged changes to files that might introduce regressions. The agent will stop and describe what it wants to do instead (very powerful).

Stop conditions - conditions under which the agent should stop attempting to make changes and ask for help, e.g. the edit tool has misapplied a change and altered the code files in ways the agent didn’t intend, or it has introduced a regression in the test suite.

If you have other types of guards you add to your instructions, please let me know.

Technology · AI · Code Generation

Challenges of Unsanctioned AI Use

Over-reliance on and unsupervised use of agentic AI create waste and quality issues:

Here’s a small example: I need to change a method signature on an interface and update all implementations and tests accordingly (a sketch of this kind of change follows the comparison below).

  • A refactoring tool uses static analysis to identify the dependencies and make the changes pretty much instantaneously — fast, comprehensive, and it marks areas of code that need additional changes.

  • A code gen agent changes the interface, searches for places that might require a change, compiles, finds errors, fixes them, and repeats. Each prompt/response cycle racks up increasingly expensive compute costs, and in each iteration the agent can go off on a tangent and make other changes while it’s there, planting hidden defects.
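
For concreteness, a sketch of the kind of signature change in question, with all names invented: a refactoring tool propagates the new parameter to every implementation and call site in one static-analysis pass, while a code gen agent rediscovers each affected site through compile-and-fix loops.

    // Before the change, the interface method was: decimal Total(Order order);
    // Adding a parameter forces a matching change in every implementation and every test.

    public class Order
    {
        public decimal Subtotal { get; set; }
    }

    public interface IInvoiceCalculator
    {
        decimal Total(Order order, decimal taxRate);   // the changed signature
    }

    public class DefaultInvoiceCalculator : IInvoiceCalculator
    {
        public decimal Total(Order order, decimal taxRate) => order.Subtotal * (1 + taxRate);
    }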

What are you doing to use the tools appropriately and retain control over outcomes while benefiting from the speed gains?

Technology · AI · Software Development