
Keeping AI Focused in Code Generation

Technology · AI, Code Generation, Software Development · Jun 2, 2025

Here’s the approach I used to get my code gen agent to stay focused and deliver quality results when creating an adapter for a new database engine.

THE GENERAL PATTERN:

  1. Define the interface.

  2. Create solid integration test coverage for the existing implementation.

  3. Make that integration test inherit from a base class and move the setup and verification for each test into the base. The base class should handle the heavy lifting: preparing test data, invoking the interface, and defining the expected outcomes.

  4. Instruct the agent to create a new test fixture that inherits from the base class and to test-drive the new implementation by making each test green, one at a time.

  5. Instruct it to work on only one test at a time and not to proceed to the next test until every test in the fixture is green. (Incremental verification)

  6. Instruct it not to change any code except the new implementation and its test fixture, i.e., it may not modify the base class or any other existing application code. (Explicit boundary)

  7. Tell it that if it runs into any blocker (for example, the edit tool mis-applying its changes), it should stop and let me fix it. (Stop condition)

  8. Watch the prompt chain as it progresses and stop it whenever you see it drifting from its mission, making assumptions you disagree with, or breaking the rules.
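The core of steps 2–4 can be sketched with Python's unittest. This is a minimal illustration of the base-class shape, not my actual code; every class and method name here is invented for the sketch:

```python
import unittest


class SaveWidgetTests:
    """Base class: owns the test data, the interface invocation, and the
    expected outcomes. Subclasses only wire up an adapter and a way to
    read back what was saved. Because this class does not extend
    unittest.TestCase, the runner won't collect it directly."""

    def make_adapter(self):
        raise NotImplementedError

    def fetch_saved(self, widget_id):
        raise NotImplementedError  # implementation-specific lookup

    def test_saves_widget_name(self):
        adapter = self.make_adapter()
        adapter.save({"id": 1, "name": "gear"})
        self.assertEqual(self.fetch_saved(1)["name"], "gear")


class InMemoryAdapter:
    """Stand-in for the 'existing implementation' in this sketch."""

    def __init__(self):
        self.rows = {}

    def save(self, widget):
        self.rows[widget["id"]] = widget


class InMemorySaveWidgetTests(SaveWidgetTests, unittest.TestCase):
    """Concrete fixture: the only pieces that vary per implementation."""

    def make_adapter(self):
        self._adapter = InMemoryAdapter()
        return self._adapter

    def fetch_saved(self, widget_id):
        return self._adapter.rows[widget_id]
```

The agent's job in step 4 is then narrow: write one more subclass like `InMemorySaveWidgetTests` against the new backend, without touching the base.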

This gave the agent freedom within constraints to make changes without introducing regressions.

NOTE: You still have to review the code, particularly the base class, because the code gen agent will take shortcuts such as matching against string literals instead of referencing those values from mappings outside its immediate context. So this is not a panacea for unattended code gen.
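Here is the kind of shortcut to watch for, in a hypothetical Python fragment (the mapping and function names are mine, purely for illustration):

```python
# Shared mapping defined elsewhere in the application:
NODE_LABELS = {"function": "FunctionSymbol", "class": "ClassSymbol"}


def expected_label_shortcut(kind):
    # The shortcut an agent may take in the base class: hard-coded
    # literals that silently drift if NODE_LABELS ever changes.
    return "FunctionSymbol" if kind == "function" else "ClassSymbol"


def expected_label(kind):
    # What review should push for: reference the shared mapping, so the
    # tests and the application cannot disagree about the labels.
    return NODE_LABELS[kind]
```

Both versions pass today, which is exactly why this only surfaces in review, not in the test run.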

MY EXAMPLE:

I needed to add support for persisting an abstract syntax tree to a new graph database. My integration test coverage consisted of 40+ tests for a data model that maps all the relevant symbols of the tree onto a database schema supporting symbol nodes, symbol sources, versioning, and the relationships between symbols.

My adapter is a write-only interface, so the base class prepares the data objects to be saved, invokes the adapter method, and declares assertions for how they should look in the database. The database-specific queries that retrieve the saved results remain in the inheriting test class.
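That division of labor looks roughly like the following Python sketch. The symbol shape, the adapter method, and the fake store are all assumptions for illustration; in the real fixture, `query_symbol` would issue a database-specific read-back query instead of a dict lookup:

```python
import unittest


class PersistSymbolTests:
    """Write-only contract: the base prepares the symbol, invokes the
    adapter, and states the expected outcome. Only retrieval is left
    to the inheriting class."""

    def make_adapter(self):
        raise NotImplementedError

    def query_symbol(self, name):
        # Database-specific read-back lives in the inheriting test
        # class, e.g. a graph query against the new engine.
        raise NotImplementedError

    def test_persists_symbol_version(self):
        self.make_adapter().save_symbol(
            {"name": "parse_expr", "kind": "function", "version": 3})
        self.assertEqual(self.query_symbol("parse_expr")["version"], 3)


class FakeGraphAdapter:
    """Stand-in for the new graph-database adapter in this sketch."""

    def __init__(self, store):
        self.store = store

    def save_symbol(self, symbol):
        self.store[symbol["name"]] = symbol


class FakeGraphPersistSymbolTests(PersistSymbolTests, unittest.TestCase):
    def setUp(self):
        self.store = {}

    def make_adapter(self):
        return FakeGraphAdapter(self.store)

    def query_symbol(self, name):
        return self.store[name]
```

Keeping the retrieval abstract is what lets the same 40+ tests verify both the existing implementation and the new one without duplication.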

Have you found patterns that work for keeping AI focused on the task?

Originally published on LinkedIn on Jun 2, 2025. Enhanced for this site with expanded insights and additional resources.