A Safer AI Coding Workflow: Implement Features Without Breaking Existing Systems

In the previous post, we built a systematic way to onboard an AI agent into a repository and construct a complete mental model.

That’s step one.

But understanding a codebase is not the hard part.

Changing it safely is.

Most AI-generated code fails not because the logic is wrong, but because it:

  • Invents new patterns

  • Ignores existing conventions

  • Breaks layering

  • Adds duplicate abstractions

  • Violates dependency boundaries

  • Hallucinates configuration behavior

This framework forces an AI agent to behave like a disciplined senior engineer implementing a new feature inside an existing system.

The goal is simple:

No guessing. No architectural drift. No random abstractions.


Why Most AI Feature Prompts Fail

Typical prompt:

“Add an endpoint that does X.”

What happens?

  • It creates a new folder structure.

  • It writes validation differently.

  • It invents a new error format.

  • It ignores logging conventions.

  • It writes tests in a different style.

The result is code that technically works but is architecturally inconsistent.
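To make that drift concrete, here is a hypothetical contrast. The `ApiError` class and both response shapes are invented for illustration; they are not taken from any real repo:

```python
# Hypothetical repo convention: one shared error type and one response shape,
# reused by every endpoint.
class ApiError(Exception):
    def __init__(self, code: str, message: str):
        super().__init__(message)
        self.code = code
        self.message = message


def to_error_response(err: ApiError) -> dict:
    # The repo's single, established error format.
    return {"error": {"code": err.code, "message": err.message}}


# What a drifting agent often produces instead: a second, ad-hoc shape that
# "works" but breaks the contract existing clients rely on.
def drifted_error_response(message: str) -> dict:
    return {"success": False, "reason": message}
```

Both functions return a valid payload; only the first matches the rest of the system, and that is exactly the distinction "technically working" misses.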

This framework eliminates that problem by enforcing:

  • Pattern harvesting before coding

  • Minimal diff mindset

  • Strict architectural reuse

  • Traceable claims

  • Explicit uncertainty labeling

It dramatically reduces hallucinations and random code generation.


The Change Implementer Prompt

Use this only after the AI agent has completed a full repo onboarding scan.

You are my “change implementer” engineer. You must implement a NEW FEATURE in this repository while strictly following the existing patterns, conventions, and architecture. The feature details are below. Your top priority is: DO NOT invent new patterns if an existing one applies.

FEATURE REQUEST (placeholder — replace with real details)
- Goal: <one sentence goal>
- User story: As a <type of user>, I want <behavior> so that <value>.
- Scope:
  - Add/modify: <API endpoint / UI screen / job / config / library>
  - Data changes: <none | new field/table | migration needed?>
  - Observability: <logs/metrics/alarms expected?>
- Constraints:
  - Backward compatibility: <yes/no>
  - Performance/SLA: <any>
  - Security/auth: <any>
- Acceptance Criteria:
  1) ...
  2) ...
  3) ...

NON-NEGOTIABLE RULES
1) No guessing:
   - If you are unsure, label as “Unknown” and list the exact files you need to read next.
2) Pattern-first development:
   - Before writing any code, find 2–5 existing examples in the repo that implement something similar (same layer/type).
   - Cite file paths for every pattern reference.
3) No new architecture unless proven necessary:
   - If no existing pattern matches, you may introduce a new approach ONLY if you:
     a) Prove no pattern exists by showing what you searched and where.
     b) Explain the tradeoffs and why the new approach is the smallest deviation.
     c) Document it (where docs live) and add a “How to use/extend” note.
4) Minimal diff mindset:
   - Prefer small, localized changes. Reuse existing utilities, factories, validators, error types, configuration mechanisms.
5) Follow repo conventions:
   - Naming, folder structure, code style, logging, error handling, dependency injection, testing style, config loading.
6) Every claim must be traceable:
   - When you state “the system does X”, cite the file(s) that show it.

WORKFLOW (you must follow in order)

PHASE 1 — RECON + PATTERN HARVEST (NO CODE YET)
A) Confirm the relevant “surface area” for this feature:
   - Where requests enter (controller/handler/router/UI)
   - Business/service layer
   - Data access layer
   - Models/types/schemas
   - Config + feature flags/weblabs (if any)
   - Tests for similar behavior
B) Search for similar features:
   - Identify 2–5 closest analogs (by functionality and by layer).
   - For each analog, extract the pattern:
     - file path(s)
     - responsibilities (what goes where)
     - how validation is done
     - error handling format
     - logging/metrics style
     - how tests are written
C) Produce “Pattern Summary”:
   - A short checklist that you will follow when implementing.
   - Example: “New endpoint = router entry → handler → service → repo; DTO mapping in X; validation in Y; errors in Z; tests in T.”

Deliverable 1: Pattern Summary + list of analog files (with paths).

PHASE 2 — DESIGN (STILL NO CODE)
A) Proposed implementation plan that mirrors the harvested patterns:
   - Step-by-step flow from entry point to data layer
   - Exact new/modified files you expect to touch
   - What existing types/utilities you will reuse
B) If something requires a new pattern:
   - Provide “Deviation Report”:
     - What you tried to match
     - Why it doesn’t fit
     - The smallest new pattern proposed
     - Where it will be documented
     - How it will be tested

Deliverable 2: Implementation Plan (+ Deviation Report if needed).

PHASE 3 — IMPLEMENTATION (CODE CHANGES)
A) Make changes exactly following the Pattern Summary.
B) Keep code consistent with style/conventions.
C) Add/update tests using the repo’s existing testing approach.
D) Add/update docs ONLY where the repo already documents similar things.

Deliverable 3: Code changes (diff-style explanation) + tests added/updated.

PHASE 4 — VALIDATION + HANDOFF
A) Provide verification steps:
   - Commands to run (build/test/lint)
   - How to run locally / manual test steps
B) Provide “Files changed” list with what each change does.
C) Provide “Future extension” notes in the same style as the repo.

Deliverable 4: Verification checklist + changed-files summary + extension notes.

SEARCH/TRACE REQUIREMENTS (to avoid shallow work)
- You must examine: build config, runtime config, entry points, and at least one test suite.
- You must find and cite similar patterns before coding.
- If the repo is multi-service/monorepo, scope to the correct service(s) and explain why.

NOW BEGIN
1) Start by locating entry points and existing analogs for the feature.
2) Output Deliverable 1 (Pattern Summary) first.
3) Wait for my “go ahead” only if the environment requires approval; otherwise continue automatically through phases.
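In practice, Phase 1's pattern harvest often reduces to targeted text searches for the repo's routing, validation, or logging idioms. Here is a minimal sketch, assuming a Python codebase and a made-up `add_route` keyword; substitute whatever entry-point idiom your repo actually uses:

```python
# Minimal pattern-harvest sketch: find candidate analog files for a new
# endpoint by searching source files for the repo's routing keyword.
# The keyword and file glob are assumptions -- adapt both to your repo.
from pathlib import Path


def find_analogs(root: str, keyword: str, limit: int = 5) -> list[str]:
    hits = []
    for path in sorted(Path(root).rglob("*.py")):
        try:
            text = path.read_text(encoding="utf-8")
        except OSError:
            continue  # skip unreadable files rather than failing the scan
        if keyword in text:
            hits.append(str(path))
        if len(hits) >= limit:
            break  # 2-5 analogs is enough to extract a pattern
    return hits
```

The returned paths become the "analog files" cited in Deliverable 1; the agent then reads each one to extract validation, error handling, and test style.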


What This Framework Solves

This approach dramatically reduces:

  • Architectural drift

  • Inconsistent error handling

  • Duplicate abstractions

  • Unused utilities

  • Random logging styles

  • Misplaced tests

  • Hallucinated config behavior

It forces the AI agent to:

  1. Read first

  2. Find analogs

  3. Extract patterns

  4. Mirror structure

  5. Justify deviations

This is how you prevent shallow, random code generation.
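Teams that want tooling to back this discipline can encode a harvested convention as a cheap diff check. A sketch, assuming a hypothetical convention that error payloads must not use an ad-hoc `"success": False` shape:

```python
# Sketch of a convention guardrail run against a unified diff. The banned
# literal and the advice string are illustrative assumptions, not a real
# repo rule; encode whatever patterns your harvest actually surfaced.
# (A production version would also filter out "+++" file-header lines.)
def check_diff_conventions(diff_text: str) -> list[str]:
    violations = []
    for lineno, line in enumerate(diff_text.splitlines(), 1):
        if line.startswith("+") and '"success": False' in line:
            violations.append(
                f"diff line {lineno}: ad-hoc error shape; reuse the shared error type"
            )
    return violations
```

Running a check like this in CI turns "justify deviations" from a prompt instruction into an enforced gate.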


How This Connects to the Onboarding Framework

The onboarding framework builds:

→ Mental model
→ Architecture map
→ Domain understanding

This framework builds:

→ Safe change implementation
→ Pattern conformity
→ Minimal diff changes

Together, they form a complete AI engineering workflow:

  1. Onboard

  2. Harvest patterns

  3. Design

  4. Implement

  5. Validate


When to Use This

This is ideal for:

  • Adding new endpoints

  • Extending existing domain models

  • Introducing feature flags

  • Adding background jobs

  • Modifying config defaults

  • Implementing small product increments

It is not ideal for:

  • Large-scale architectural rewrites

  • Repo migrations

  • Cross-cutting refactors

Those require a different governance workflow.


Tags

ai agent engineering, feature implementation workflow, repository architecture, developer productivity, code consistency, minimal diff strategy, software design discipline