Aggressive Proximity Patterns: Teaching AI Agents to Write Production-Ready Code

This content originally appeared on DEV Community and was authored by Ian Hogers

Your AI Code Sucks. Here’s How to Fix It

The brutal truth: Your AI writes code that works today and breaks tomorrow. Nobody knows why it does what it does. Three weeks later, you’re debugging magic numbers and wondering if that Redis choice was intentional or random.

Let’s fix this mess.

The Real Problem: AI Code Anxiety

You know that feeling. The AI just generated 200 lines of seemingly perfect code. It compiles. Tests pass. But your stomach is in knots because:

You have no idea if it’s actually correct.

You’re staring at it, trying to reverse-engineer the logic:

  • “Is this Redis choice deliberate or did it just copy from Stack Overflow?”
  • “Will this break under load? Who knows?”
  • “Is this secure? Time to spend 3 hours verifying…”
  • “Why 100ms timeout? Is that tested or arbitrary?”

This anxiety is real. You’re not stupid. The code gives you zero confidence because it lacks the one thing that builds trust: transparent reasoning.

You’re basically accepting code from a brilliant intern who refuses to explain anything. Sure, it might be genius. Or it might explode in production. You won’t know until 3am when your phone rings.

This isn’t AI’s fault. You’re asking it to write code without teaching it to explain its thinking. That’s like hiring a developer and telling them “never document anything, never explain your decisions.”

The Fix: Put Context Where It Belongs

Stop writing documentation. Start embedding decisions directly in the code. Every choice, every trade-off, every rejected alternative – RIGHT THERE where the decision lives.

No separate docs. No wiki pages. No “see documentation” comments. Just pure, brutal transparency at the point of impact.

Here’s a real example from an agent orchestrator I was working on (from spiral-core):

/// ⏱ TASK POLLING INTERVAL: Balance between responsiveness and CPU usage
/// Why: 100ms provides near-real-time feel without excessive CPU overhead
/// Alternative: 50ms (rejected: 2x CPU usage), 500ms (rejected: sluggish UX)
/// Calculation: ~10 polls/second = reasonable for human-perceived responsiveness
pub const TASK_POLL_INTERVAL_MS: u64 = 100;

/// 🚦 MAX QUEUE SIZE: Memory protection for 8GB VPS deployment
/// Why: 1000 tasks ≈ 1MB RAM (1KB avg task) provides safety margin
/// Calculation: 8GB total - 2GB OS - 4GB app = 2GB buffer; 2GB ÷ 1KB = 2M tasks theoretical
/// Conservative: 1K tasks allows for larger tasks and system overhead
/// Alternative: 10K (rejected: potential OOM), 100 (rejected: too restrictive)
pub const MAX_QUEUE_SIZE: usize = 1000;

See that? Real numbers. Real trade-offs. Real alternatives that were actually considered and rejected for specific reasons. Not because someone said so, but because the math doesn’t lie.

The Five Rules. No Exceptions

1. Document Decisions Where You Make Them

Stop pretending you’ll update that wiki. You won’t. Write it where it lives:

# 🧠 DECISION: Using Redis for session storage
# Why: Need atomic operations for concurrent session updates
# Alternative: In-memory store - rejected due to multi-server deployment
# Impact: Adds Redis dependency but ensures session consistency
session_store = RedisSessionStore(connection_pool)

2. Tests Next to Code, Always

Your test folder structure is a lie. Put tests where they belong:

src/
├── user_service.py
├── user_service_test.py    # RIGHT HERE, not in /tests/unit/services/user/
├── auth/
│   ├── authenticator.rs
│   └── authenticator_test.rs  # Colocated, not lost in /tests/
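
Most runners already cooperate with this layout. pytest discovers *_test.py files wherever they live, and Jest finds colocated *.test.js files by default; if you want to be explicit, its real testMatch option pins it down. A minimal sketch, assuming a src/ layout:

// jest.config.js
// 🧠 DECISION: Discover tests colocated with their implementation
// Why: Keeps every test one file away from the code it verifies
// Alternative: /tests/ mirror tree - rejected: drifts out of sync with src/
module.exports = {
  // Match *.test.js and *_test.js files sitting next to source files
  testMatch: ["**/src/**/*.test.js", "**/src/**/*_test.js"],
};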

3. Abstract on Third Use, Not First

Premature abstraction kills more projects than no abstraction. Count to three:

// Strike 1: UserController validation (leave it)
// Strike 2: AdminController validation (still leave it)
// Strike 3: ApiController validation → NOW extract to ValidationUtils
const ValidationUtils = require("./utils/validation");
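
When strike three hits, the extraction itself should carry the decision trail. A minimal sketch of what that extracted module might look like (validateEmail and the email pattern are hypothetical stand-ins for whatever validation you actually duplicated):

// utils/validation.js
// 🧠 DECISION: Extracted after the third duplicate (User/Admin/Api controllers)
// Why: Three call sites proved the abstraction is real, not speculative
// Alternative: extract on first use - rejected: the API shape was still unknown
const EMAIL_PATTERN = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

function validateEmail(input) {
  // True only for a single plausible-looking address; no exotic RFC cases
  return typeof input === "string" && EMAIL_PATTERN.test(input);
}

module.exports = { validateEmail };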

4. Error Context Where Errors Happen

Stop logging “error occurred”. Tell me what actually broke:

if err != nil {
    // 🚨 ERROR CONTEXT: Database connection during user auth
    // Common causes:
    // 1. Connection pool exhausted (check MAX_CONNECTIONS)
    // 2. Database server down (check health endpoint)
    // 3. Network partition (check firewall rules)
    // Debug: Enable DB_DEBUG=true for connection logging
    return fmt.Errorf("auth failed at DB layer: %w", err)
}

5. Security Reasoning at Security Points

Don’t hide your threat model in documentation. Put it where you enforce it:

// 🛡 SECURITY: Rate limiting per IP to prevent brute force
// Threat: Password spraying attacks
// Mitigation: 5 attempts per 15min window per IP
// Bypass: Internal IPs in TRUSTED_NETWORKS env var
// Monitoring: Alerts sent after 3 failed attempts
const rateLimiter = new RateLimiter({
  windowMs: 15 * 60 * 1000,
  max: 5,
  skipSuccessfulRequests: true,
});

Stop the Anxiety: Make Your AI Explain Itself

The Prompt That Changes Everything

Here’s the exact text that turns anxiety-inducing AI code into code you can actually trust:

Generate code following aggressive proximity patterns:

- Add 🧠 DECISION comments explaining non-obvious choices
- Include WHY in comments, not WHAT
- Document alternatives considered and why rejected
- Colocate tests with implementation
- Provide rich error context at error sites
- Mark security decisions with 🛡 SECURITY comments
- Document performance choices with ⚡ PERFORMANCE comments

Example format for decisions:
// 🧠 DECISION: [What you decided]
// Why: [Specific reason]
// Alternative: [What else you considered] - rejected: [Why rejected]
// Impact: [What this means for the system]

The Difference Is Night and Day

BEFORE (Anxiety-inducing mystery code):

const BATCH_SIZE = 50;

async function processBatch(items) {
  const chunks = [];
  for (let i = 0; i < items.length; i += BATCH_SIZE) {
    chunks.push(items.slice(i, i + BATCH_SIZE));
  }
  return await Promise.all(chunks.map(processChunk));
}

AFTER (Code you can actually trust):

// ⚡ PERFORMANCE: Batch size for async processing
// Why: 50 items balances memory usage vs parallelization overhead
// Measured: 50 items = ~200ms per batch on typical 2-core container
// Alternative: 100 (rejected: memory spikes >500MB), 10 (rejected: too many promises)
// Monitoring: Track via BATCH_PROCESS_TIME metric
const BATCH_SIZE = 50;

async function processBatch(items) {
  // 🧠 DECISION: Using Promise.all for parallel processing
  // Why: All chunks independent, fail-fast behavior desired
  // Alternative: Sequential processing (rejected: 10x slower)
  // Alternative: Promise.allSettled (rejected: need immediate failure on error)
  const chunks = [];
  for (let i = 0; i < items.length; i += BATCH_SIZE) {
    chunks.push(items.slice(i, i + BATCH_SIZE));
  }

  return await Promise.all(chunks.map(processChunk));
}

// processBatch.test.js would be RIGHT HERE in the same directory
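
And since the rule is colocation, here is roughly what that neighboring file could contain. A hedged sketch assuming Jest, that processBatch is exported, and that processChunk lives in a sibling module (none of which the snippet above shows):

// processBatch.test.js: colocated with processBatch.js, not in /tests/
// Assumption: processBatch.js exports processBatch and imports
// processChunk from a sibling ./processChunk module
const { processBatch } = require("./processBatch");

jest.mock("./processChunk", () => ({
  // Stub: report each chunk's size so we can assert on the split
  processChunk: jest.fn(async (chunk) => chunk.length),
}));

test("splits 120 items into chunks of 50, 50, and 20", async () => {
  const items = Array.from({ length: 120 }, (_, i) => i);
  // 120 items / BATCH_SIZE of 50 → batches of 50, 50, 20
  await expect(processBatch(items)).resolves.toEqual([50, 50, 20]);
});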

Measuring Success: The Proximity Score

Your code gets scored on five weighted dimensions (see the sketch after this list):

  • Decision Coverage (25%): Are significant decisions documented?
  • Test Colocation (25%): Tests next to implementation?
  • Abstraction Timing (20%): Following the 3-strikes rule?
  • Comment Quality (15%): Context-rich, not redundant?
  • File Organization (15%): Related code physically close?
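
If you want the rubric to be concrete, it is just a weighted sum. A toy sketch (the dimension names mirror the list above; how you measure each 0-1 value is up to you):

// 🧠 DECISION: Proximity score as a weighted sum (weights total 1.0)
// Why: Mirrors the rubric above; each dimension is measured as 0-1
const WEIGHTS = {
  decisionCoverage: 0.25,
  testColocation: 0.25,
  abstractionTiming: 0.2,
  commentQuality: 0.15,
  fileOrganization: 0.15,
};

function proximityScore(scores) {
  // Missing dimensions count as 0: unmeasured means unscored
  return Object.entries(WEIGHTS).reduce(
    (total, [dimension, weight]) => total + weight * (scores[dimension] ?? 0),
    0
  );
}

// Example: perfect everywhere except file organization ≈ 0.85
console.log(
  proximityScore({
    decisionCoverage: 1.0,
    testColocation: 1.0,
    abstractionTiming: 1.0,
    commentQuality: 1.0,
    fileOrganization: 0.0,
  })
);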

What Actually Changes When You Do This

  1. Less AI Code Anxiety: You know exactly why every decision was made
  2. Faster Debugging: Context is right there, no archaeology needed
  3. New Devs (AI or Human) Productive in Days, Not Weeks: Everything explains itself
  4. Code Reviews Become Discussions, Not Interrogations: Decisions pre-explained
  5. Sleep Better: Your AI junior’s code won’t surprise you at 3am

Just Start. Today. Now

  1. Pick your worst piece of AI code: The one that scares you
  2. Add ONE decision comment: Explain the scariest part
  3. Update your AI prompt: Copy the enhancement above
  4. Watch your anxiety disappear: Seriously, it’s that simple

Critical Patterns for High-Stakes Code

For Critical Systems

# 🧠 DECISION: Using database transactions for payment processing
# Why: Atomicity required - either all operations succeed or all rollback
# Risk: Without transaction - partial payment state on failure
# Compliance: PCI-DSS requirement 10.2.1 for atomic financial operations
# Rollback trigger: Any exception or payment gateway timeout >30s
with db.transaction() as txn:
    ...  # payment logic (elided)

For Performance-Critical Code

// ⚡ PERFORMANCE: Pre-allocated Vec with known capacity
// Why: Avoids 3-4 reallocations during typical 1000-item processing
// Measured: 15% faster for p99 latency (120ms → 102ms)
// Memory: 1000 * size_of::<Item>() = ~8KB upfront allocation
// Alternative: Vec::new() (rejected: reallocation overhead)
let mut results = Vec::with_capacity(1000);

For Security Boundaries

// 🛡 SECURITY: Input sanitization for SQL injection prevention
// Threat: User input in 'search' param could contain SQL
// Mitigation: Parameterized query + allowlist validation
// Pattern: ^[a-zA-Z0-9\s_-]{1,100}$ (hyphen last so it is a literal, not a range)
// Logging: All rejected inputs logged to security_events
// Testing: See sql_injection_test.go for attack vectors
if !isValidSearchTerm(userInput) {
    logSecurityEvent("invalid_search_term", userInput)
    return ErrInvalidInput
}

The Mindset That Changes Everything

Here’s the truth: Documentation is dead. It was dead the moment you wrote it. Nobody updates it, nobody reads it, and it’s always wrong.

Your code is the only truth. Make it tell the whole truth.

When you embed decisions in code, you’re not documenting – you’re having a conversation with every future developer or AI agent who touches this code. Including yourself at 3am when production is down.

Do This Right Now

  1. Open your scariest AI-generated file: You know which one
  2. Add ONE proximity comment: Explain the decision that confuses you most
  3. Feel the relief: That anxiety? It’s already fading
  4. Never go back: Once you see the difference, you can’t unsee it

Join Us in Killing Documentation Forever

We’re done with lies. Done with outdated wikis. Done with “see documentation” comments that lead nowhere.

Code should explain itself. At the point of decision. No excuses.

Contribute your patterns. Share what works. Help us make AI code trustworthy by default.

Final truth: You’re either writing code that explains itself, or you’re creating tomorrow’s technical debt.

Stop the anxiety. Start the proximity. Make your AI code trustworthy.

github.com/aggressive-proximity-patterns

Show me your before/after. Prove me wrong. Or prove me right.

Let’s Be Honest

  1. What AI-generated code keeps you up at night?
  2. How many times have you rewritten AI code because you didn’t trust it?
  3. What would change if every line of code explained itself?

