Why I’m Ditching OpenCode and Moving to Gemini CLI



This content originally appeared on DEV Community and was authored by jxlee007

I’ve been experimenting with OpenCode as my in-terminal AI assistant: loading workflows, driving rapid prototyping, and integrating Agent OS standards. But at this early, from-scratch phase of my React Native + Expo + Convex build, I need stability, simplicity, and full control over every prompt. That’s why I’m pivoting to Gemini CLI. Below, I’ll explain the rationale, outline the workflow adjustments, and share a roadmap for a smooth transition.

🚧 The Limits of OpenCode Today

  1. Rapidly Evolving, But Unstable

    • OpenCode v0.3.x still lacks a hosted UI, robust CI integration, and reliable multi-agent coordination.
    • Terminal-only interface makes context management opaque when sessions grow long.
  2. Auto-Injected Context vs. Explicit Control

    • OpenCode’s magic (auto-loading instructions from opencode.json) is convenient, but brittle when configs change.
    • Agent OS files can get lost in auto-compaction, leading to unpredictable prompt behavior.
  3. Model Integration Inconsistency

    • Support for Claude, Gemini, and local LLMs is spotty: some models work, others break.
    • At this stage I need guaranteed access to Gemini’s advanced capabilities.

🔁 What Changes with Gemini CLI

| Aspect | OpenCode | Gemini CLI |
| --- | --- | --- |
| Invocation | /build, /plan, /execute | gemini run "<instruction>" |
| Context Loading | Automatic via opencode.json | Manual: pipe or embed files |
| Session Memory | In-session persistence | Stateless, per-call only |
| Orchestration | Built-in modes & YAML config | Shell scripts + manual prompts |
| File Edits | Agent writes directly | You confirm and paste outputs |
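
In practice, statelessness means each call must carry its own context and you review whatever comes back yourself. A minimal sketch of one self-contained call (the gemini run form mirrors the table above; the prompt and the output filename are placeholders of my own):

     # Each call is self-contained: pipe in the context it needs and capture the output for review.
     cat .agent-os/standards/*.md \
       | gemini run "Draft a code-review checklist from these standards" \
       > checklist-draft.md   # review and commit manually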

🛠 Adapting Agent OS for Gemini CLI

  1. Flatten Each Instruction

    • Ensure every .md in .agent-os/instructions/core/ is self-contained (e.g. no cross-links); a quick way to check this is sketched after this list.
    • Example: execute-task.md starts with “Step 1: Load project context…” and ends with “Step N: Commit changes.”
  2. Create Helper Scripts

    • scripts/ai/analyze.sh
     #!/usr/bin/env bash
     # Pipe the analyze-product instruction into a fresh Gemini call.
     cat .agent-os/instructions/core/analyze-product.md \
       | gemini run "Analyze my React Native + Convex codebase and draft Phase 0 roadmap"

    • scripts/ai/spec.sh
     #!/usr/bin/env bash
     # Pipe the create-spec instruction in, passing the feature name as $1.
     cat .agent-os/instructions/core/create-spec.md \
       | gemini run "Create a spec for $1"

  3. Pipe Multiple Context Files

    • When you need standards + instructions in one go:
     cat .agent-os/standards/*.md \
         .agent-os/instructions/core/execute-task.md \
       | gemini run "Implement password-reset screen using Expo + Convex"

  4. Embed Prompts Directly

    • For smaller tasks, skip cat:
     gemini run "You are an AI developer. Follow execute-task.md to build login screen."
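
To sanity-check point 1 above, something like this can flag core instructions that still cross-reference other files (the @-style reference pattern is an assumption about how cross-links look; adjust it to whatever convention the files actually use):

     # Flag any core instruction that still references another file instead of inlining it.
     # The '@\.agent-os/' pattern is an assumed cross-link style; adjust to match your files.
     grep -rl '@\.agent-os/' .agent-os/instructions/core/ \
       && echo "These files still cross-reference others" \
       || echo "All core instructions look self-contained"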

📈 Workflow Roadmap

Phase 0: Project Analysis

     ./scripts/ai/analyze.sh

Generate a “Phase 0” roadmap, capture what’s built, and outline next high-level goals.

Phase 1: Spec & Task Breakdown

     ./scripts/ai/spec.sh "login flow"

Produce a detailed spec with user stories, success criteria, and sub-tasks.

Phase 2: Task Execution

     ./scripts/ai/execute.sh "login flow"

Implement components, Convex handlers, and tests, then commit according to standards.
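
execute.sh isn’t spelled out in the helper-scripts section above; a sketch in the same style as analyze.sh and spec.sh (the prompt wording is my own) might look like:

     #!/usr/bin/env bash
     # scripts/ai/execute.sh: same pattern as the other helpers.
     # Pipes the standards plus the execute-task instruction, naming the feature via $1.
     cat .agent-os/standards/*.md \
         .agent-os/instructions/core/execute-task.md \
       | gemini run "Follow execute-task.md to implement the $1 feature with Expo + Convex, including tests"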

Phase 3: Review & Documentation

     gemini run "Review recent commits for security and UX issues."
     gemini run "Update README and roadmap.md for completed features."
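
Since each call is stateless, the review prompt only sees what gets piped into it; one way to hand it the actual changes (standard git flags, prompt wording mine):

     # Pipe the patches from the last few commits into the review prompt.
     git log -p -5 \
       | gemini run "Review these recent commits for security and UX issues"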

🎯 Why This Works

  • Stability & Predictability: Gemini CLI’s stateless model means every run is fresh, with no hidden state or session drift.

  • Full Control Over Context: I choose exactly which standards or instructions to load each time.

  • Agile Integration: Shell scripts automate repetitive steps, letting me focus on feature design, not tooling.

  • Agent OS Agnostic: My core workflows and standards live in .agent-os unchanged; only the orchestration layer shifts.

