Shrunk 1,000 lines of AI agent code to 50 lines.



This content originally appeared on DEV Community and was authored by OscarW18

Complexity in AI Orchestration

As AI agents become more sophisticated, we’re facing a paradox: the tools meant to simplify AI orchestration are becoming complexity monsters themselves. I’ve watched teams struggle with 500-line Python scripts defining what should be a simple 10-step workflow. We’re solving the wrong problem.

After building and deploying dozens of production AI systems, we at Julep are convinced: YAML should be the universal language for AI agent workflows. Not because it’s trendy, but because it solves the actual problems teams face when building AI systems at scale.

The Problem With Code-First Approaches

Let’s be honest about what happens when you define AI workflows in code:

# This starts innocently enough...
def customer_support_workflow(message):
    sentiment = analyze_sentiment(message)

    if sentiment < 0.3:
        escalation = check_escalation_needed(message)
        if escalation:
            ticket = create_ticket(message)
            notify_human(ticket)
            return generate_escalation_response(ticket)

    context = fetch_context(message)

    # 200 lines later...
    # Good luck figuring out the actual flow

Three months later, you’re staring at a 1,000-line file trying to understand why the agent sometimes skips the knowledge base lookup. The business logic is tangled with orchestration logic, making both harder to modify.

Why YAML Changes Everything

YAML forces a fundamental shift in how we think about AI workflows. Instead of “how do I code this?”, we ask “what are the steps?”

Here’s the same workflow in YAML:

name: customer-support
steps:
  - action: analyze_sentiment
    input: $message
    label: sentiment_check

  - if: $steps.sentiment_check.score < 0.3
    then:
      - action: check_escalation
      - if: $_.needs_escalation
        then:
          - action: create_ticket
          - action: notify_human
          - action: generate_escalation_response

  - action: fetch_context
    input: $message

  - action: generate_response
    context: $steps.fetch_context.output

The flow is immediately visible. A junior developer can understand it. A product manager can review it. You can diff it, version it, and reason about it without executing it in your head.

The Technical Advantages Nobody Talks About

1. Deterministic Execution Paths

YAML workflows are essentially state machines. Each step has clear inputs and outputs. This makes them:

  • Trivially resumable after failures
  • Easy to debug with step-by-step replay
  • Perfect for audit logs and compliance
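Here is a minimal sketch of what resumability can look like when each step's output is checkpointed (the `run_action` callable and checkpoint shape are illustrative assumptions, not Julep's actual API):

```python
def run_workflow(steps, run_action, checkpoint=None):
    """Run steps in order, recording each result so a crashed run
    can resume from the last completed step instead of starting over."""
    checkpoint = checkpoint or {"completed": {}, "next": 0}
    for i in range(checkpoint["next"], len(steps)):
        step = steps[i]
        result = run_action(step["action"], step.get("input"))
        checkpoint["completed"][step.get("label", step["action"])] = result
        checkpoint["next"] = i + 1
    return checkpoint

# Simulate a run that crashed after the first step, then resume it.
steps = [
    {"action": "analyze_sentiment", "label": "sentiment_check"},
    {"action": "fetch_context"},
]
calls = []
def fake_action(name, _input):
    calls.append(name)
    return {"ok": True}

partial = {"completed": {"sentiment_check": {"ok": True}}, "next": 1}
final = run_workflow(steps, fake_action, checkpoint=partial)
print(calls)          # ['fetch_context'] -- only the unfinished step re-ran
print(final["next"])  # 2
```

Because every step's inputs and outputs are explicit in the checkpoint, the same record doubles as an audit log and as the data for step-by-step replay.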

2. Language-Agnostic Orchestration

Your sentiment analyzer is in Python, your ticket system is a REST API, and your LLM calls are in TypeScript? YAML doesn’t care. It’s just orchestrating:

- action: sentiment_analysis
  runtime: python
  handler: ml.sentiment.analyze

- action: create_ticket
  runtime: http
  endpoint: POST /api/tickets

- action: generate_response
  runtime: typescript
  handler: llm/generateResponse
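One way an executor might dispatch on the `runtime` field is a simple registry of adapters. The registry shape and adapter signatures below are assumptions for illustration, not a specific engine's implementation:

```python
def make_dispatcher(runtimes):
    """Map each step's `runtime` field to a callable that knows
    how to invoke that kind of handler."""
    def dispatch(step, payload):
        adapter = runtimes[step["runtime"]]
        return adapter(step, payload)
    return dispatch

# Hypothetical runtime adapters: in a real engine these would shell out
# to a Python worker, make an HTTP request, call a Node process, etc.
runtimes = {
    "python": lambda step, payload: f"called {step['handler']}({payload})",
    "http":   lambda step, payload: f"{step['endpoint']} <- {payload}",
}
dispatch = make_dispatcher(runtimes)

print(dispatch({"runtime": "python", "handler": "ml.sentiment.analyze"}, "hi"))
# called ml.sentiment.analyze(hi)
print(dispatch({"runtime": "http", "endpoint": "POST /api/tickets"}, "hi"))
# POST /api/tickets <- hi
```

Adding a new language or protocol is then one new adapter, with no change to any workflow file.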

3. Parallel Execution for Free

When steps don’t depend on each other, a good YAML executor runs them in parallel automatically:

# These run simultaneously
- parallel:
    - action: fetch_user_history
      label: history
    - action: search_knowledge_base
      label: knowledge
    - action: get_similar_tickets
      label: tickets

# This waits for all three
- action: synthesize_response
  inputs:
    history: $steps.history.output
    knowledge: $steps.knowledge.output
    tickets: $steps.tickets.output

No thread management. No async/await gymnastics. Just declare what you want.
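Under the hood, an executor can translate a `parallel` block into concurrent tasks. A sketch using Python's asyncio (the action shapes are the same illustrative ones as the YAML above):

```python
import asyncio

async def run_parallel(actions):
    """Run independent actions concurrently and collect labeled outputs."""
    async def run(action):
        await asyncio.sleep(0)  # stand-in for real I/O: API calls, DB reads
        return action["label"], f"result of {action['action']}"
    pairs = await asyncio.gather(*(run(a) for a in actions))
    return dict(pairs)

parallel_block = [
    {"action": "fetch_user_history", "label": "history"},
    {"action": "search_knowledge_base", "label": "knowledge"},
    {"action": "get_similar_tickets", "label": "tickets"},
]
outputs = asyncio.run(run_parallel(parallel_block))
print(sorted(outputs))  # ['history', 'knowledge', 'tickets']
```

The workflow author never sees any of this; the concurrency lives entirely in the executor.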

4. Testing That Actually Works

YAML workflows are pure functions: given an input, they produce an output. This makes testing beautiful:

# test-customer-support.yaml
tests:
  - name: angry_customer_escalation
    input:
      message: "This is completely unacceptable!"
    expect:
      - step: sentiment_check
        output: {score: 0.1}
      - step: create_ticket
        called: true

Mock the actions, not the entire workflow logic. Test the orchestration separately from the business logic.
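What "mock the actions, test the orchestration" can look like in practice, sketched in Python with hypothetical action names (a real test harness would load the YAML instead of hardcoding the flow):

```python
def support_flow(actions, message):
    """Orchestration only: decides which actions run, and in what order."""
    trace = ["sentiment_check"]
    score = actions["analyze_sentiment"](message)
    if score < 0.3:
        actions["create_ticket"](message)
        trace.append("create_ticket")
    return trace

# Mocked actions stand in for the real model and ticket system.
mocks = {
    "analyze_sentiment": lambda msg: 0.1,  # pretend the model detected anger
    "create_ticket": lambda msg: "tkt_1",
}
trace = support_flow(mocks, "This is completely unacceptable!")
print(trace)  # ['sentiment_check', 'create_ticket']
```

The test asserts on which steps ran, not on what any model actually said, so it stays fast and deterministic.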

All of these have become defining features of AI agents built on Julep.

Real-World Patterns That Emerge

The Context Accumulator

- evaluate:
    context:
      user: $input.user_id
      timestamp: $now()
  label: init_context

# Each step enriches context
- action: fetch_user_profile
  output_to: context.profile

- action: fetch_recent_orders  
  output_to: context.orders

# Final step has everything
- prompt:
    messages:
      - role: system
        content: $template("support_agent", context)
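The runtime behavior behind this pattern is just a shared dictionary that each step writes into. A flattened sketch (the YAML above uses dotted paths like `context.profile`; here the keys are flat, and the step data is made up for illustration):

```python
def accumulate(context, steps):
    """Each step enriches the shared context under its `output_to` key."""
    for step in steps:
        context[step["output_to"]] = step["run"](context)
    return context

context = {"user": "usr_42"}
steps = [
    {"output_to": "profile", "run": lambda ctx: {"name": "Ada", "tier": "pro"}},
    {"output_to": "orders",  "run": lambda ctx: ["ord_1", "ord_2"]},
]
context = accumulate(context, steps)
print(sorted(context))  # ['orders', 'profile', 'user']
```

By the time the final prompt renders, every upstream step's output is available in one place.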

The Circuit Breaker

- try:
    - action: call_external_api
      timeout: 5s
      retries: 3
  catch:
    - log: "API failed, using fallback"
    - action: use_cached_response
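The try/retry/fallback behavior that YAML declares can be sketched in a few lines (a full circuit breaker would also track failure state across calls; the names here are illustrative):

```python
def with_fallback(call, fallback, retries=3):
    """Try the primary call a few times; fall back if it keeps failing."""
    for _attempt in range(retries):
        try:
            return call()
        except Exception:
            continue
    return fallback()

attempts = []
def flaky_api():
    attempts.append(1)
    raise TimeoutError("upstream is down")

result = with_fallback(flaky_api, lambda: "cached response", retries=3)
print(result)         # cached response
print(len(attempts))  # 3
```

Declaring this once in YAML means every workflow gets the same, tested failure handling instead of ad hoc try/except blocks scattered through application code.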

The Human-in-the-Loop

- action: generate_draft
  label: draft

- action: request_human_review
  input: $steps.draft.output
  timeout: 1h

- if: $_.approved
  then:
    - action: send_response
  else:
    - action: generate_revision
      feedback: $_.feedback

The Ecosystem Benefits

When everyone uses YAML, magic happens:

Workflow Marketplaces: Share workflows like npm packages. import: @community/customer-onboarding-v2

Visual Editors: YAML maps perfectly to visual flow builders. Non-technical users can build workflows.

Standardized Tooling: One debugger, one test framework, one deployment pipeline for all your AI workflows.

Cross-Platform Portability: Move workflows between LangChain, AutoGPT, CrewAI, or your custom framework by just changing the runtime.

Addressing the Skeptics

“But YAML isn’t a programming language!”

Exactly. That’s the point. Workflows should declare what happens, not implement how it happens. Put your complex logic in functions, call them from YAML.

“What about complex conditionals?”

- evaluate:
    should_escalate: |
      $sentiment < 0.3 and 
      $priority == "high" and
      $user.tier == "enterprise"

- if: $should_escalate
  then: [...]

Evaluate complex expressions, then branch on simple booleans.
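The `evaluate`-then-branch pattern translates directly: compute the compound condition once, name it, and keep the branch itself trivial (context fields below are the same illustrative ones as the YAML):

```python
def should_escalate(ctx):
    """Compute the complex condition once; the branch stays a simple boolean."""
    return (
        ctx["sentiment"] < 0.3
        and ctx["priority"] == "high"
        and ctx["user"]["tier"] == "enterprise"
    )

ctx = {"sentiment": 0.1, "priority": "high", "user": {"tier": "enterprise"}}
escalate = should_escalate(ctx)
print(escalate)  # True
```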

“YAML is hard to validate”

Use schemas. Every solid YAML workflow engine supports JSON Schema:

input_schema:
  type: object
  required: [message, user_id]
  properties:
    message: {type: string, minLength: 1}
    user_id: {type: string, pattern: "^usr_"}
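To show what that schema actually enforces, here is a minimal hand-rolled check mirroring it (a real engine would use a full JSON Schema library rather than this sketch):

```python
import re

schema = {
    "required": ["message", "user_id"],
    "properties": {
        "message": {"type": str, "minLength": 1},
        "user_id": {"type": str, "pattern": "^usr_"},
    },
}

def validate(data, schema):
    """Return a list of violations against the schema above."""
    errors = []
    for field in schema["required"]:
        if field not in data:
            errors.append(f"missing {field}")
    for field, rules in schema["properties"].items():
        value = data.get(field)
        if value is None:
            continue
        if not isinstance(value, rules["type"]):
            errors.append(f"{field}: wrong type")
        elif "minLength" in rules and len(value) < rules["minLength"]:
            errors.append(f"{field}: too short")
        elif "pattern" in rules and not re.match(rules["pattern"], value):
            errors.append(f"{field}: bad format")
    return errors

print(validate({"message": "help", "user_id": "usr_1"}, schema))  # []
print(validate({"message": "", "user_id": "abc"}, schema))
# ['message: too short', 'user_id: bad format']
```

Bad inputs get rejected at the workflow boundary, before any step runs.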

The Implementation Path

Start small. Pick one workflow. Convert it to YAML. You’ll need:

  1. A Schema: Define your step types and their properties
  2. An Executor: Interprets YAML and runs actions (plenty of open-source options)
  3. Action Libraries: Wrap your existing functions as callable actions
  4. Testing Framework: YAML in, assertions out

Don’t rewrite everything. Wrap your existing code and gradually migrate the orchestration layer.
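To make the executor piece concrete, here is a toy version that walks a parsed workflow, resolves `$steps.<label>.output` references, and dispatches to an action registry. It is a sketch of the idea, not a production engine, and all names are illustrative:

```python
def execute(workflow, actions, inputs):
    """Toy executor: runs `steps` in order, wiring one step's output
    into the next via `$steps.<label>.output` references."""
    outputs = {}
    for step in workflow["steps"]:
        arg = step.get("input", inputs)
        if isinstance(arg, str) and arg.startswith("$steps."):
            label = arg.split(".")[1]
            arg = outputs[label]
        result = actions[step["action"]](arg)
        outputs[step.get("label", step["action"])] = result
    return outputs

# A parsed workflow (what yaml.safe_load would give you) and an action registry.
workflow = {
    "steps": [
        {"action": "fetch_context", "label": "ctx"},
        {"action": "generate_response", "input": "$steps.ctx.output"},
    ],
}
actions = {
    "fetch_context": lambda msg: {"history": [msg]},
    "generate_response": lambda ctx: f"reply using {len(ctx['history'])} messages",
}
result = execute(workflow, actions, "Where is my order?")
print(result["generate_response"])  # reply using 1 messages
```

Items 1 through 4 above are exactly what's missing from this toy: a schema to validate `workflow`, richer step types (conditionals, parallel blocks), wrapped real actions, and a test harness around `execute`.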

The Future Is Declarative

The winning AI frameworks of the next decade will be those that separate orchestration from implementation. YAML is the perfect medium for this separation:

  • Human-readable but machine-parseable
  • Git-friendly for version control and collaboration
  • Language-agnostic for maximum flexibility
  • Structurally simple but expressively powerful

We’re building increasingly complex AI systems. Our orchestration layer should make that complexity manageable, not add to it. YAML workflows aren’t just a nice-to-have—they’re essential infrastructure for the AI-powered future.

Start Today

Pick your most painful workflow. The one everyone’s afraid to touch. Rewrite it in YAML. Make the flow visible. Make it testable. Make it maintainable.

Your future self will thank you when you’re debugging at 2 AM and can actually understand what your AI agent is supposed to be doing.

The best code is the code you don’t have to write. The second best is YAML that tells other code what to do.

