⚡ Supercharging GitHub Actions CI: From Slow to Lightning Fast with Turbo Caching



This content originally appeared on DEV Community and was authored by abhilashlr

How we optimized our monorepo CI pipeline and reduced build times by 70% using smart caching strategies

🐌 The Problem: Slow CI is a Developer Productivity Killer

Picture this: You’re working on a critical feature for your React monorepo. You push your changes, create a pull request, and then… you wait. And wait. Your GitHub Actions CI takes 8-10 minutes to run lint and build checks, grinding your development flow to a halt.

This was exactly our situation with our @atomicworkhq/atomic-ui monorepo – a TypeScript project built with:

  • 6 packages: icons, obsidian (design system), data models, forms, assist, and public apps
  • Turbo: For coordinated builds and caching
  • Yarn workspaces: For dependency management
  • GitHub Actions: For CI/CD

Our original CI was taking way too long, and developers were getting frustrated. Time for an optimization sprint! 🚀

🔍 Analyzing the Original Setup

Here’s what our original sanity.yml workflow looked like:

# ❌ BEFORE: Inefficient caching and resource usage
jobs:
  sanity:
    name: Build and Lint
    runs-on: ubuntu-latest
    strategy:
      matrix:
        task: [lint, build]

    steps:
      - name: Check out code
        uses: actions/checkout@v4

      # Basic turbo cache - not optimized
      - name: Cache turbo build setup
        uses: actions/cache@v4
        with:
          path: .turbo
          key: ${{ runner.os }}-turbo-${{ hashFiles('yarn.lock') }}

      # Basic yarn cache
      - name: Fetch yarn cache if available
        uses: actions/cache@v4
        with:
          path: |
            ~/.cache/yarn
            node_modules
          key: ${{ runner.os }}-yarn-${{ hashFiles('yarn.lock') }}

      # Always install dependencies (even with cache hits)
      - name: Install dependencies
        run: yarn install --frozen-lockfile

      - name: Run task
        run: yarn ${{ matrix.task }}

Issues with the Original Approach

  1. Inefficient dependency installation: Always ran yarn install, even with cache hits
  2. Poor cache keys: Generic cache keys didn’t differentiate between tasks or branches
  3. Memory constraints: No memory optimization for Node.js processes
  4. Limited Turbo cache: Only cached .turbo directory, missing ~/.turbo
  5. No cache refresh: Because the key only hashed yarn.lock, any exact cache hit skipped the save step, so updated Turbo artifacts were never written back for future runs
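
Before changing anything, it helps to confirm how often Turbo is actually hitting its cache in CI. Turbo already reports per-task cache status in its output, and recent versions can also write a run summary. A minimal diagnostic sketch (this assumes a Turborepo version that supports the --summarize flag; adjust the task name to your setup):

# Optional diagnostic: write a run summary under .turbo/runs so cache hits and misses can be inspected later
- name: Build with run summary
  run: yarn turbo run build --summarize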

🛠 The Optimization Journey

Step 1: Smart Dependency Caching

First, we implemented conditional dependency installation:

# ✅ AFTER: Smart dependency caching
- name: Restore Yarn cache
  uses: actions/cache@v4
  id: cache
  with:
    path: |
      ~/.cache/yarn
      **/node_modules
    key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
    restore-keys: |
      ${{ runner.os }}-yarn-

- name: Install dependencies
  if: steps.cache.outputs.cache-hit != 'true'
  run: NODE_OPTIONS="--max_old_space_size=8192" yarn install --frozen-lockfile

Key improvements:

  • ✅ Skip installation when cache hits (saves 2-3 minutes!)
  • ✅ Increased Node.js memory limit to prevent OOM errors
  • ✅ Better cache paths including ~/.cache/yarn
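
To verify the conditional install is doing what you expect, you can surface the cache outcome in the job log. A small sketch using the cache step's documented cache-hit output (the step id cache matches the snippet above):

- name: Report Yarn cache status
  run: echo "Yarn cache exact hit: ${{ steps.cache.outputs.cache-hit }}"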

Step 2: Advanced Turbo Caching Strategy

Next, we revolutionized our Turbo caching:

# ✅ AFTER: Advanced Turbo caching with task-specific keys
- name: Restore Turbo cache
  uses: actions/cache@v4
  with:
    path: |
      ~/.turbo
      .turbo
    key: ${{ runner.os }}-turbo-${{ matrix.task }}-${{ github.head_ref || github.ref_name }}
    restore-keys: |
      ${{ runner.os }}-turbo-${{ matrix.task }}-${{ github.head_ref || github.ref_name }}
      ${{ runner.os }}-turbo-${{ matrix.task }}-main
      ${{ runner.os }}-turbo-${{ matrix.task }}-

- name: Run task
  run: yarn ${{ matrix.task }}

Improvements:

  • ✅ Task-specific caching: matrix.task in cache keys separates lint vs build artifacts
  • ✅ Branch-aware caching: Uses actual branch names for better cache hits
  • ✅ Comprehensive cache paths: Both ~/.turbo and .turbo directories
  • ✅ Smart fallback hierarchy: For each task, falls back from the current branch → main → any branch
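
A quick way to confirm the Turbo cache is actually being restored (and to watch it grow over time) is to print its size before running the task. A minimal sketch; the paths match the cache step above:

- name: Inspect restored Turbo cache
  run: du -sh .turbo ~/.turbo 2>/dev/null || echo "No Turbo cache restored yet"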

Step 3: Memory Optimization in package.json

We also optimized our npm scripts:

{
  "scripts": {
    "build": "NODE_OPTIONS=\"--max_old_space_size=8192\" turbo run build",
    "lint": "NODE_OPTIONS=\"--max_old_space_size=8192\" turbo run check-types"
  }
}

This prevents memory-related build failures in large monorepos.
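
An alternative worth considering is setting the limit once in the workflow instead of prefixing every script, since GitHub Actions propagates env values to all steps. A hedged sketch using the same 8 GB figure (note that inline NODE_OPTIONS=... prefixes in package.json scripts are POSIX-shell syntax and will not work in Windows cmd without a helper such as cross-env):

# Workflow- or job-level alternative: every step inherits the memory limit
env:
  NODE_OPTIONS: --max_old_space_size=8192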

🎯 The Complete Optimized Workflow

Here’s our final, lightning-fast workflow:

name: CI

on:
  pull_request:
    branches: [main]
    types: [opened, synchronize]

jobs:
  sanity:
    permissions:
      contents: read
      actions: write
      pull-requests: read
      packages: read
    name: ${{ matrix.task }}
    runs-on: ubuntu-latest
    strategy:
      matrix:
        task: [lint, build]

    steps:
      - name: Check out code
        uses: actions/checkout@v4
        with:
          fetch-depth: 2

      - name: Setup Node.js environment
        uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: 'yarn'

      # Smart Yarn caching
      - name: Restore Yarn cache
        uses: actions/cache@v4
        id: cache
        with:
          path: |
            ~/.cache/yarn
            **/node_modules
          key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
          restore-keys: |
            ${{ runner.os }}-yarn-

      - name: Install dependencies
        if: steps.cache.outputs.cache-hit != 'true'
        run: NODE_OPTIONS="--max_old_space_size=8192" yarn install --frozen-lockfile

      # Advanced Turbo caching
      - name: Restore Turbo cache
        uses: actions/cache@v4
        with:
          path: |
            ~/.turbo
            .turbo
          key: ${{ runner.os }}-turbo-${{ matrix.task }}-${{ github.head_ref || github.ref_name }}
          restore-keys: |
            ${{ runner.os }}-turbo-${{ matrix.task }}-${{ github.head_ref || github.ref_name }}
            ${{ runner.os }}-turbo-${{ matrix.task }}-main
            ${{ runner.os }}-turbo-${{ matrix.task }}-

      - name: Run task
        run: yarn ${{ matrix.task }}
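
One addition not shown above, but worth considering for PR-driven CI, is a top-level concurrency block so a newer push to the same branch cancels the superseded run instead of queueing behind it:

# Cancel superseded runs when the same branch receives a newer push
concurrency:
  group: ci-${{ github.head_ref || github.ref_name }}
  cancel-in-progress: true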

📊 Performance Results: The Numbers Don’t Lie

Metric                  | Before       | After         | Improvement
Cold run time           | 8-10 minutes | 6-7 minutes   | 25% faster
Warm run time           | 6-8 minutes  | 2-3 minutes   | 70% faster
Cache hit rate          | ~30%         | ~85%          | 183% improvement
Dependency install time | 2-3 minutes  | 10-20 seconds | 90% faster
Developer satisfaction  | 😤           | 😍            | Priceless

🧠 Key Learnings and Best Practices

1. Task-Specific Cache Keys Are Game Changers

# ❌ Generic key - poor cache utilization
key: ${{ runner.os }}-turbo-${{ hashFiles('yarn.lock') }}

# ✅ Task-specific key - much better cache hits
key: ${{ runner.os }}-turbo-${{ matrix.task }}-${{ github.head_ref || github.ref_name }}

2. Conditional Dependency Installation Saves Massive Time

# Always check if cache was hit before installing
- name: Install dependencies
  if: steps.cache.outputs.cache-hit != 'true'
  run: yarn install --frozen-lockfile

3. Memory Optimization Prevents Random Failures

{
  "build": "NODE_OPTIONS=\"--max_old_space_size=8192\" turbo run build"
}

4. Comprehensive Cache Paths Matter

# User-level Turbo cache, project-level Turbo cache, Yarn's download cache, and all node_modules
path: |
  ~/.turbo
  .turbo
  ~/.cache/yarn
  **/node_modules

5. Smart Cache Hierarchy Provides Best Fallbacks

# In order: exact branch match, then the same task on main, then the same task on any branch
restore-keys: |
  ${{ runner.os }}-turbo-${{ matrix.task }}-${{ github.head_ref }}
  ${{ runner.os }}-turbo-${{ matrix.task }}-main
  ${{ runner.os }}-turbo-${{ matrix.task }}-

🚀 Beyond Basic Optimization: Advanced Techniques

Parallel vs Sequential Jobs

We experimented with both approaches:

# Option A: Parallel execution (current)
strategy:
  matrix:
    task: [lint, build]

# Option B: Sequential execution
jobs:
  lint:
    # ... lint job
  build:
    needs: lint # Wait for lint to pass
    # ... build job

Verdict: Parallel wins for speed, but sequential is better for cost optimization and fail-fast scenarios.
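
A middle ground worth noting: with a matrix, the strategy-level fail-fast setting (true by default) cancels the sibling job as soon as one matrix job fails, which recovers some of the cost benefit of the sequential layout while keeping the speed of parallel execution:

strategy:
  fail-fast: true # the default; a lint failure cancels the still-running build job, and vice versa
  matrix:
    task: [lint, build]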

Branch-Aware Caching Strategy

# Use actual branch name for PR caching
key: ${{ runner.os }}-turbo-${{ matrix.task }}-${{ github.head_ref || github.ref_name }}

This ensures each feature branch maintains its own cache while falling back to main branch cache when needed.
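
One consequence of branch-scoped keys is that stale branch caches accumulate against the repository's 10 GB cache quota, with GitHub evicting least-recently-used entries once the limit is hit. If that becomes a problem, here is a hedged sketch of a hypothetical cleanup workflow that deletes a PR's caches when it closes, using the REST endpoints for listing and deleting caches (requires actions: write, which the workflow above already grants; caches from pull_request runs are scoped to the PR's merge ref):

# Hypothetical cleanup workflow: remove caches created for a PR once it closes
on:
  pull_request:
    types: [closed]

jobs:
  cleanup-caches:
    runs-on: ubuntu-latest
    permissions:
      actions: write
    steps:
      - name: Delete caches for this PR's merge ref
        run: |
          gh api "repos/${{ github.repository }}/actions/caches?ref=refs/pull/${{ github.event.pull_request.number }}/merge&per_page=100" \
            --jq '.actions_caches[].id' |
          while read -r id; do
            gh api --method DELETE "repos/${{ github.repository }}/actions/caches/$id"
          done
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}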

🏆 Impact on Developer Experience

The results speak for themselves:

  • ⚡ 70% faster warm builds – Developers get feedback in 2-3 minutes instead of 8-10
  • 💰 Reduced CI costs – Fewer compute minutes = lower GitHub Actions bill
  • 🔄 Faster iteration cycles – Quick feedback loop encourages more frequent commits
  • 😊 Happier developers – No more coffee breaks waiting for CI

🔧 Implementation Guide for Your Project

Want to implement these optimizations in your monorepo? Here’s a step-by-step guide:

1. Audit Your Current Workflow

  • Check your current CI run times
  • Identify which steps take the longest
  • Look for redundant operations

2. Implement Smart Caching

# Add these patterns to your workflow
git checkout -b optimize-ci
# Update your .github/workflows/*.yml files
# Test with a small change

3. Monitor and Iterate

  • Watch your GitHub Actions dashboard
  • Track cache hit rates
  • Measure before/after performance

4. Consider Your Monorepo Structure

  • Single repo with multiple packages? ✅ This approach works great
  • Independent repos? Consider different cache strategies
  • Hybrid setup? Mix and match techniques

🤔 Common Pitfalls and How to Avoid Them

1. Over-Caching

# ❌ Don't cache everything
path: |
  ~/.cache
  node_modules
  dist
  .next
  .turbo
  # ... this gets messy

# ✅ Be selective and specific
path: |
  ~/.turbo
  .turbo

2. Cache Key Collisions

# ❌ Too generic - causes conflicts
key: build-cache

# ✅ Include all relevant context
key: ${{ runner.os }}-turbo-${{ matrix.task }}-${{ github.head_ref || github.ref_name }}

3. Forgetting Memory Limits

Large TypeScript monorepos can easily hit Node.js memory limits. Always set:

NODE_OPTIONS="--max_old_space_size=8192"

🔮 Future Optimizations

We’re not stopping here! Next up:

  • Docker layer caching for even faster container builds
  • Distributed task execution using GitHub’s matrix strategy more creatively
  • Intelligent test splitting to parallelize test suites
  • Build artifact sharing between workflows
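
For the last item, the building blocks already exist in GitHub Actions. A hedged sketch of passing a build output to a downstream job in the same run (sharing across separate workflow runs additionally needs download-artifact's run-id and github-token inputs); the artifact name and path here are illustrative:

# In the job that builds
- name: Upload build output
  uses: actions/upload-artifact@v4
  with:
    name: dist-${{ github.sha }}
    path: dist

# In a downstream job that declares needs: on the build job
- name: Download build output
  uses: actions/download-artifact@v4
  with:
    name: dist-${{ github.sha }}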


💬 What’s Your Experience?

Have you optimized your CI pipeline recently? What techniques worked best for your team? Drop a comment below and share your optimization wins!

Building fast, reliable CI/CD pipelines is an art and a science. The key is measuring, experimenting, and iterating. Happy coding! 🚀

🏷 Tags

#GitHubActions #CI #Monorepo #Turbo #Performance #DevOps #TypeScript #React #Caching
