Time Slicing in React – How Your UI Stays Butter Smooth (The Frame Budget Secret)



This content originally appeared on DEV Community and was authored by Mohamad Msalme

Know WHY — Let AI Handle the HOW 🤖

In Part 1, we learned about priority-based rendering. In Part 2, we explored Fiber architecture. But here’s the final piece of the puzzle: How does React know WHEN to pause?

What if I told you React gives itself a strict ~5ms time slice of work before yielding back to the browser, and that understanding this timing mechanism is the key to building silky-smooth user interfaces?

🤔 The 60 FPS Problem

Your screen refreshes 60 times per second. That gives you 16.67ms per frame to do everything:

One Frame (16.67ms):
├─ JavaScript execution (React rendering)
├─ Style calculations
├─ Layout
├─ Paint
└─ Composite

If all of this together takes > 16.67ms:
→ Frame gets dropped
→ UI feels janky
→ User notices lag

The Challenge: How do you render expensive components without dropping frames?

🧠 Think Like a Video Game Engine for a Moment

Modern games run at 60fps by:

  1. Doing critical work (player movement, collisions)
  2. Checking the clock: “Do I have time left?”
  3. If yes, do nice-to-have work (background animations)
  4. If no, pause and continue next frame

React does the exact same thing!

function gameLoop() {
  const frameDeadline = performance.now() + 16.67;

  // Critical: Player movement
  updatePlayerPosition();

  // Check time remaining
  if (performance.now() < frameDeadline - 5) {
    // Nice-to-have: Background details
    renderDistantTrees();
  } else {
    // Out of time! Skip to next frame
    return;
  }
}

🔑 React’s Frame Budget: The 5ms Rule

React follows a simple rule: Work in ~5ms chunks, then check if we should yield.

// Simplified React work loop
function workLoopConcurrent() {
  // React's frame budget strategy
  const deadline = performance.now() + 5; // 5ms time slice

  while (workInProgress !== null) {
    // Do one unit of work
    workInProgress = performUnitOfWork(workInProgress);

    // Time to check if we should pause?
    if (performance.now() >= deadline) {
      // Used our 5ms, yield to browser
      break;
    }
  }

  if (workInProgress !== null) {
    // More work to do, schedule continuation
    scheduleCallback(workLoopConcurrent);
  } else {
    // Done! Commit to DOM
    commitRoot();
  }
}

Why 5ms?

  • 16.67ms per frame
  • -5ms for React work
  • = 11.67ms left for browser (layout, paint, user input)
  • Keeps UI at 60fps ✅
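
The arithmetic is simple enough to sanity-check in a few lines (5ms is React's default yield interval; the rest is just division):

```javascript
// Sanity check of the frame-budget arithmetic above.
const FRAME_MS = 1000 / 60;              // ≈ 16.67ms per frame at 60fps
const REACT_SLICE_MS = 5;                // React's default time slice
const browserBudgetMs = FRAME_MS - REACT_SLICE_MS;

console.log(FRAME_MS.toFixed(2));        // "16.67"
console.log(browserBudgetMs.toFixed(2)); // "11.67"
```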

🎯 The shouldYield Check

This is where the magic happens:

function shouldYield() {
  // `deadline` was captured when the current time slice started
  // (see workLoopConcurrent above)
  const currentTime = performance.now();

  // Have we used our time slice?
  if (currentTime >= deadline) {
    return true; // Pause!
  }

  // Is there urgent work waiting?
  if (hasUrgentWork()) {
    return true; // Pause and handle urgent work!
  }

  // Keep going
  return false;
}

// Used in the render loop:
while (workInProgress && !shouldYield()) {
  workInProgress = performUnitOfWork(workInProgress);
}
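
The same pattern works outside React. Here is a minimal, framework-free sketch (names like `workSlice` and `runSliced` are made up for illustration) that processes a queue of work units in ~5ms slices and reports whether a continuation is needed:

```javascript
// Hypothetical sketch of cooperative time slicing: process queued work
// units until the ~5ms budget is spent, then hand control back.
const SLICE_MS = 5;

function workSlice(queue, performUnit, budgetMs = SLICE_MS) {
  const deadline = performance.now() + budgetMs;
  let processed = 0;
  while (queue.length > 0 && performance.now() < deadline) {
    performUnit(queue.shift());
    processed++;
  }
  // done === false means the caller should schedule a continuation
  return { done: queue.length === 0, processed };
}

// Drive slices to completion, yielding between them. React schedules
// the continuation via MessageChannel; setTimeout works for a demo.
function runSliced(queue, performUnit, onDone) {
  const { done } = workSlice(queue, performUnit);
  if (done) onDone();
  else setTimeout(() => runSliced(queue, performUnit, onDone), 0);
}
```

With cheap units the first slice finishes everything; with expensive units the queue drains across several turns of the event loop, and timers, input, and rendering keep running in between.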

💡 Real Example: Typing in Search Box

Let’s see EXACTLY what happens with millisecond precision:

function SearchPage() {
  const [query, setQuery] = useState('');
  const deferredQuery = useDeferredValue(query);

  const results = useMemo(() => {
    console.log('Filtering for:', deferredQuery);
    // Let's say this takes 50ms total
    return expensiveFilter(bigDataset, deferredQuery);
  }, [deferredQuery]);

  return (
    <div>
      <input 
        value={query}
        onChange={e => setQuery(e.target.value)}
      />
      <ResultsList items={results} />
    </div>
  );
}

Frame-by-frame breakdown when you type “r”:

Frame 1 (0-16ms):
├─ 0ms:   User types "r" (keypress event)
├─ 1ms:   query = "r" (HIGH PRIORITY state update)
├─ 2ms:   React starts render phase
│         → <input> fiber (SyncLane priority)
├─ 3ms:   Commit phase: Update DOM
├─ 4ms:   Input shows "r" on screen ✅
│         User sees immediate feedback!
├─ 5ms:   deferredQuery = "" (still old value)
├─ 6ms:   Start LOW PRIORITY render
│         → ResultsList fiber (TransitionLane)
│         → Start expensiveFilter("")
├─ 7ms:   Filter chunk 1/10 complete
├─ 8ms:   Filter chunk 2/10 complete
├─ 9ms:   Filter chunk 3/10 complete
├─ 10ms:  Filter chunk 4/10 complete
├─ 11ms:  shouldYield() = true (used 5ms slice)
└─ 12ms:  PAUSE! Save progress, yield to browser
          Browser uses remaining 4ms for:
          - Handling any input
          - Painting the input change
          - Smooth scrolling

Frame 2 (16-32ms):
├─ 16ms:  Resume LOW PRIORITY render
├─ 17ms:  Filter chunk 5/10 complete
├─ 18ms:  Filter chunk 6/10 complete
├─ 19ms:  Filter chunk 7/10 complete
├─ 20ms:  Filter chunk 8/10 complete
├─ 21ms:  shouldYield() = true
└─ 22ms:  PAUSE again

Frame 3 (32-48ms):
├─ 32ms:  Resume LOW PRIORITY render
├─ 33ms:  Filter chunk 9/10 complete
├─ 34ms:  Filter chunk 10/10 complete ✅
├─ 35ms:  Commit phase: Update DOM
└─ 36ms:  Results appear on screen!

The key: Input felt instant (4ms), while expensive work happened in background across 3 frames!

⚡ Interruption in Action

Now let’s see what happens when you keep typing:

Frame 1 (0-16ms):
├─ 0ms:   Type "r"
├─ 1ms:   query = "r"
├─ 4ms:   Input shows "r" ✅
├─ 6ms:   Start filtering "" → "r" (LOW PRIORITY)
├─ 11ms:  shouldYield() = true, PAUSE
└─ 12ms:  Browser gets control back

Frame 2 (16-32ms):
├─ 16ms:  Resume filtering for "r"
├─ 20ms:  25% done with filter...
├─ 21ms:  shouldYield() checks for urgent work
│
├─ 22ms:  ⚡ User types "e" (HIGH PRIORITY!)
│         shouldYield() = true (urgent work detected!)
│
├─ 23ms:  ABANDON current render
│         Throw away partial "r" filter work
│
├─ 24ms:  query = "re" (HIGH PRIORITY)
├─ 25ms:  Input shows "re" ✅
├─ 26ms:  deferredQuery updates to "r"
│         (but immediately cancelled by "re")
│
├─ 27ms:  Start NEW filtering "r" → "re"
└─ 28ms:  shouldYield() = true, PAUSE

// Old "r" filter NEVER completes or shows!
// React intelligently skipped that intermediate state
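
The "abandon stale work" behavior can be modeled with a generation counter, which is roughly what React's "throw away the work-in-progress tree" amounts to (an illustrative sketch, not React's actual code):

```javascript
// Hypothetical sketch: a resumable render that abandons itself when a
// newer render has started, tracked with a global generation counter.
let currentGeneration = 0;

function createRender(steps) {
  const generation = ++currentGeneration; // newer renders bump this
  const output = [];
  let i = 0;
  // Each call performs one unit of work, like one fiber.
  return function continueRender() {
    if (generation !== currentGeneration) {
      return { status: "abandoned" }; // partial output is thrown away
    }
    if (i < steps.length) output.push(steps[i++]());
    return i < steps.length
      ? { status: "paused" }
      : { status: "done", output };
  };
}
```

Starting a render for "re" while the "r" render is paused bumps the generation; the next time the "r" render resumes, it sees the mismatch and abandons itself, just like the 22-23ms window in the timeline above.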

🔄 The Scheduler API

React’s scheduler uses the browser’s experimental Scheduler API where it’s available, and falls back to a MessageChannel trick everywhere else:

// Modern browsers (Chrome, Edge)
scheduler.postTask(() => {
  workLoopConcurrent();
}, { priority: 'background' });

// Fallback: MessageChannel for time slicing
const channel = new MessageChannel();
let scheduledCallback = null;

channel.port1.onmessage = () => {
  const callback = scheduledCallback;
  scheduledCallback = null;
  if (callback) callback();
};

function scheduleCallback(callback) {
  scheduledCallback = callback;
  channel.port2.postMessage(null);
}

Why MessageChannel?

  • setTimeout(fn, 0) gets clamped to a ~4ms minimum once timers nest (too slow!)
  • requestAnimationFrame only runs before paint (wrong timing)
  • MessageChannel runs immediately after current task (perfect!)
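
You can verify the "asynchronous, but no timer clamp" behavior directly. A small sketch (it also runs in Node, where MessageChannel is a global):

```javascript
// Sketch: a MessageChannel callback never runs inside the posting task;
// it fires on the next turn of the event loop, without setTimeout's clamp.
const channel = new MessageChannel();
let ran = false;

channel.port1.onmessage = () => {
  ran = true;
  channel.port1.close(); // allow a Node process to exit cleanly
};

channel.port2.postMessage(null);
console.log("ran synchronously?", ran); // false: delivery is asynchronous
```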

🎯 Complete Real-World Example: Dashboard

function Dashboard() {
  const [metric, setMetric] = useState('revenue');
  const [isPending, startTransition] = useTransition();

  const switchMetric = (newMetric) => {
    startTransition(() => {
      setMetric(newMetric);
    });
  };

  return (
    <div>
      <Tabs selected={metric} onChange={switchMetric} />
      {isPending && <LoadingBar />}
      <ExpensiveChart metric={metric} />
    </div>
  );
}

function ExpensiveChart({ metric }) {
  const chartData = useMemo(() => {
    // This takes 80ms to compute
    const data = [];
    for (let i = 0; i < 10000; i++) {
      data.push({
        x: i,
        y: complexCalculation(metric, i)
      });
    }
    return data;
  }, [metric]);

  return <ChartLibrary data={chartData} />;
}

Frame timeline when switching from “Revenue” to “Profit”:

Frame 1 (0-16ms):
├─ 0ms:   User clicks "Profit" tab
├─ 1ms:   metric = "profit" (TRANSITION priority)
├─ 2ms:   Tab switches to "Profit" ✅
├─ 3ms:   isPending = true
├─ 4ms:   LoadingBar appears ✅
│         User sees immediate feedback!
├─ 5ms:   Start chart re-render (LOW PRIORITY)
│         Calculate data point 0
├─ 6ms:   Calculate data point 1
├─ 7ms:   Calculate data point 2
│         ... (calculating in loop)
├─ 10ms:  Calculate data point 500
├─ 11ms:  shouldYield() = true
└─ 12ms:  PAUSE (used 5ms slice)
          Progress saved: at data point 500

Frame 2 (16-32ms):
├─ 16ms:  Resume chart calculation
├─ 17ms:  Calculate data point 501
├─ 18ms:  Calculate data point 502
│         ... (calculating in loop)
├─ 21ms:  Calculate data point 1000
├─ 22ms:  shouldYield() = true
└─ 23ms:  PAUSE again
          Progress saved: at data point 1000

// This continues across ~16 frames (80ms / 5ms per frame)

Frame 16 (240-256ms):
├─ 240ms: Resume chart calculation
├─ 241ms: Calculate data point 9998
├─ 242ms: Calculate data point 9999
├─ 243ms: All calculations complete! ✅
├─ 244ms: Commit phase: Update DOM
├─ 245ms: New chart renders
├─ 246ms: isPending = false
└─ 247ms: LoadingBar disappears

Total time: 247ms
But UI stayed responsive the entire time! 🎉

🏠 The Perfect Analogy: Restaurant Kitchen

Think of React’s time slicing like a restaurant kitchen:

Without Time Slicing (Old React):

  • Chef starts making a complex dish
  • New urgent order comes in (appetizer)
  • Chef: “Sorry, I have to finish this entrée first”
  • Customer waits 20 minutes for a simple appetizer 😡

With Time Slicing (Concurrent React):

  • Chef starts making complex entrée (5 min work)
  • After 30 seconds, checks: “Any urgent orders?”
  • Urgent appetizer comes in!
  • Chef: “Let me pause the entrée”
  • Makes appetizer immediately (2 min)
  • Returns to entrée
  • Both customers happy! 😊

🧪 Suspense with Time Slicing

Time slicing makes Suspense for data fetching smooth:

function ProfilePage({ initialUserId }) {
  const [userId, setUserId] = useState(initialUserId);
  const [isPending, startTransition] = useTransition();

  const switchUser = (newId) => {
    startTransition(() => {
      setUserId(newId);
    });
  };

  return (
    <>
      <button onClick={() => switchUser(userId + 1)}>Switch User</button>
      {isPending && <InlineSpinner />}
      <Suspense fallback={<Skeleton />}>
        <ProfileDetails userId={userId} />
      </Suspense>
    </>
  );
}

function ProfileDetails({ userId }) {
  const user = use(fetchUser(userId)); // Suspends (assumes fetchUser caches its promise)

  // Heavy computation after data loads
  const stats = useMemo(() => {
    return calculateComplexStats(user);
  }, [user]);

  return <ProfileView user={user} stats={stats} />;
}

Timeline when switching users:

Frame 1 (0-16ms):
├─ 0ms:   Click "Switch User"
├─ 1ms:   userId = 2 (TRANSITION priority)
├─ 2ms:   Start render ProfileDetails
├─ 3ms:   Suspend! (waiting for data)
├─ 4ms:   Old profile STAYS VISIBLE (smooth!)
└─ 5ms:   Inline spinner shows

... Network request in flight ...

Frame 50 (800-816ms):
├─ 800ms: Data arrives! fetchUser(2) resolves
├─ 801ms: Resume ProfileDetails render
├─ 802ms: Start calculateComplexStats (expensive!)
├─ 807ms: shouldYield() = true
└─ 808ms: PAUSE calculation

Frame 51 (816-832ms):
├─ 816ms: Resume calculateComplexStats
├─ 821ms: Calculation complete!
├─ 822ms: Commit phase
└─ 823ms: New profile smoothly appears ✅

No jarring skeleton screen!
Old content stayed visible during load!

🎯 Performance Monitoring

You can instrument a component to see where the time goes:

function ExpensiveComponent({ data }) {
  // Log when rendering starts/pauses
  console.log('Render start:', performance.now());

  const result = useMemo(() => {
    const start = performance.now();
    const computed = expensiveComputation(data);
    const end = performance.now();
    console.log(`Computation took: ${end - start}ms`);
    return computed;
  }, [data]);

  console.log('Render end:', performance.now());
  return <div>{result}</div>;
}

// Console output:
// Render start: 0.5
// Computation took: 50ms
// Render end: 50.7
//
// Caveat: a single useMemo is ONE unit of work. React yields BETWEEN
// fibers, not inside one component's render, so a monolithic 50ms
// computation still blocks its slice. Time slicing shines when the
// expensive render is spread across many components — e.g. a long
// list where each item is its own fiber.

🧠 The Mental Model Shift

Stop Thinking:

  • “React renders everything at once”
  • “Long computations always block the UI”
  • “I need to manually split work with setTimeout”

Start Thinking:

  • “React renders in 5ms chunks”
  • “Long work is automatically split across frames”
  • “Browser gets control back between chunks”
  • “UI stays responsive even during heavy work”

💭 The Takeaway

Many developers learn the HOW: “Use useDeferredValue and it makes things faster.”

When you understand the WHY: “React works in 5ms time slices, yielding control back to the browser after each slice to maintain 60fps, and can pause/resume work at any fiber node,” you gain insights that help you:

  • Understand why some operations feel instant
  • Know when concurrent features actually help
  • Debug performance with precise timing knowledge
  • Build UIs that feel professional and responsive

