React Performance Optimization — Part 4: Debouncing, Throttling & Request Batching



This content originally appeared on DEV Community and was authored by Sachin Maurya

✍ Intro:

Ever felt your app making too many API calls when typing or scrolling? Or your UI freezing during fast interactions? That’s where debouncing, throttling, and request batching come in — underrated techniques that can drastically reduce unnecessary renders and network overhead.

In this part of the React Performance series, we’ll break down:

  • What these techniques are
  • When and how to use them
  • Real-world examples in React
  • Tools and patterns that help

🚀 Main Content:

🔄 Debouncing: Delay the Action

  • What is it? Runs the action only after the user has stopped triggering it for a specified time.
  • Use case: Search inputs, filters, live suggestions.
  • React example using lodash:
import { debounce } from 'lodash';
import { useEffect, useMemo } from 'react';

const Search = () => {
  // Memoize the debounced function so the same instance survives re-renders
  const handleSearch = useMemo(
    () =>
      debounce((query) => {
        fetch(`/api/search?q=${encodeURIComponent(query)}`);
      }, 500),
    []
  );

  // Cancel any pending call when the component unmounts
  useEffect(() => () => handleSearch.cancel(), [handleSearch]);

  return <input onChange={(e) => handleSearch(e.target.value)} />;
};
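
Prefer a ready-made hook? The use-debounce package (listed in the tools below) wraps the same idea. A minimal sketch, assuming its useDebouncedCallback export:

import { useDebouncedCallback } from 'use-debounce';

const SearchWithHook = () => {
  // Same behaviour: the fetch fires 500ms after the last keystroke
  const handleSearch = useDebouncedCallback((query) => {
    fetch(`/api/search?q=${encodeURIComponent(query)}`);
  }, 500);

  return <input onChange={(e) => handleSearch(e.target.value)} />;
};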

🧊 Throttling: Limit the Frequency

  • What is it? Ensures a function runs at most once every X milliseconds, no matter how often the event fires.
  • Use case: Scroll, resize events.
  • React example:
import { throttle } from 'lodash';
import { useEffect } from 'react';

const ScrollTracker = () => {
  useEffect(() => {
    // Runs at most once every 300ms, however fast the user scrolls
    const handleScroll = throttle(() => {
      console.log('Scrolling...');
    }, 300);

    window.addEventListener('scroll', handleScroll);
    return () => {
      handleScroll.cancel(); // drop any pending trailing call
      window.removeEventListener('scroll', handleScroll);
    };
  }, []);

  return <div>Scroll to see throttle in action</div>;
};

📦 Request Batching: Merge Multiple Requests

  • What is it? Combining multiple API requests into one to reduce network load.
  • Use case: GraphQL (via Apollo's BatchHttpLink), custom REST batch endpoints, grouping related queries in React Query.
Example with Apollo Client (GraphQL):
import { ApolloClient, InMemoryCache } from '@apollo/client';
import { BatchHttpLink } from '@apollo/client/link/batch-http';

// BatchHttpLink collects operations fired close together and sends them
// as a single HTTP request (the server must accept batched operations).
const client = new ApolloClient({
  cache: new InMemoryCache(),
  link: new BatchHttpLink({
    uri: '/graphql',
    batchMax: 5,       // at most 5 operations per batch
    batchInterval: 20, // wait up to 20ms to collect operations
  }),
});
Bonus: manual grouping with React Query (this doesn't merge the HTTP requests, but it runs them in parallel under one query key and one loading state):
import { useQuery } from '@tanstack/react-query';

const useBatchedData = () => {
  return useQuery({
    queryKey: ['batchedData'],
    // Fire both requests in parallel and cache them as one combined result
    queryFn: async () => {
      const [a, b] = await Promise.all([
        fetch('/api/a').then((res) => res.json()),
        fetch('/api/b').then((res) => res.json()),
      ]);
      return { a, b };
    },
  });
};
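
Consuming it looks like any other query hook, with one loading flag covering both endpoints (the component name here is just illustrative):

const Dashboard = () => {
  const { data, isLoading } = useBatchedData();
  if (isLoading) return <p>Loading...</p>;
  return <pre>{JSON.stringify(data, null, 2)}</pre>;
};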

💡 Key Takeaways:

  • Debounce to wait until the user stops typing before acting.
  • Throttle to limit high-frequency events like scroll and resize.
  • Batch requests to cut network round trips and server load.
  • Combine these with React Query (TanStack Query) or Apollo for a smooth UX and blazing speed.

🧰 Tools & Libraries:

  • lodash (debounce/throttle)
  • use-debounce (React hook)
  • BatchHttpLink (@apollo/client/link/batch-http)
  • tanstack/query
  • Custom batching for REST APIs (see the sketch after this list)
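
REST has no built-in batching, but the pattern is straightforward: queue calls for a few milliseconds, then send them as one request. A minimal sketch, assuming a hypothetical /api/users/batch endpoint that accepts a list of ids and returns the matching user objects:

// Module-level queue shared by every caller of fetchUserBatched
let queue = [];
let flushTimer = null;

export const fetchUserBatched = (id) =>
  new Promise((resolve, reject) => {
    queue.push({ id, resolve, reject });

    // Wait 20ms so calls made close together share a single request
    if (!flushTimer) {
      flushTimer = setTimeout(async () => {
        const batch = queue;
        queue = [];
        flushTimer = null;

        try {
          const res = await fetch('/api/users/batch', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ ids: batch.map((item) => item.id) }),
          });
          const users = await res.json(); // assumed shape: array of { id, ... }
          const byId = new Map(users.map((user) => [user.id, user]));
          batch.forEach(({ id, resolve, reject }) => {
            if (byId.has(id)) resolve(byId.get(id));
            else reject(new Error(`User ${id} not found`));
          });
        } catch (err) {
          batch.forEach(({ reject }) => reject(err));
        }
      }, 20);
    }
  });

Calling fetchUserBatched several times in the same tick now produces one POST instead of N separate GETs; the trade-off is the small (here 20ms) delay before anything is sent.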

🔚 Conclusion:

Small tweaks like debouncing and throttling can make a huge difference in perceived performance and server costs. When paired with smart request batching, your app becomes more scalable, responsive, and production-ready.

In Part 5, we’ll cover Code Splitting, Dynamic Imports & Bundle Analysis — the next layer of optimizing performance at the build level.

