Quick Ways to Spot a Slow API/System



This content originally appeared on DEV Community and was authored by Ikhwan A Latif

I recently read Blink — the book about how experts can spot mistakes in seconds (sometimes milliseconds). Developers build the same intuition: after getting burned enough times, you just feel when code will be slow.

I keep getting hit by the same types of slow code, so here are the patterns I now spot instantly and how to fix them.

The N+1 Query

Ah yes, the problem I see most often when developing. It can slow your app down significantly. Why? Because you call a query inside a loop. Think about it: you have one user and a thousand products. Each product has a detail row somewhere in another table, and on each iteration of the loop you query that product's detail. For me this isn't a problem if the user only has around 5 products, but at 1,000 products you'd better ditch the code and start over. The code I usually encounter looks like this:

const products = await Product.getAll(userId); // fetch all products

for (let i = 0; i < products.length; i++) {
    const product = products[i];
    const productDetail = await ProductDetail.getOne(product.id); // one query per product: the N+1
    // ...other logic
}

or

const products = await Product.getAll(userId); // fetch all products

products.map(async product => {
    const productDetail = await ProductDetail.getOne(product.id); // still one query per product
    // ...other logic
});

Be careful: an async function returns a promise, so .map(async () => {}) gives you back an array of promises. If you need the results (or want errors to surface), wrap it in Promise.all().
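For completeness, here's a minimal sketch of that wrapping, using the same hypothetical Product/ProductDetail API as above:

const products = await Product.getAll(userId);

// map() returns an array of promises; Promise.all waits for all of them
// and rejects as soon as any one of them rejects.
const productDetails = await Promise.all(
    products.map(product => ProductDetail.getOne(product.id))
);

Note that this doesn't fix the N+1 itself; it still fires one query per product, just concurrently instead of sequentially.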

Both of the original snippets share the same root problem: a query inside a loop. Like I said before, 5–15 products is totally fine, but 1,000 products? Eeh. I usually tackle this situation with code like the one below:

const products = await Product.getAll(userId); // fetch all products
const allProductIds = products.map(x => x.id);
const allProductDetails = await ProductDetail.getAllBasedOnId(allProductIds); // one query for all details

for (let i = 0; i < products.length; i++) {
    const product = products[i];
    const productDetail = allProductDetails.find(d => d.product_id === product.id);
    // ...other logic
}

You can do it like the above, or you can convert allProductDetails into a hash table (an Object or a Map, if you prefer) instead of scanning the array on every iteration:

const products = await Product.getAll(userId);
const ids = products.map(p => p.id);
const allDetails = await ProductDetail.getAllByIds(ids);

// fast lookup
const detailsMap = new Map(allDetails.map(d => [d.product_id, d]));

for (const product of products) {
  const detail = detailsMap.get(product.id);
  // ...other logic
}

There are multiple ways to handle it. For example, you can push the work into the query itself with a join, and keep the rest in application logic:

const allProductDetails = await ProductDetail.getAll({
    join: {
        relation: Product,
        key: 'id',
        field: [],
        join: {
            relation: User,
            key: 'id',
            field: [],
            where: { user: userId }, // avoid string concatenation: it invites SQL injection
        },
    },
});

for (let i = 0; i < allProductDetails.length; i++) {
    const productDetail = allProductDetails[i];
    // ...logic to prevent duplicate products
}

Note: the .getAll() is just an example
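In raw SQL, the equivalent single query would look something like this (the product_detail table and the user_id and description columns are my assumption, following the examples above):

SELECT pd.product_id, pd.description
FROM product_detail AS pd
INNER JOIN product AS p ON pd.product_id = p.id
WHERE p.user_id = ?  -- one round trip instead of one query per product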

The code above saves a lot of time because we avoid calling the DB on every single iteration.

Indexes, a Double-Edged Sword

One of the most common places where performance tanks is when you start joining big tables. At first the query looks innocent:


SELECT * FROM product_history AS ph 
INNER JOIN product AS p ON ph.product_id = p.id

For small data it's OK, but once the table reaches 100K or even 1M records, you should consider adding an index. There are two ways to add one: through a FOREIGN KEY, or directly with CREATE INDEX.

Adding an Index With a Foreign Key

-- Add a foreign key (pick one option from each [ ... ] group)
ALTER TABLE product_history
    ADD FOREIGN KEY (product_id) REFERENCES product(id)
    ON DELETE [CASCADE | NO ACTION | RESTRICT | SET NULL]
    ON UPDATE [CASCADE | NO ACTION | RESTRICT | SET NULL]

ALTER/ADD FOREIGN KEY semantics vary by engine (SQLite often requires table rebuild). Check your DB docs.

The example above adds a foreign key to your table. What does a foreign key do? It's there for data integrity between two tables, that's all: it makes sure every product_history.product_id actually exists in product.id. And if a change would break that integrity, the database takes action: it either blocks the change (RESTRICT / NO ACTION), sets the column to NULL (SET NULL), or propagates the change (CASCADE).

But here's the bonus: in MySQL (and most relational databases), when you declare a foreign key, the database either requires an index on the column or silently creates one for you. That means when you join the two tables later, the database doesn't have to scan row by row; it can use the index to jump straight to the matching rows.
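You can see this for yourself in MySQL, using the table from the example above:

-- List the indexes MySQL keeps for the table; product_id will show up
-- even though we never ran CREATE INDEX ourselves.
SHOW INDEX FROM product_history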

Note: you can run EXPLAIN [your query] to get information about how the query will be executed. I might explain how to read the EXPLAIN output in a future post.

Adding an Index Directly

You can also add an index explicitly:


CREATE INDEX idx_product_history_product_id ON product_history(product_id [ASC | DESC])

The statement above creates an index named idx_product_history_product_id on the product_id column of product_history, sorted ASC or DESC depending on which you choose.

Let me say it again: with only 100 to 10K records the query is not a problem. But with 500K or 1M rows, this simple query becomes a bottleneck. And of course, be careful when adding indexes: too many of them makes write performance slow, because every INSERT and UPDATE has to maintain each index as well. So add an index only when it's necessary.

Note: if you have 500K or 1M records and you add a new index, the database will take its time to complete the operation, because an index is a separate structure (a B-tree) stored apart from the table and not directly visible to you. During that operation, your writes may be blocked or slowed down. It's easy to panic the first time you see this (always have another pair of eyes when you play with the prod DB).
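If you're on MySQL (5.6+ with InnoDB), you can ask for an online build explicitly; the statement fails up front instead of silently locking if the engine can't comply:

ALTER TABLE product_history
    ADD INDEX idx_product_history_product_id (product_id),
    ALGORITHM = INPLACE, LOCK = NONE
-- LOCK = NONE keeps reads and writes flowing while the index is built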

Use EXPLAIN to Verify

Use EXPLAIN to see whether your query uses an index. But be wary, each relational database (MySQL, PostgreSQL, SQLite, etc.) can have a very different query planner (SQL optimizer).

The planner is the brain of the database: it decides whether to use an index, a sequential scan, or another strategy to get the fastest result. That's why the same query can behave very differently depending on the database:

  • MySQL often prefers indexes aggressively. You'll see things like eq_ref or ref in the EXPLAIN output when an index is used.

  • PostgreSQL is cost-based: if scanning the whole table is cheaper, it will happily choose a full table scan even when an index exists.

  • SQLite also has a planner, and it behaves closer to Postgres: if your query isn't selective, it may ignore the index.

So don't panic if you see different results across databases; the optimizer is just making trade-offs for you. You can refer to each database's documentation for more details.
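A quick sketch of what the check looks like in MySQL, reusing the index from earlier (the output column names mentioned below are MySQL-specific):

EXPLAIN
SELECT ph.product_id, p.product_name
FROM product_history AS ph
INNER JOIN product AS p ON ph.product_id = p.id
-- In the output, look at the `type` and `key` columns:
-- `ref` / `eq_ref` with key = idx_product_history_product_id means the index is used,
-- while `ALL` means a full table scan.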

SELECT *, Death by a Thousand Bytes

Let's continue with the same example:


SELECT * FROM product_history AS ph 
INNER JOIN product AS p ON ph.product_id = p.id

Did you see something wrong? The asterisk (*). You should avoid it: what it does is fetch every column. Imagine you have 20 columns and 10 of them are TEXT or MEDIUMTEXT, or even worse, JSON. Ugh… the pain.

For reference:

  • TEXT maxes out at ~64 KB (65,535 bytes)
  • MEDIUMTEXT maxes out at ~16 MB (16,777,215 bytes)
  • JSON depends (each database has its own rules for how JSON is stored)

For the exact numbers, see MySQL's "Data Type Storage Requirements" documentation.

Now let’s play with numbers:

  • Say you fetch 100,000 rows, each with a TEXT column.
  • Worst case, that's 100,000 × 65,535 bytes ≈ 6.5 GB of raw data.
  • Of course, databases apply compression and optimizations, but you get the idea: the memory footprint can blow up fast.

The database will use a lot of memory just to process that SELECT, and if you combine it with the previous problem, a missing index, no wonder the fetch is slow.

First of all, when you create a table, please consider which types you actually need; don't just throw a large type in there. If the column holds a UUID, use CHAR(36) or something close. Don't put CHAR(200) in there just because you want to "scale it for the future". As a software developer you solve the problem in front of you, so think about how to solve it cleanly now.
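For example, a hypothetical product table sized to what the data actually is:

CREATE TABLE product (
    id           CHAR(36)     NOT NULL,  -- a UUID is exactly 36 characters; CHAR(200) is waste
    product_name VARCHAR(100) NOT NULL,  -- bounded string instead of TEXT
    PRIMARY KEY (id)
)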

How to fix the query itself? Name the columns you need:


SELECT ph.product_id, p.product_name, ph.activity 
FROM product_history AS ph 
INNER JOIN product AS p ON ph.product_id = p.id

It's convenient to type an asterisk (*), but don't let that become normal behavior. Select only the necessary columns.

Loops, the Good and the Bad (JS)

Loops are the most important piece of code in every language. You can't avoid them; if you've ever studied time complexity, it's always about loops and the amount of data.

But before I explain further: if you're not a developer who has bound their soul to JavaScript, you can skip this part.

OK, loops. In JavaScript there are three styles of loop I want to talk about:

for

My favorite kind of loop: simple, straightforward, and no surprises, it does exactly what you tell it.

// i can do this
for (let i = 0; i < 10; i++) {}

// i can do this too
for (let i = 10; i >= 0; i--) {}

// even this (starting at 1, since 0 *= 2 would loop forever)
for (let i = 1; i < 10; i *= 2) {}

// heck, even this
for (let i = Math.E; i < Math.pow(Math.PI, 20); i += Math.pow(i, 2)) {}

Endless possibilities; the only thing holding you back is your imagination. This classic loop leaves almost no memory footprint, which eases the pain for JavaScript's garbage collector. And since it's so close to how the CPU actually iterates, the engine can optimize it well.

for (const item of items)


for (const item of items) { /* ... */ }

Not my favorite loop, but I don't hate it either:

  1. It uses the iterator protocol under the hood (roughly: the loop keeps calling a next() method that returns {value, done}, and stops once done is true); see the sketch below.
  2. Engines generally can't optimize it as aggressively as a plain for loop, because of the extra iterator machinery.
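Here's a minimal sketch of what for...of is doing for you, spelled out by hand in plain, runnable JavaScript:

const items = ['a', 'b', 'c'];
const iterator = items[Symbol.iterator]();

let result = iterator.next();      // { value: 'a', done: false }
while (!result.done) {
    const item = result.value;
    // ...loop body goes here
    result = iterator.next();      // eventually { value: undefined, done: true }
}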

Still, for moderate data sets it’s perfectly fine and often more readable.

For more information, see the iteration protocols documentation on MDN.

arr.forEach(callback)

Hated it. Why? It's damn slow in hot paths compared to a plain for loop. And so many people use it (because it looks clean), yet I can't even stop it with break; you have to throw an exception to break out. Here's why I hate it:

  1. Every element means another function call, and the callback's closure means more garbage for the collector.
  2. You can't stop the loop with break; you have to throw an exception to stop.
  3. Nesting forEach calls is a quick way to create GC hell.
  4. The optimizer will probably give up when it sees forEach everywhere.

forEach is fine for examples, but in performance-sensitive code you should replace it.
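A small sketch of the replacement, showing the one thing forEach simply can't do: stop early.

const items = Array.from({ length: 1_000_000 }, (_, i) => i);

// forEach must visit every element; there is no break.
// A plain for loop can bail out as soon as it has its answer:
let firstBig = -1;
for (let i = 0; i < items.length; i++) {
    if (items[i] > 10) {
        firstBig = items[i];
        break; // early exit, impossible with forEach
    }
}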

Conclusion

You don't know how many times I've been hit by the problems above. Whenever a client complains that a report feature is slow, it's always one of these four problems. Sure, sometimes I have to find a creative way to solve something, but when you get hit by a performance problem, always look for these four first. It saves a lot of time. Here's my step-by-step for handling a performance problem:

  1. Look at how much data you're playing with
  2. Look at how many tables you join
  3. Look at the code: does it have the N+1 problem?
  4. Look at the SELECT query
  5. Look at the indexes
  6. Look at the query plan using EXPLAIN
  7. Then you can change the code to different logic

As I wrote above, this is just my way of diagnosing things. In the end, what makes an app slow comes down to how you access your data, how much data you play with, and how you store that data.

As always: measure first, then fix.
