I explored how to speed up query processing in the database.

Vanshika Sabharwal, 2025-11-01

I was assigned to optimize the database to reduce query processing time: 3–5 second delays were driving user drop-offs of around 40%, and the goal was to bring those delays down.

I asked one of my seniors for a practical, real-world optimization tip rather than theory, and he told me to use indexes. A quick search showed that adding indexes is as simple as adding a few lines to my existing Prisma schema file and then running a migration to apply the change to the actual database. I also wrote a test-indexes.ts file to measure the difference: adding indexes cut query processing time by more than 50%.
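As an illustration, here's roughly what that looks like in a Prisma schema. The model and field names below are made up for the example, not taken from the actual project:

```prisma
model Transaction {
  id        Int      @id @default(autoincrement())
  userId    Int
  createdAt DateTime @default(now())

  // Composite index to speed up "all transactions for a user, newest first"
  @@index([userId, createdAt])
}
```

After editing the schema, running `npx prisma migrate dev` generates and applies the migration, which boils down to a `CREATE INDEX` statement on the Postgres side.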

But why are indexes so fast? Under the hood, Postgres indexes use a B-Tree data structure, which lets it locate the needed rows in O(log n) time. Without an index, Postgres falls back to a sequential scan over the whole table, which is O(n) and gets much slower as the row count grows.
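You can feel the O(log n) vs O(n) difference with a toy comparison: binary search over sorted data (the same idea a B-Tree exploits) versus a linear scan. This is just an illustrative sketch, not what Postgres literally does:

```typescript
// Counts how many comparisons each strategy needs to find a value.

// Sequential scan: check every row until we hit the target (O(n)).
function seqScanComparisons(rows: number[], target: number): number {
  let comparisons = 0;
  for (const row of rows) {
    comparisons++;
    if (row === target) break;
  }
  return comparisons;
}

// Binary search over sorted data: halve the range each step (O(log n)).
function binarySearchComparisons(rows: number[], target: number): number {
  let comparisons = 0;
  let lo = 0;
  let hi = rows.length - 1;
  while (lo <= hi) {
    comparisons++;
    const mid = (lo + hi) >> 1;
    if (rows[mid] === target) break;
    if (rows[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return comparisons;
}

const rows = Array.from({ length: 1_000_000 }, (_, i) => i);
console.log(seqScanComparisons(rows, 999_999));       // 1,000,000 comparisons
console.log(binarySearchComparisons(rows, 999_999));  // about 20 comparisons
```

For a million rows, the scan does a million comparisons while binary search needs around twenty, which is why indexed lookups stay fast even as tables grow.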

I've implemented these index optimizations in my personal project, V-Wallet.

You can check out the GitHub repo here: https://github.com/VanshikaSabharwal/v-wallet.

Key Points

  • Redis caching is perfect for cutting down DB calls for frequently used data; it sits in memory, so you just grab info almost instantly.
  • Read replicas are copies of your main DB, and you can route all your read requests there — super useful for scaling reads and not overloading your main database.
  • Indexes help most with reads (SELECTs, JOINs, WHERE, etc.) but have a slight impact on write operations (INSERT/UPDATE/DELETE) since the indexes need to be updated too. This overhead is typically acceptable in most real-world scenarios, given the query performance benefits.
  • If you care about fast, reliable access for your users, these optimizations are proven and effective in real-world setups.
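The Redis caching point above follows the cache-aside pattern: check the cache first, fall back to the database on a miss, then populate the cache for the next reader. Here's a minimal sketch; a `Map` stands in for Redis so it runs anywhere, and `fetchUserFromDb` is a hypothetical stand-in for a real query like `prisma.user.findUnique`:

```typescript
type User = { id: number; name: string };

// Hypothetical stand-in for the real database call.
async function fetchUserFromDb(id: number): Promise<User> {
  return { id, name: `user-${id}` };
}

// In-memory stand-in for Redis; swap for a Redis client (GET/SETEX) in production.
const cache = new Map<number, User>();

async function getUser(id: number): Promise<User> {
  const cached = cache.get(id);
  if (cached) return cached;              // cache hit: no DB round trip
  const user = await fetchUserFromDb(id); // cache miss: query the DB once
  cache.set(user.id, user);               // populate for subsequent reads
  return user;
}
```

With a real Redis client you'd also set an expiry (e.g. SETEX) so cached entries don't go stale forever, a detail the in-memory sketch skips.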