How I Migrated 182,000 Users to Supabase

From putting it off for months to migrating 182k users — how I finally moved my app to Supabase.

Migrating 182,000 users to a new backend is no small feat — especially when you’re a solo mobile dev juggling a full-time job. But I pulled it off. I transitioned my app, OSM Tactics, from a tangled NestJS + MongoDB setup to Supabase, the hot topic in backend circles right now. In this article, I’ll walk you through the journey: why I chose Supabase, the challenges I faced, and how the migration actually went.

Why Supabase?

When my app’s backend started feeling old, I looked around for something better, and Supabase caught my eye. It’s an open-source platform that gets compared to Firebase a lot, but it’s simpler to jump into: you get a PostgreSQL database with an auto-generated API, real-time subscriptions, and authentication, all set up out of the box. For someone who builds mobile apps, that’s perfect; I don’t want to mess with complicated server plumbing. You can get a Supabase project running in about five minutes, and the docs are straightforward enough that I never felt lost, even though I’m not a backend pro.

Supabase isn’t just easy, though; it’s growing fast. It’s not as huge as Firebase yet, but plenty of developers are using it and the community keeps expanding. It has everything I need: a database, auth, file storage, and small edge functions I can run without managing a full server. Where Firebase hides its internals, Supabase is built on PostgreSQL, a mature, battle-tested database that people have trusted for decades, which means I can shape my schema however I want and scale later without getting boxed in. There’s also a big community sharing tips, tricks, and fixes on places like GitHub, so I knew that if I got stuck, I could Google it or ask someone. For a solo coder who’s short on time, Supabase felt like the right pick: simple, but strong enough to carry my app.
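To show what “easy API” means in practice, here’s roughly what talking to Supabase looks like from a client (a sketch; the URL, key, and table name are placeholders, not my project’s values):

import { createClient } from '@supabase/supabase-js';

// Every project gets its own URL and anon key
const supabase = createClient('https://your-project.supabase.co', 'your-anon-key');

// One call reads rows from a table; no server code involved
const { data, error } = await supabase.from('profiles').select('*').limit(10);
if (error) console.error(error);
else console.log(data);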

The Old Setup Before the Big Switch

Before I jumped to Supabase, let’s talk about what I was using. My app, OSM Tactics, is a mobile game for Android and iOS where people plan out tactics and strategies; it’s pretty cool if you’re into that. The backend was something I built myself with NestJS, a Node.js framework I liked because it works with TypeScript and keeps things organized. It talked to MongoDB, a database where I kept all the user info, game data, and other bits. I’d been running this setup for almost four years.

The latest snapshot of MongoDB before the migration (OSM Tactics, aka “OSM Guide”)

But here’s the problem: I’m just one guy with a full-time job, and keeping this backend alive was killing me. At first, NestJS and MongoDB were fine; I could handle them. But as more users joined, it got messy. I had to spend way too much time making sure the server didn’t crash, patching security holes, and keeping it fast enough. Security was the worst part: I know some Node.js, but I’m a mobile app guy, not a backend expert. The setup still worked, kind of, but it was like an old car that kept breaking down. I couldn’t keep patching it forever; I needed something new that didn’t make me want to pull my hair out.

The Leap: Deciding to Migrate

Deciding to move everything wasn’t quick — it took forever because I kept putting it off. OSM Tactics isn’t some giant app with tons of features, but 182k users is a lot of data to deal with. Going from MongoDB, where data is all loose and free, to Supabase’s PostgreSQL, where everything’s in neat tables, sounded hard. I’d have to redo how the data was set up and make sure I didn’t lose anything important. The old NestJS-MongoDB combo was still running, but I had this bad feeling it’d blow up soon. That worry kept bugging me until I finally said, “Okay, let’s do it.”

Once I decided, I had to plan it out big time. I sat down and figured out how to turn my MongoDB stuff into Supabase tables — like drawing a map for where everything would go. Step one was getting all my data out of MongoDB safely, so I wrote a little script to dump it into JSON files. Here’s that code — it’s basic but did the job:

const fs = require('fs');
const path = require('path');
const mongoose = require('mongoose');

const mongoURI = 'your-connection-string';

// A schema with strict mode off accepts whatever shape each document has
const genericSchema = new mongoose.Schema({}, { strict: false });

async function exportCollection(collectionName, outputFileName, query = {}) {
  try {
    await mongoose.connect(mongoURI);

    // Reuse the model if it was already compiled, so repeated calls don't throw
    const Model =
      mongoose.models[collectionName] ||
      mongoose.model(collectionName, genericSchema, collectionName.toLowerCase());

    const documents = await Model.find(query).lean();

    const outputPath = path.join(__dirname, outputFileName);
    fs.writeFileSync(outputPath, JSON.stringify(documents, null, 2));

    await mongoose.disconnect();

    return documents;
  } catch (error) {
    // Log instead of swallowing the error silently
    console.error(`Error exporting ${collectionName}:`, error);
    if (mongoose.connection.readyState !== 0) {
      await mongoose.disconnect();
    }
  }
}

This script hooked up to MongoDB, pulled out all the data, and saved it as JSON files on my computer. It was like a backup — I could mess around without losing everything if I screwed up. I ran it for every collection, checked the files to make sure they looked right, and felt a bit better knowing I had copies. Still, I dragged my feet starting the real work — I’d stare at the screen, sip coffee, and think, “Maybe tomorrow.” It took a big push to stop slacking and actually get into it, but once I did, I was all in.
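For completeness, here’s how calling that export looks (the collection names and the date filter here are made up for illustration; yours will differ):

// One export per collection; the optional query narrows what gets dumped
await exportCollection('Users', 'users.json');
await exportCollection('Tactics', 'tactics.json');
await exportCollection('Users', 'recent_users.json', {
  updatedAt: { $gte: new Date('2024-01-01') },
});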

Migrating Users: The Password Puzzle

Here’s where it got really hairy: moving 182k users and keeping their passwords working. Users expect to log in with the same email and password after I update the app; that’s just how it should be. The saving grace? Supabase stores passwords as bcrypt hashes, the same scheme my old backend used, so I didn’t have to rehash anything or force users to pick new passwords. That would’ve been a terrible experience; people hate resetting passwords, and I’d have lost a bunch of them.
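Before trusting that, I’d want proof the stored hashes really are bcrypt. Here’s a quick sanity check along those lines (a sketch, not part of the original migration; it reads the JSON dump from the export script above):

const fs = require('fs');

// bcrypt hashes start with $2a$, $2b$, or $2y$ followed by a two-digit cost factor
const looksLikeBcrypt = (hash) => /^\$2[aby]\$\d{2}\$/.test(hash || '');

const users = JSON.parse(fs.readFileSync('./mongo_users.json', 'utf8'));
const incompatible = users.filter((u) => u.password && !looksLikeBcrypt(u.password));
console.log(`${incompatible.length} of ${users.length} users have non-bcrypt hashes`);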

Dealing with Supabase’s Picky Auth Table

Supabase has this auth.users table for logins, and it’s super strict. It has all these fields you have to fill out just right or it freaks out; leave a token field as null where it expects an empty string, for example, and logins break. I found some errors other people hit in this GitHub thread, and it scared me straight. I had to match the format perfectly. Here’s what a user looked like for Supabase (I didn’t use phone auth, so those fields are null; change them if you do):

const supabaseUser = {
  id: uid,
  email: user.email,
  encrypted_password: user.password || null,
  instance_id: '00000000-0000-0000-0000-000000000000',
  aud: 'authenticated',
  role: 'authenticated',
  email_confirmed_at: user.createdAt
    ? new Date(user.createdAt).toISOString()
    : new Date().toISOString(),
  created_at: user.createdAt
    ? new Date(user.createdAt).toISOString()
    : new Date().toISOString(),
  updated_at: user.updatedAt ? new Date(user.updatedAt).toISOString() : null,
  last_sign_in_at: null,
  phone: user.phone || null,
  phone_confirmed_at: null,
  confirmed_at: user.createdAt
    ? new Date(user.createdAt).toISOString()
    : new Date().toISOString(),
  invited_at: null,
  confirmation_token: '',
  confirmation_sent_at: null,
  recovery_token: '',
  recovery_sent_at: null,
  email_change_token_new: '',
  email_change: '',
  email_change_sent_at: null,
  email_change_token_current: '',
  email_change_confirm_status: 0,
  banned_until: null,
  reauthentication_token: '',
  reauthentication_sent_at: null,
  is_super_admin: null,
  phone_change: '',
  phone_change_token: '',
  phone_change_sent_at: null,
  is_sso_user: false,
  deleted_at: null,
  is_anonymous: false,
  raw_user_meta_data: JSON.stringify(rawUserMetaData),
  raw_app_meta_data: JSON.stringify(rawAppMetaData),
};

I had to get every single field right or Supabase wouldn’t take it. To move all my MongoDB users, I wrote a bigger script that grabbed them, turned them into this format, and saved them as a CSV file. Here’s that script — it’s long but shows the whole process:

const fs = require('fs');
const path = require('path');
const { parse } = require('json2csv');
const { v4: uuidv4 } = require('uuid');
const mongoose = require('mongoose');

const mongoURI = 'your-connection-string';

// Define a generic schema for flexible document structure
const userSchema = new mongoose.Schema({}, { strict: false });

async function fetchUsersFromMongoDB() {
  try {
    console.log('Connecting to MongoDB…');
    await mongoose.connect(mongoURI);
    console.log('Connected to MongoDB');

    const UserModel = mongoose.model('Users', userSchema, 'users');

    console.log('Fetching users from MongoDB…');
    const users = await UserModel.find({}).lean();
    console.log(`Found ${users.length} users in MongoDB`);

    await mongoose.disconnect();
    console.log('Disconnected from MongoDB');
    return users;
  } catch (error) {
    console.error('Error fetching users from MongoDB:', error);
    if (mongoose.connection.readyState !== 0) {
      await mongoose.disconnect();
    }
    throw error;
  }
}

async function processUsers() {
  console.time('Total processing time');
  const mongoUsersFilePath = path.join(__dirname, 'mongo_users.json');
  try {
    // Step 1: Fetch users from MongoDB
    const users = await fetchUsersFromMongoDB();

    // Step 2: Write users to a fresh JSON file
    fs.writeFileSync(mongoUsersFilePath, JSON.stringify(users, null, 2));
    console.log(`Fresh users data written to: ${mongoUsersFilePath}`);

    // Step 3: Read the fresh users file
    const usersData = JSON.parse(fs.readFileSync(mongoUsersFilePath, 'utf8'));
    console.log(`Read ${usersData.length} users from fresh file`);

    // Step 4: Keep only users with an email or at least one provider
    const filteredUsers = usersData.filter((user) => {
      return user.email || user.googleId || user.appleId;
    });
    console.log(`Filtered users: ${filteredUsers.length}`);

    // Track used IDs and emails to guarantee uniqueness
    const usedIds = new Set();
    const usedEmails = new Set();
    const duplicateEmails = [];

    // Step 5: Transform the data to match the Supabase Auth users structure
    const transformedUsers = [];
    filteredUsers.forEach((user) => {
      if (!user.email) {
        console.log('Skipping user without email:', user._id);
        return;
      }

      // Check for duplicate email
      const lowerEmail = user.email.toLowerCase();
      if (usedEmails.has(lowerEmail)) {
        duplicateEmails.push({
          email: user.email,
          id: user._id,
          googleId: user.googleId,
          appleId: user.appleId,
        });
        console.warn(`Duplicate email detected and skipped: ${user.email}`);
        return;
      }
      usedEmails.add(lowerEmail);

      let uid = uuidv4();
      while (usedIds.has(uid)) {
        console.warn('Duplicate ID detected:', uid);
        uid = uuidv4();
      }
      usedIds.add(uid);

      const rawUserMetaData = {
        sub: uid,
        email: user.email,
        email_verified: true,
        phone_verified: false,
      };

      const providers = [];

      // Check for Google ID
      if (user.googleId) {
        providers.push('google');
        rawUserMetaData.iss = 'https://accounts.google.com';
        rawUserMetaData.sub = user.googleId;
        rawUserMetaData.provider_id = user.googleId;
      }

      // Check for Apple ID; only override iss and sub if Google wasn't already set
      if (user.appleId) {
        providers.push('apple');
        if (!user.googleId) {
          rawUserMetaData.iss = 'https://appleid.apple.com';
          rawUserMetaData.sub = user.appleId;
          rawUserMetaData.provider_id = user.appleId;
        }
      }

      // If no external providers, use email
      if (providers.length === 0) {
        providers.push('email');
      }

      // Set the raw app metadata
      const rawAppMetaData = {
        provider: providers[0], // Primary provider is the first one
        providers: providers, // All providers in an array
      };

      transformedUsers.push({
        id: uid,
        email: user.email,
        encrypted_password: user.password,
        instance_id: '00000000-0000-0000-0000-000000000000',
        aud: 'authenticated',
        role: 'authenticated',
        invited_at: null,
        confirmation_token: '',
        confirmation_sent_at: null,
        recovery_token: '',
        recovery_sent_at: null,
        email_change_token_new: '',
        email_change: '',
        email_change_sent_at: null,
        last_sign_in_at: new Date().toISOString(),
        is_super_admin: null,
        phone: user.phone || null,
        phone_confirmed_at: null,
        phone_change: '',
        phone_change_token: '',
        phone_change_sent_at: null,
        confirmed_at: new Date().toISOString(),
        email_change_token_current: '',
        email_change_confirm_status: 0,
        banned_until: null,
        reauthentication_token: '',
        reauthentication_sent_at: null,
        is_sso_user: false,
        deleted_at: null,
        is_anonymous: false,
        created_at: user.createdAt
          ? new Date(user.createdAt).toISOString()
          : new Date().toISOString(),
        updated_at: null,
        email_confirmed_at: user.createdAt
          ? new Date(user.createdAt).toISOString()
          : new Date().toISOString(),
        // Store the metadata as JSON strings so the CSV cells are explicit
        raw_user_meta_data: JSON.stringify(rawUserMetaData),
        raw_app_meta_data: JSON.stringify(rawAppMetaData),
      });
    });

    // Step 6: Convert the transformed data to CSV
    const csvFields = [
      'id',
      'email',
      'encrypted_password',
      'instance_id',
      'aud',
      'role',
      'email_confirmed_at',
      'invited_at',
      'confirmation_token',
      'confirmation_sent_at',
      'recovery_token',
      'recovery_sent_at',
      'email_change_token_new',
      'email_change',
      'email_change_sent_at',
      'last_sign_in_at',
      'raw_app_meta_data',
      'raw_user_meta_data',
      'is_super_admin',
      'created_at',
      'updated_at',
      'phone',
      'phone_confirmed_at',
      'phone_change',
      'phone_change_token',
      'phone_change_sent_at',
      'confirmed_at',
      'email_change_token_current',
      'email_change_confirm_status',
      'banned_until',
      'reauthentication_token',
      'reauthentication_sent_at',
      'is_sso_user',
      'deleted_at',
      'is_anonymous',
    ];

    // Write duplicate emails to a file for reference
    if (duplicateEmails.length > 0) {
      fs.writeFileSync(
        path.join(__dirname, 'duplicate_emails.json'),
        JSON.stringify(duplicateEmails, null, 2),
      );
      console.log('Duplicate emails written to duplicate_emails.json');
    }

    const csv = parse(transformedUsers, { fields: csvFields });

    // Step 7: Write the CSV to a file
    const csvFilePath = path.join(__dirname, 'supabase_users.csv');
    fs.writeFileSync(csvFilePath, csv);
    console.log('CSV file has been generated:', csvFilePath);

    console.timeEnd('Total processing time');
  } catch (error) {
    console.error('Error processing users:', error);
  } finally {
    // Step 8: Delete the temporary fresh users file
    if (fs.existsSync(mongoUsersFilePath)) {
      try {
        fs.unlinkSync(mongoUsersFilePath);
        console.log(`Temporary file deleted: ${mongoUsersFilePath}`);
      } catch (deleteError) {
        console.error(`Error deleting temporary file: ${deleteError.message}`);
      }
    }
  }
}

processUsers();

After I had that CSV, I made a staging table in Supabase shaped like auth.users. The real table’s definition is this long chunk of SQL, which you can see in Supabase’s editor under schema auth > users > Definition:

create table auth.users (
  instance_id uuid null,
  id uuid not null,
  aud character varying(255) null,
  role character varying(255) null,
  email character varying(255) null,
  encrypted_password character varying(255) null,
  email_confirmed_at timestamp with time zone null,
  invited_at timestamp with time zone null,
  confirmation_token character varying(255) null,
  confirmation_sent_at timestamp with time zone null,
  recovery_token character varying(255) null,
  recovery_sent_at timestamp with time zone null,
  email_change_token_new character varying(255) null,
  email_change character varying(255) null,
  email_change_sent_at timestamp with time zone null,
  last_sign_in_at timestamp with time zone null,
  raw_app_meta_data jsonb null,
  raw_user_meta_data jsonb null,
  is_super_admin boolean null,
  created_at timestamp with time zone null,
  updated_at timestamp with time zone null,
  phone text null default null::character varying,
  phone_confirmed_at timestamp with time zone null,
  phone_change text null default ''::character varying,
  phone_change_token character varying(255) null default ''::character varying,
  phone_change_sent_at timestamp with time zone null,
  confirmed_at timestamp with time zone generated always as (least(email_confirmed_at, phone_confirmed_at)) stored null,
  email_change_token_current character varying(255) null default ''::character varying,
  email_change_confirm_status smallint null default 0,
  banned_until timestamp with time zone null,
  reauthentication_token character varying(255) null default ''::character varying,
  reauthentication_sent_at timestamp with time zone null,
  is_sso_user boolean not null default false,
  deleted_at timestamp with time zone null,
  is_anonymous boolean not null default false,
  constraint users_pkey primary key (id),
  constraint unique_email unique (email),
  constraint users_phone_key unique (phone),
  constraint users_email_change_confirm_status_check check (
    (email_change_confirm_status >= 0)
    and (email_change_confirm_status <= 2)
  )
) tablespace pg_default;

create unique index if not exists confirmation_token_idx on auth.users using btree (confirmation_token) tablespace pg_default
  where ((confirmation_token)::text !~ '^[0-9 ]*$'::text);

create unique index if not exists email_change_token_current_idx on auth.users using btree (email_change_token_current) tablespace pg_default
  where ((email_change_token_current)::text !~ '^[0-9 ]*$'::text);

create unique index if not exists email_change_token_new_idx on auth.users using btree (email_change_token_new) tablespace pg_default
  where ((email_change_token_new)::text !~ '^[0-9 ]*$'::text);

create unique index if not exists reauthentication_token_idx on auth.users using btree (reauthentication_token) tablespace pg_default
  where ((reauthentication_token)::text !~ '^[0-9 ]*$'::text);

create unique index if not exists recovery_token_idx on auth.users using btree (recovery_token) tablespace pg_default
  where ((recovery_token)::text !~ '^[0-9 ]*$'::text);

create unique index if not exists users_email_partial_key on auth.users using btree (email) tablespace pg_default
  where (is_sso_user = false);

create index if not exists users_instance_id_email_idx on auth.users using btree (instance_id, lower((email)::text)) tablespace pg_default;

create index if not exists users_instance_id_idx on auth.users using btree (instance_id) tablespace pg_default;

create index if not exists users_is_anonymous_idx on auth.users using btree (is_anonymous) tablespace pg_default;

I copied that setup for my temp table, uploaded the CSV into it, and then used this script to move users into the real auth.users table a thousand at a time:

DO $$
DECLARE
  batch_size INT := 1000; -- Adjust batch size as needed
  records_inserted INT;
BEGIN
  LOOP
    INSERT INTO auth.users (
      id,
      email,
      encrypted_password,
      instance_id,
      aud,
      role,
      email_confirmed_at,
      invited_at,
      confirmation_token,
      confirmation_sent_at,
      recovery_token,
      recovery_sent_at,
      email_change_token_new,
      email_change,
      email_change_sent_at,
      last_sign_in_at,
      raw_app_meta_data,
      raw_user_meta_data,
      is_super_admin,
      created_at,
      updated_at,
      phone,
      phone_confirmed_at,
      phone_change,
      phone_change_token,
      phone_change_sent_at,
      email_change_token_current,
      email_change_confirm_status,
      banned_until,
      reauthentication_token,
      reauthentication_sent_at,
      is_sso_user,
      deleted_at,
      is_anonymous
    )
    SELECT
      t.id,
      t.email,
      t.encrypted_password,
      COALESCE(t.instance_id, '00000000-0000-0000-0000-000000000000'),
      COALESCE(t.aud, 'authenticated'),
      COALESCE(NULLIF(t.role, ''), 'authenticated'),
      COALESCE(t.email_confirmed_at, NOW()),
      t.invited_at,
      COALESCE(t.confirmation_token, ''),
      t.confirmation_sent_at,
      COALESCE(t.recovery_token, ''),
      t.recovery_sent_at,
      COALESCE(t.email_change_token_new, ''),
      COALESCE(t.email_change, ''),
      t.email_change_sent_at,
      COALESCE(t.last_sign_in_at, NOW()),
      t.raw_app_meta_data, -- Use value from the staging table
      t.raw_user_meta_data, -- Use value from the staging table
      t.is_super_admin,
      COALESCE(t.created_at, NOW()),
      COALESCE(t.updated_at, NOW()),
      t.phone,
      t.phone_confirmed_at,
      COALESCE(t.phone_change, ''),
      COALESCE(t.phone_change_token, ''),
      t.phone_change_sent_at,
      COALESCE(t.email_change_token_current, ''),
      COALESCE(t.email_change_confirm_status, 0),
      t.banned_until,
      COALESCE(t.reauthentication_token, ''),
      t.reauthentication_sent_at,
      COALESCE(t.is_sso_user, false),
      t.deleted_at,
      COALESCE(t.is_anonymous, false)
    FROM public.test_users t
    -- Only pick rows that haven't been migrated yet, so each pass makes progress
    WHERE NOT EXISTS (
      SELECT 1 FROM auth.users u WHERE u.id = t.id
    )
    LIMIT batch_size
    ON CONFLICT (email) DO NOTHING; -- Ignore duplicates

    GET DIAGNOSTICS records_inserted = ROW_COUNT;

    -- Exit the loop once a pass inserts nothing
    IF records_inserted = 0 THEN
      EXIT;
    END IF;
  END LOOP;
END $$;

This way, I could see every step — exporting, changing, loading. It took longer than some fancy tool might’ve, but I liked knowing exactly what was happening. I’d check the temp table after loading the CSV, spot-check a few rows, and then watch the batch script run to make sure it didn’t choke.
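Concretely, the sanity checks were along these lines (a sketch; just counts plus a peek at a few rows):

-- Row counts should line up between the staging table and auth.users
select count(*) from public.test_users;
select count(*) from auth.users;

-- Spot-check a handful of migrated rows
select id, email, created_at
from auth.users
order by created_at desc
limit 5;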

No Room for Custom Fields

Supabase’s auth.users table doesn’t let you add your own fields, which threw me off. In MongoDB, I’d just stick a username field on the document, no big deal. But Supabase says no; you’re stuck with the columns it gives you. I thought about shoving extra stuff into raw_user_meta_data, which is just a JSON blob, but that gets messy to query later. Instead, I made a new profiles table that references auth.users by user ID. It holds things like username and locale, keeping it all tidy.

Also see this discussion, which might be helpful: https://github.com/orgs/supabase/discussions/3491

Setting this up meant more work — I had to pull those extra fields from MongoDB, match them to the right users, and load them into the new table. I wrote another script to grab that data, turned it into a CSV, and uploaded it to Supabase after the auth stuff was done. It’s a better setup now because I can add more fields later without breaking anything, but it made the migration take longer since I had two tables to deal with instead of one.
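Here’s a sketch of what that table can look like. Since the credits function later in this article updates public.user_profiles, I’ll use that name; the columns beyond username and locale, the defaults, and the RLS policy are my assumptions, not the exact production schema:

-- Profile data lives outside auth.users, linked by user ID
create table public.user_profiles (
  user_id uuid primary key references auth.users (id) on delete cascade,
  username text unique,
  locale text,
  assistant_credits int not null default 0, -- assumed; used by a function later on
  created_at timestamp with time zone not null default now()
);

-- Row Level Security so each user can only touch their own row
alter table public.user_profiles enable row level security;

create policy "Users manage own profile"
  on public.user_profiles
  for all
  using ((select auth.uid()) = user_id);

The nice part of this split is that adding a column later is an ordinary migration on a table I own, with no risk of fighting Supabase’s managed auth schema.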

Migrating Google/Apple Sign-Ins

Google and Apple logins were a pain too. In MongoDB, I had googleId and appleId fields, and some users had both tied to one email. Supabase handles these with special fields in raw_user_meta_data like iss, sub, and provider_id.

Here’s how I set it up:

const uid = uuidv4();
const providers = [];

// Build raw user metadata
const rawUserMetaData = {
  sub: uid,
  email: user.email,
  email_verified: true,
  phone_verified: false,
};

// Check for Google ID
if (user.googleId) {
  providers.push('google');
  rawUserMetaData.iss = 'https://accounts.google.com';
  rawUserMetaData.sub = user.googleId;
  rawUserMetaData.provider_id = user.googleId;
}

// Check for Apple ID; only override iss and sub if Google wasn't already set
if (user.appleId) {
  providers.push('apple');
  if (!user.googleId) {
    rawUserMetaData.iss = 'https://appleid.apple.com';
    rawUserMetaData.sub = user.appleId;
    rawUserMetaData.provider_id = user.appleId;
  }
}

// If no external providers, use email
if (providers.length === 0) {
  providers.push('email');
}

In the app, I updated the sign-in flows to use Supabase’s APIs. For Apple:

import * as AppleAuthentication from 'expo-apple-authentication';
import { supabase } from 'your-supabase-client';

const credential = await AppleAuthentication.signInAsync({
  requestedScopes: [
    AppleAuthentication.AppleAuthenticationScope.FULL_NAME,
    AppleAuthentication.AppleAuthenticationScope.EMAIL,
  ],
});

if (credential.identityToken) {
  const { data, error } = await supabase.auth.signInWithIdToken({
    provider: 'apple',
    token: credential.identityToken,
  });
}

And for Google:

import { GoogleSignin } from '@react-native-google-signin/google-signin';
import { supabase } from 'your-supabase-client';

const googleUser = await GoogleSignin.signIn();

if (googleUser.idToken) {
  const { data, error } = await supabase.auth.signInWithIdToken({
    provider: 'google',
    token: googleUser.idToken,
  });
}

It worked, but users with both Google and Apple were tricky; I had to pick one as the primary identity and record the other as a secondary provider. I tested it a bunch in the app to make sure nobody got locked out.
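The core of that testing was just signing in as migrated accounts through the normal flows (a sketch; the credentials are placeholders):

import { supabase } from 'your-supabase-client';

// A migrated email/password user should get in with their old credentials
const { data, error } = await supabase.auth.signInWithPassword({
  email: 'migrated-user@example.com',
  password: 'their-old-password',
});
console.log(error ? `Login failed: ${error.message}` : `Logged in as ${data.user.id}`);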

Caveat: Potential CSV Export Issues in Supabase

As of today (29/03/2025), exporting table data as CSV from the Supabase dashboard can be unreliable, so keep that in mind when comparing data between Supabase and your original source.

Generating TypeScript Types

Supabase has this awesome trick where it makes TypeScript types for you. You run one command, and it spits out files that tell your code exactly what your database looks like. For my React Native and Expo app, that was huge — I didn’t have to guess what data I’d get or write my own types. It cut down on bugs big time because TypeScript yelled at me if I messed up. Before, I’d spend hours fixing dumb mistakes; now, it’s way faster.

I ran the command after setting up my tables, got the types file, and plugged it into my app. Suddenly, stuff like user.email or profile.username just worked — no more wondering if I spelled it right. It’s a small thing, but it made coding the app feel less like a guessing game and more like a win.
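For reference, the command and the typed client look roughly like this (the project ref, paths, and keys are placeholders):

// Generate types with the Supabase CLI:
//   npx supabase gen types typescript --project-id your-project-ref > types/supabase.ts

import { createClient } from '@supabase/supabase-js';
import type { Database } from './types/supabase';

const supabase = createClient<Database>(
  'https://your-project.supabase.co',
  'your-anon-key',
);

// Column names are now checked at compile time
const { data: profile } = await supabase
  .from('user_profiles')
  .select('username, locale')
  .single();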

Making Database Functions Do the Work

Supabase lets you write SQL functions that run inside the database, which is super handy. Instead of a bunch of app-side code shuttling data back and forth, I let the database handle it.

Here’s one I made to add credits to a user’s profile:


-- Signature reconstructed from the body below, which references the
-- parameters user_id and credits_to_add by name
create or replace function increase_assistant_credits(user_id uuid, credits_to_add int)
returns int
language plpgsql
security definer -- assumption: lets the function update the row regardless of RLS
as $$
declare
  current_credits int;
  new_credits int;
begin
  -- Get current credits from the user_profiles table
  select assistant_credits
    into current_credits
    from public.user_profiles
    where public.user_profiles.user_id = increase_assistant_credits.user_id;

  if current_credits is null then
    current_credits := 0;
  end if;

  new_credits := current_credits + credits_to_add;

  -- Update assistant_credits in the user_profiles table
  update public.user_profiles
    set assistant_credits = new_credits
    where public.user_profiles.user_id = increase_assistant_credits.user_id;

  return new_credits;
end;
$$;

This grabs the user’s credits, adds some, and saves it back — all in one go. Without it, I’d have to fetch the credits, add them in the app, and send them back, which is more code and more chances to mess up. I wrote a few of these for other tasks too, and it made my app feel snappier since the database did the heavy lifting.
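Calling one of these from the app is a single RPC call through supabase-js (a sketch; currentUserId stands in for the signed-in user’s ID):

import { supabase } from 'your-supabase-client';

// Parameter names must match the function definition above
const { data: newCredits, error } = await supabase.rpc('increase_assistant_credits', {
  user_id: currentUserId, // placeholder for the signed-in user's ID
  credits_to_add: 5,
});
if (error) console.error(error);
else console.log(`New balance: ${newCredits}`);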

The Backend Caveat

Here’s a twist: Supabase didn’t get rid of my backend completely. Some admin operations need the service_role key, and you can’t put that key in the app; it bypasses every security rule, so leaking it would be a disaster. So I kept a small NestJS server to handle those privileged jobs. It’s not as big as before, but it’s there to keep things locked down. Supabase does most of the work, so it’s still way easier than my old setup.
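For illustration, here’s roughly what that slimmed-down server keeps behind closed doors (a sketch; the delete-account job is a made-up example of an admin task):

import { createClient } from '@supabase/supabase-js';

// The service_role key lives in server-side env vars only, never in the app
const adminClient = createClient(
  process.env.SUPABASE_URL,
  process.env.SUPABASE_SERVICE_ROLE_KEY,
);

// Example admin-only job: deleting a user account on request
async function deleteUserAccount(userId) {
  const { error } = await adminClient.auth.admin.deleteUser(userId);
  if (error) throw error;
}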

Conclusion: Worth the Struggle

Migrating 182,000 users to Supabase wasn’t just a technical challenge — it was a mental one too. Between procrastination, late nights, and plenty of second-guessing, there were moments I genuinely thought I might break everything. It was a slow grind, full of stress and hesitation. But leaving behind the brittle NestJS + MongoDB setup and moving to something cleaner and more maintainable was absolutely worth it.

Supabase gave me the simplicity, flexibility, and community support I needed as a solo dev juggling a full-time job. If you’re thinking about migrating your own app’s backend, just know it’s totally doable. It won’t be quick or painless — but with a bit of planning (and probably a few long nights), you’ll come out on the other side with a setup you can actually trust.

