How the Tea App Got Hacked: Firebase Pitfalls and Lessons for Engineers



This content originally appeared on DEV Community and was authored by Uzair Saleem

Tea, a dating platform for women, recently became a symbol of security failure when misconfigured Firebase backends left thousands of private records exposed.

The recent Tea app breach wasn’t some esoteric exploit; it was a textbook case of bad architecture and misused Firebase services. In fact, security analysts noted this was “not a sophisticated hack; it was an unlocked front door.” It had nothing to do with “vibe coding” or AI; it came down to “horrible design decisions” like treating Firestore/Storage as an open-ended backend and skipping basic security rules.

I’ll walk through exactly what went wrong, from the open Firebase Storage bucket to the broken access controls in the chat API, and how client-side-only checks magnified the problem. Along the way, I’ll share code and config examples of insecure vs. secure Firebase rules, explain why you almost always need a proper server/API layer, and offer concrete takeaways so you never repeat Tea’s mistakes.

The dev team at Tea treated Firebase like an instant backend platform but never implemented real security controls.

Firebase-as-DB without safeguards: By relying on Firestore/RealtimeDB and Storage directly from the client, Tea effectively let the frontend talk straight to the data. Any logic in the app (like “only show your own chats”) meant nothing if the database rules weren’t locked down. Security pundits note that connecting users directly to the database is risky; this is why frameworks like Next.js push the best practice of a Data Access Layer that checks the user’s authentication state before any query (see the sketch below). Tea simply left those controls wide open.
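
To make that pattern concrete, here is a minimal Data Access Layer sketch in TypeScript. The verifySession helper, db client, and chats table are hypothetical stand-ins for illustration, not Tea’s actual code:

// dal/chats.ts - a minimal Data Access Layer sketch (hypothetical names).
// All reads go through this module, which verifies the session first,
// so UI code can never query the database directly.
import "server-only";

import { verifySession } from "./auth"; // assumed helper: returns { userId } or null
import { db } from "./db";              // assumed database client (Prisma-style)

export async function getMyChats() {
  const session = await verifySession();
  if (!session) {
    throw new Error("Unauthenticated");
  }
  // The query is scoped to the caller's own ID on the server,
  // no matter what the client asked for.
  return db.chats.findMany({ where: { ownerId: session.userId } });
}

The point is architectural: the client asks for “my chats,” and a trusted layer decides what that means.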

Legacy data left unprotected: The company admitted that a “legacy data” store (pre-Feb 2024) was never migrated or secured. That old Firebase bucket contained ID photos and comments. It should have been locked or deleted, but instead became a gaping hole. Outdated systems were left accessible with no oversight.

Security should’ve been part of “Done”: The Tea app was a runaway success (millions of users) built by a small team. In the rush, they seemingly skipped security audits. As one commentator notes, a single script checking Firebase permissions would have prevented the disaster.

The lesson I took away: design your system with security by default, not as an afterthought.

Together, these points highlight how misusing Firebase/Firestore as an open backend – especially with default/test rules – let attackers walk right in.

Let’s break down the specific flaws that were exploited:

Public Firebase Storage Bucket – No Authentication: Tea stored user IDs, selfies, and images in a Google Firebase Storage bucket that required no auth tokens. In other words, anyone with the URL could list or download files.

Broken Access Controls (Exposed All Chats): Separate from the images, Tea’s chat data ended up in another Firebase database (Firestore or RealtimeDB). Here too, the rules were botched. A researcher found that any authenticated user (with Tea’s API key built into the app) could query all chat messages, not just their own. In other words, the app had an Insecure Direct Object Reference: you could just ask for messages by ID or even listen to a whole collection. As detailed in a post-mortem, attackers discovered “an open Firestore or real-time database instance with 1.1 million private chat messages”, all readable with a standard API key.

This is a classic broken access control: there was no server-side check enforcing “you can only read your own messages”. Enforcement was left entirely to the client, which an attacker can simply bypass. With the leaked database keys or API key, anyone could pull everyone’s DMs.
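
To see how trivial that is, here is a hedged sketch of what such an over-broad query looks like with the standard Firebase web SDK. The chats collection name and config values are assumptions for illustration, not Tea’s actual schema:

// A sketch of the attack surface, assuming a collection named "chats".
// With open rules, anyone holding the app's public config can do this.
import { initializeApp } from "firebase/app";
import { getFirestore, collection, getDocs } from "firebase/firestore";

const app = initializeApp({
  apiKey: "AIza...",            // shipped inside every app bundle
  projectId: "target-project",  // also public
});
const db = getFirestore(app);

// No filter, no ownership check: if the rules allow it, this returns
// every chat message in the database, not just the caller's own.
const snapshot = await getDocs(collection(db, "chats"));
console.log(`Fetched ${snapshot.size} messages`);

Note that the “secret” here is not really the API key at all; the only thing standing between a script like this and the data is the security rules.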

Secrets in the Client – Keys and endpoints exposed: Compounding the above, Tea had placed critical information in the mobile app code itself. The Firebase project keys were visible in the app’s code, and the admin panel URL was public with no rate limiting.

Tea’s clients effectively had the skeleton key to their own database built in, so once the bucket was public and the rules wide open, the attackers had unrestricted access.

Client-side Security Checks – No server enforcement: Because the design leaned on Firebase, any access control was done (if at all) on the front-end. For example, the app might show or hide UI buttons, or check chat IDs in JavaScript. But with direct DB access, none of that mattered.

In Tea’s case, the app may have been “invite-only” or checked a user’s gender on the client, but the backend had no rule enforcing the same. This is a huge red flag: if your security depends on code running in the user’s browser or phone, it’s usually broken. All critical checks belong on a trusted backend.
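
One hedged way to move such a check server-side is a callable Cloud Function that validates identity before touching data. The getChat name, ownerId field, and chats collection are illustrative assumptions:

// functions/src/getChat.ts - sketch of a server-enforced read.
// The client never touches Firestore directly; it calls this function,
// and the ownership check runs on trusted infrastructure.
import { onCall, HttpsError } from "firebase-functions/v2/https";
import { initializeApp } from "firebase-admin/app";
import { getFirestore } from "firebase-admin/firestore";

initializeApp();
const db = getFirestore();

export const getChat = onCall(async (request) => {
  if (!request.auth) {
    throw new HttpsError("unauthenticated", "Sign in first.");
  }
  const chatId = request.data.chatId as string;
  const doc = await db.collection("chats").doc(chatId).get();

  // Server-side ownership check: the client cannot skip or patch this.
  if (!doc.exists || doc.get("ownerId") !== request.auth.uid) {
    throw new HttpsError("permission-denied", "Not your chat.");
  }
  return doc.data();
});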

In summary, the root technical failures were leaving Firebase services unlocked and trusting the client. The result: thousands of sensitive images and messages spilled out without a fight.

Insecure vs Secure Firebase Rules
A concrete way to understand Tea’s mistake is to compare insecure and secure Firebase rules. For example, an open Storage bucket rule in Firebase might look like this:

service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      // This allows anyone to read or write ANY file
      allow read, write;
    }
  }
}

This is essentially “test mode”: it grants worldwide read/write access with no checks, the equivalent of sharing a Google Drive folder with “Anyone with the link”. Tea’s Storage bucket almost certainly had an override like this, or was left in default test mode.

In contrast, a secure rule locks things down by requiring authentication and matching data paths. For example, if each user’s uploads were under user_uploads/{userId}/…, a better rule would be:

service firebase.storage {
  match /b/{bucket}/o {
    // Files under user_uploads must belong to the authenticated user.
    match /user_uploads/{userId}/{fileName} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }
  }
}

Here, request.auth.uid is the logged-in user’s ID. We only allow access if it matches the {userId} folder. No authentication = no access. In this model, Alice can only read/write files in her own folder. Likewise, for Firestore/RealtimeDB data, you would avoid allow read, write: if true (which ignores auth) and instead write rules like:

{
  "rules": {
    "users": {
      "$uid": {
        ".read": "$uid === auth.uid",  // each user can read only their own node
        ".write": "$uid === auth.uid"
      }
    }
  }
}

Such rules force the database itself to enforce user identity.

A secure rule always checks request.auth != null and other conditions before allowing access. Always review your Firebase security rules — they are your last line of defense!
Why a Proper API Layer Matters
The Tea hack also highlights why relying on Firebase alone can be dangerous. In a classic server-based app, all data access goes through your own API, which enforces auth, roles, and quotas and logs every action. With Firebase, you give up that layer and expect the database to gatekeep. That means your only security controls are the Firebase rules (and Google Cloud IAM), and Tea’s were off. This is a big architectural gamble: modern tools like Firebase “encourage just that” (direct DB connections), meaning “apps implement detailed access control rules, but they become meaningless once the user connects directly to the database”.

In Tea’s case, there was no intermediate server to log access, validate complex policies, or throttle abuse. If Tea had built a standard backend API (e.g., a Node/Express or Go service), they could have implemented server-side access controls for chats and files, used middleware for authentication, and kept audit logs of every query. Even if someone leaked a database key, the backend could have rate-limited requests or blocked suspicious patterns. Instead, the Firebase API key in the client was all the attacker needed, and no central logic was there to intervene. In short: don’t treat Firebase like a drop-in replacement for your server. Use Cloud Firestore or Realtime DB as a supplement, not a substitute. For any serious app, still run a trusted server layer or cloud functions that validate who’s doing what. That way, even if a user manipulates the client, the server enforces the rules.
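
As a sketch of what that trusted layer can look like, here is a minimal Express API in TypeScript that verifies the caller’s Firebase ID token before serving a chat. The route shape, ownerId field, and chats collection are assumptions for illustration:

// server.ts - minimal API layer sketch in front of Firestore.
import express from "express";
import { initializeApp } from "firebase-admin/app";
import { getAuth } from "firebase-admin/auth";
import { getFirestore } from "firebase-admin/firestore";

initializeApp();
const app = express();

// Middleware: reject any request without a valid Firebase ID token.
app.use(async (req, res, next) => {
  const token = req.headers.authorization?.replace("Bearer ", "");
  if (!token) return res.status(401).send("Missing token");
  try {
    res.locals.uid = (await getAuth().verifyIdToken(token)).uid;
    next();
  } catch {
    res.status(401).send("Invalid token");
  }
});

// The server decides what the caller may see; the client only asks.
app.get("/chats/:id", async (req, res) => {
  const doc = await getFirestore().collection("chats").doc(req.params.id).get();
  if (!doc.exists || doc.get("ownerId") !== res.locals.uid) {
    return res.status(403).send("Forbidden");
  }
  res.json(doc.data());
});

app.listen(8080);

A layer like this is also where rate limiting, audit logging, and anomaly detection naturally live, none of which exist when clients talk to Firestore directly.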

Lessons I Learned:

What practical steps should you take after reading about Tea?
Lock Down Your Database – Never leave a Firebase bucket or collection open. In testing, rules may allow public access, but change them before launch. Always require request.auth checks, and audit your rules early and often. Many cloud providers and CI tools can automatically scan for “allow if true” patterns or public S3/Firebase buckets and warn you; even a naive script can catch the worst offenders, as sketched below.
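
Here is a hedged sketch of such a check in TypeScript; it is a crude pattern match for CI, not a substitute for the Firebase emulator’s rules tests:

// scan-rules.ts - naive CI check for obviously open Firebase rules.
// Run with: npx ts-node scan-rules.ts firestore.rules storage.rules
import { readFileSync } from "node:fs";

// Patterns that almost always mean "world-readable/writable".
const dangerous = [
  /allow\s+(read|write|read,\s*write)\s*;/, // no condition at all
  /allow\s+[\w,\s]+:\s*if\s+true/,          // explicit "if true"
  /"\.(read|write)"\s*:\s*"?true"?/,        // RTDB JSON rules set to true
];

let failed = false;
for (const file of process.argv.slice(2)) {
  const text = readFileSync(file, "utf8");
  for (const pattern of dangerous) {
    if (pattern.test(text)) {
      console.error(`ERROR: ${file} matches open-access pattern ${pattern}`);
      failed = true;
    }
  }
}
process.exit(failed ? 1 : 0);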

Keep Secrets Secret – Don’t embed API keys or secrets in your mobile/JS code, and restrict each key’s privileges. Tea’s keys were in the client, which let hackers use them freely. Put sensitive logic on the server and use secure channels for any keys.

Harden Admin Interfaces – Any web dashboard or admin portal should sit behind strong auth. Add multi-factor authentication (MFA) and rate-limit login attempts; Tea’s admin panel had no rate limiting, making brute force easy. MFA and IP allowlisting for admin logins would block most attacks.
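
A hedged sketch of basic login throttling with the express-rate-limit package (the /admin/login path and limits are illustrative):

// Rate-limit admin login attempts - sketch using express-rate-limit.
import express from "express";
import rateLimit from "express-rate-limit";

const app = express();

const loginLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15-minute window
  limit: 5,                 // at most 5 attempts per IP per window
  standardHeaders: true,    // send RateLimit-* headers
  message: "Too many login attempts, try again later.",
});

app.post("/admin/login", loginLimiter, (req, res) => {
  // ...real credential + MFA verification would go here...
  res.sendStatus(200);
});

app.listen(8080);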
Delete What You Don’t Need – If you ask users for photos or IDs, have a clear retention policy. Don’t hoard old verification pictures in a “legacy” store just because you think you might need them; Tea admitted “selfies were not deleted as expected,” violating its own policy. Shred data as soon as it has served its purpose, especially sensitive PII.
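
One hedged way to automate that retention is a scheduled Cloud Function; the verification/ prefix and 30-day window are illustrative assumptions:

// functions/src/retention.ts - sketch of automated PII cleanup.
import { onSchedule } from "firebase-functions/v2/scheduler";
import { initializeApp } from "firebase-admin/app";
import { getStorage } from "firebase-admin/storage";

initializeApp();
const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

// Runs daily and deletes verification photos older than 30 days.
export const purgeOldSelfies = onSchedule("every 24 hours", async () => {
  const [files] = await getStorage().bucket().getFiles({ prefix: "verification/" });
  const cutoff = Date.now() - THIRTY_DAYS_MS;
  for (const file of files) {
    if (!file.metadata.timeCreated) continue; // skip if no timestamp
    if (Date.parse(file.metadata.timeCreated) < cutoff) {
      await file.delete();
      console.log(`Deleted expired file: ${file.name}`);
    }
  }
});

(In practice, a Cloud Storage lifecycle rule with an age condition achieves the same result with zero code.)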

Test from the Outside – Think like an attacker. Can you access your resources without logging in? Try connecting a generic Firestore client with your Firebase project ID to see if it bypasses auth.
Use automated tools (DAST scanners, Firebase rule simulators) as part of your pipeline.
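
A hedged example of such an outside-in probe against Firestore’s public REST endpoint; replace your-project-id and chats with your own values:

// probe.ts - check whether a Firestore collection is readable with NO auth.
// A 200 response listing documents means your rules are wide open.
const projectId = "your-project-id"; // your Firebase project ID
const collectionId = "chats";        // a collection you expect to be private

const url =
  `https://firestore.googleapis.com/v1/projects/${projectId}` +
  `/databases/(default)/documents/${collectionId}`;

const res = await fetch(url); // deliberately no Authorization header
if (res.ok) {
  console.error("OPEN! Unauthenticated read succeeded:", await res.json());
} else {
  console.log(`Locked down as expected (HTTP ${res.status}).`);
}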

Implement Monitoring & Alerts – In a DevOps mindset, monitor your cloud config. Tools like Google Cloud Asset Inventory or third-party config scanners can alert you if a bucket suddenly becomes public. Likewise, watch for unusual API usage patterns. If Tea had config monitoring, the open bucket might have been flagged immediately.

Cultural and Organizational Takeaways

Beyond code, the Tea breach underscores a cultural pitfall: speed without guardrails. As one security expert notes, the Tea hack exemplifies how “speed, intuition, and improvisation” (“vibe coding”) lead to “fragile systems” and “catastrophic security failures”.

This is crucial: when a project lacks code reviews or security specialists (common in small startups), easy mistakes slip through. Tea apparently left a legacy system running without anyone auditing it – a governance failure.

Finally, think like defenders: Had Tea’s DevOps pipeline run a Firebase rules scan or someone launched a quick penetration test, the open storage would’ve been caught. Automated code analysis (SAST) might not flag config, but cloud config scanners, IaC checks, and even simple automated scripts (as analysts pointed out) would have found “allow read, write” rules before release.

Conclusion
The Tea app hack was brutal and deeply avoidable. From open Firebase buckets to missing access checks, the vulnerabilities were the kind any diligent engineer could prevent. The moral: never deploy user data storage on autopilot. Always assume that if something is possible, attackers will do it, and write your rules accordingly. Use proper server-side auth where needed, lock down cloud storage, automate your security checks, and maintain a skeptic’s mindset about any “easy” cloud backend shortcut. By applying these lessons, developers can build consumer apps that truly respect privacy and security, and avoid repeating Tea’s very public lesson.

Treat backend-as-a-service with caution – always write tight security rules. Keep secrets off the client. Use proper APIs to enforce access. Automate scans and reviews in your pipeline. And never let “we’ll fix it later” creep into production code. Users trust you with their data; earning that trust requires diligence at every layer.
Keep coding.

