Why Teams Migrate from Firebase to Supabase
Firebase's strength is its speed to first deployment. Firestore is flexible, the SDK is simple, and Google's infrastructure is reliable. Teams love Firebase for prototypes and early MVPs because you can ship a working app with auth, a database, and file storage in a weekend. But the same flexibility that makes Firebase fast to start with creates increasingly expensive problems over time.
The most common triggers for migration: complex relational queries that Firestore cannot express without data duplication; cost spikes as read/write operations scale (Firestore charges per document read, not per query); the impossibility of running JOIN-style queries across collections; and the lock-in anxiety of having your entire data model in a proprietary format that cannot be accessed with standard tools.
Supabase addresses every one of these. It is PostgreSQL — the most widely used open-source relational database in the world. Every SQL query tool, ORM, analytics platform, and BI tool works with it natively. Row-level security is enforced at the database level, not in application code. And the pricing model (based on storage and compute, not per-operation) scales far more predictably at high volumes. See the full comparison at Supabase vs Firebase.
Understanding the Architecture Difference
The most important thing to internalize before starting this migration is that you are not just switching databases — you are switching data paradigms. Firestore is a document store. Supabase is relational. A Firestore collection of user profiles with embedded purchase history is fundamentally different from the Supabase equivalent: a profiles table and a separate purchases table linked by a foreign key.
This means the migration is not a 1:1 data copy. It is a data model redesign. Every Firestore collection needs to be analyzed: what data belongs in its own table? What was embedded to avoid Firestore's join limitations but is actually a separate entity? This redesign work is where most of the migration time goes — and it is where the long-term benefit comes from.
A useful heuristic: if a Firestore document contains an array of objects (e.g., a user document with an embedded array of orders), that array almost certainly belongs in a separate Postgres table. If a document has simple scalar fields with no nested objects, it maps cleanly to a single Postgres row.
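That split can be sketched as a small transform. This is a minimal illustration, assuming a hypothetical user document with an embedded `orders` array; the field and table names are not from any real schema.

```javascript
// Split a Firestore-style user document into relational rows:
// scalar fields become one profiles row, the embedded orders
// array becomes one row per order in a separate table.
function splitUserDocument(docId, doc) {
  const { orders = [], ...scalarFields } = doc;
  const profileRow = { id: docId, ...scalarFields };
  const orderRows = orders.map((order) => ({
    user_id: docId, // foreign key back to the profile row
    ...order,
  }));
  return { profileRow, orderRows };
}
```

Running every exported document through a transform like this, before any data touches Postgres, makes the relational shape explicit and easy to review.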
Step 1 — Export Your Firestore Data
Firebase's managed export runs through the Google Cloud console or the gcloud CLI: gcloud firestore export gs://your-bucket/export. Note that the managed export produces LevelDB-format files, not JSON; they are designed for re-import into Firestore (or loading into BigQuery) and need further processing before they can be inspected or converted to CSV for import into Postgres.
For smaller datasets (under 100,000 documents), the simplest approach is a Node.js export script using the Firebase Admin SDK that iterates every collection, converts documents to JSON objects, and writes them to CSV files, one file per collection. For larger datasets, use the managed GCS export, load it into BigQuery (which accepts Firestore exports of specific collections as a source format), and export CSV from BigQuery.
During export, take note of your collection structure, document ID formats (auto-generated vs. user-defined), and any sub-collections. Sub-collections in Firestore are often the clearest indicator of a relationship that should become a foreign key in Postgres. A users/{userId}/orders sub-collection becomes an orders table with a user_id UUID foreign key referencing the users table.
Step 2 — Design Your Supabase Schema
With your exported data in hand, design the Postgres schema before importing anything. Create a table for each Firestore collection, define column types precisely (Firestore's loose typing means you need to enforce types in Postgres), and set up foreign key relationships between tables that were previously embedded or in sub-collections.
Key Postgres features that Firestore lacks and you should use: UUID primary keys (Supabase generates these by default), timestamptz for all datetime fields, JSONB columns for genuinely schemaless data that does not fit a relational model, and enum types for status fields (e.g., CREATE TYPE order_status AS ENUM ('pending', 'paid', 'shipped', 'cancelled')).
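Putting those pieces together, a schema for the users/{userId}/orders example might look like the following. Every name here is illustrative, not a prescribed layout.

```sql
-- Profiles table replacing the Firestore users collection.
create table profiles (
  id uuid primary key default gen_random_uuid(),
  email text not null unique,
  created_at timestamptz not null default now()
);

-- Enum type enforcing what Firestore stored as a free-form string.
create type order_status as enum ('pending', 'paid', 'shipped', 'cancelled');

-- Orders table replacing the users/{userId}/orders sub-collection.
create table orders (
  id uuid primary key default gen_random_uuid(),
  user_id uuid not null references profiles (id) on delete cascade,
  status order_status not null default 'pending',
  total_cents integer not null,
  metadata jsonb,  -- genuinely schemaless leftovers, if any
  created_at timestamptz not null default now()
);
```

The foreign key with on delete cascade replaces the cleanup logic Firestore apps typically implement in Cloud Functions.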
Set up Row Level Security (RLS) policies from the start, before importing data. Define what each user role can SELECT, INSERT, UPDATE, and DELETE. The most common pattern: CREATE POLICY "users can read own data" ON profiles FOR SELECT USING (auth.uid() = id). Supabase's RLS editor makes this visual if you prefer not to write SQL manually.
Step 3 — Migrate Authentication
Authentication is the most technically complex part of a Firebase to Supabase migration. Firebase can export user records, including password hashes, via firebase auth:export, but the hashes use Firebase's modified scrypt algorithm, and the hash parameters must be copied separately from the console's password-hash settings. Unless your target system can verify that algorithm, you cannot silently migrate password users with no action on their part. You have two options.
Option A — Parallel systems with graceful migration. Keep Firebase Auth running for existing users. When an existing user logs in to your new Supabase-powered app, authenticate them against Firebase, immediately create a Supabase Auth account for them (using Supabase's admin API to create users without email confirmation), link the Firebase UID to the new Supabase UUID in a mapping table, and from that point forward use Supabase Auth. After 60–90 days, the vast majority of active users will have been migrated. Disable Firebase Auth for new sign-ups from day one of the migration.
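A server-side sketch of the Option A login path is below, assuming supabase-js v2 and firebase-admin. The user_id_map table name, the function names, and the flow of passing the just-verified password to the server are all assumptions for illustration; adapt them to your own auth handler.

```javascript
// Build the mapping-table row linking the two identities.
function buildMappingRow(firebaseUid, supabaseId) {
  return {
    firebase_uid: firebaseUid,
    supabase_id: supabaseId,
    migrated_at: new Date().toISOString(),
  };
}

// Called after the client has successfully authenticated against Firebase.
async function migrateOnLogin(email, password) {
  const admin = require('firebase-admin');
  const { createClient } = require('@supabase/supabase-js');
  const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_SERVICE_ROLE_KEY);

  // 1. Look up the Firebase user we are migrating.
  const fbUser = await admin.auth().getUserByEmail(email);

  // 2. Create the Supabase account without sending a confirmation email.
  const { data, error } = await supabase.auth.admin.createUser({
    email,
    password,
    email_confirm: true, // mark the address as already verified
  });
  if (error) throw error;

  // 3. Record the Firebase UID to Supabase UUID link for later reference.
  await supabase.from('user_id_map').insert(buildMappingRow(fbUser.uid, data.user.id));
  return data.user;
}
```

Because the password was just verified against Firebase, it can be set directly on the new Supabase account, so the user never notices the switch.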
Option B — Forced reset with notification. Export all user emails from Firebase, bulk-create Supabase Auth accounts, and send all users a "we have upgraded our platform — click here to set a new password" email. Simpler technically, but it requires coordinated communication and some users will not complete the reset, effectively losing access. Only use this for products with highly engaged user bases.
Social OAuth (Google, GitHub, Apple sign-in) migrates cleanly — users who re-authenticate via OAuth will get a new Supabase session tied to the same email address. The Firebase OAuth state is not portable, but the user's identity is anchored to their email, so as long as your mapping is email-based, OAuth users migrate invisibly on next login.
Step 4 — Rewrite Queries and Data Access
Every Firestore SDK call in your codebase needs to be replaced with a Supabase client call or a direct SQL query. This is typically the largest volume of code changes in the migration. A systematic approach: use your IDE to search for all imports of firebase/firestore and firebase/auth, then work through each file replacing Firebase calls with Supabase equivalents.
Firestore's collection().where().orderBy().limit() chains map cleanly to Supabase's from('table').select().eq().order().limit() syntax. Complex Firestore queries that required data duplication (since Firestore cannot join) can now be expressed as SQL JOINs via Supabase's PostgREST API: from('orders').select('*, users(name, email)').
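A concrete before/after pair makes the mapping tangible. The table and column names (orders, profiles, status) are illustrative, and both functions are wrapped so nothing runs without real clients and credentials.

```javascript
// Before: Firestore query, with user fields duplicated onto each
// order document because Firestore cannot join.
async function recentPaidOrdersFirestore(db, userId) {
  const { collection, query, where, orderBy, limit, getDocs } = require('firebase/firestore');
  const q = query(
    collection(db, 'orders'),
    where('user_id', '==', userId),
    where('status', '==', 'paid'),
    orderBy('created_at', 'desc'),
    limit(20)
  );
  return (await getDocs(q)).docs.map((d) => ({ id: d.id, ...d.data() }));
}

// After: the same query in Supabase, joining the profile row instead
// of duplicating its fields into every order.
async function recentPaidOrdersSupabase(supabase, userId) {
  const { data, error } = await supabase
    .from('orders')
    .select('*, profiles(name, email)')
    .eq('user_id', userId)
    .eq('status', 'paid')
    .order('created_at', { ascending: false })
    .limit(20);
  if (error) throw error;
  return data;
}
```

The Supabase version also removes the composite-index dance: Postgres plans the query itself, and you add indexes only where the query planner shows they are needed.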
For very complex queries, use Supabase's database functions (Postgres functions callable via RPC): supabase.rpc('get_user_dashboard_data', { user_id: userId }). This keeps complex SQL out of your client code and makes it easy to optimize later without touching the frontend.
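The function behind that RPC call might look like the following. This is a hypothetical example: the function body, the orders table, and the parameter name p_user_id (prefixed to avoid clashing with the column name) are all assumptions.

```sql
-- Callable from the client as:
--   supabase.rpc('get_user_dashboard_data', { p_user_id: userId })
create or replace function get_user_dashboard_data(p_user_id uuid)
returns json
language sql
stable
as $$
  select json_build_object(
    'order_count', (select count(*) from orders where user_id = p_user_id),
    'total_spent', (select coalesce(sum(total_cents), 0) from orders where user_id = p_user_id)
  );
$$;
```

Because the SQL lives in the database, you can later rewrite it against new indexes or materialized views without shipping a frontend release.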
Step 5 — Replace Firebase Storage
Firebase Cloud Storage maps directly to Supabase Storage; both provide an object store with fine-grained access control (and Supabase Storage also exposes an S3-compatible API). Export your Firebase Storage bucket with gsutil (gsutil -m rsync -r gs://your-project.appspot.com ./export) to download all files to a local directory, then upload them to a Supabase storage bucket using the Supabase CLI or a bulk upload script built on the Supabase JavaScript client.
Update all storage URLs in your database (file URLs stored in Firestore document fields) to point to the new Supabase storage URLs. If you have many such references, a database migration script that does a search-and-replace on the URL prefix is the most efficient approach.
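A sketch of that rewrite is below. Both URL prefixes are illustrative; substitute your real Firebase bucket and Supabase project/bucket names before running anything like this.

```javascript
// Hypothetical prefixes: check your actual Firebase bucket and
// Supabase project URL before using.
const OLD_PREFIX = 'https://firebasestorage.googleapis.com/v0/b/my-app.appspot.com/o/';
const NEW_PREFIX = 'https://my-project.supabase.co/storage/v1/object/public/uploads/';

function rewriteStorageUrl(url) {
  if (!url || !url.startsWith(OLD_PREFIX)) return url; // leave non-matching URLs alone
  // Firebase URL-encodes the object path and appends ?alt=media&token=...;
  // strip the query string and decode the path for the Supabase URL.
  const encodedPath = url.slice(OLD_PREFIX.length).split('?')[0];
  return NEW_PREFIX + decodeURIComponent(encodedPath);
}
```

Run this over every URL column (or do the equivalent replace() in a single SQL UPDATE) before pointing the frontend at Supabase, so no stale Firebase links survive the cutover.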
Set up Supabase storage bucket policies to match your Firebase Storage security rules. Supabase storage policies use the same RLS pattern as database policies — users can only access files in folders named with their user ID, or files tagged with their organization ID.
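For example, a per-user-folder policy on a hypothetical uploads bucket might look like this (the bucket name and folder convention are assumptions):

```sql
-- Users may read only files whose top-level folder matches their own
-- user ID, e.g. uploads/<user-id>/avatar.png.
create policy "users read own folder"
  on storage.objects for select
  using (
    bucket_id = 'uploads'
    and (storage.foldername(name))[1] = auth.uid()::text
  );
```

Matching insert/update/delete policies follow the same shape, one policy per verb, just as with table RLS.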
Step 6 — Real-Time Features
If your app uses Firebase's real-time listeners (onSnapshot), you need to replace them with Supabase Realtime subscriptions. Supabase Realtime works differently from Firestore's per-document listeners — instead of listening to a specific document path, you subscribe to changes on a table and filter them client-side or with server-side filters.
A Firestore listener like onSnapshot(doc(db, 'chats', chatId), handler) becomes a Supabase channel subscription: supabase.channel('chat').on('postgres_changes', { event: '*', schema: 'public', table: 'messages', filter: `chat_id=eq.${chatId}` }, handler).subscribe(). The pattern is more verbose but more powerful — you can subscribe to any combination of INSERT, UPDATE, and DELETE events and apply server-side filters to reduce unnecessary traffic.
For presence and broadcast features (showing who is currently online, collaborative cursors, etc.), Supabase Realtime channels support both Presence and Broadcast built in. These do not require a database round-trip — they are ephemeral real-time messages through the channel system.
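A minimal presence sketch, assuming supabase-js v2; the room naming scheme and tracked payload are illustrative:

```javascript
// Join a presence channel and report who is online whenever the
// member list changes. Nothing here writes to the database.
function joinPresence(supabase, roomId, userId, onSync) {
  const channel = supabase.channel(`room:${roomId}`, {
    config: { presence: { key: userId } },
  });

  channel.on('presence', { event: 'sync' }, () => {
    // presenceState() returns a map of presence key -> tracked payloads
    onSync(channel.presenceState());
  });

  channel.subscribe(async (status) => {
    if (status === 'SUBSCRIBED') {
      // Announce ourselves; the entry disappears automatically on disconnect.
      await channel.track({ online_at: new Date().toISOString() });
    }
  });

  return channel;
}
```
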
Step 7 — The Zero-Downtime Cutover
The cutover strategy depends on how much read/write traffic your application handles. For most SaaS applications with a well-defined user base, the following approach works well.
At least two weeks before cutover, run your Supabase backend in parallel with Firebase. Any writes to Firebase (still the system of record at this point) should also be mirrored to Supabase, using a thin adapter layer or a Cloud Function trigger. This keeps both systems in sync and gives you a rollback option.
On cutover day, put your application into a brief maintenance mode (a static "we're upgrading" page works), perform a final Firestore export, apply to Supabase any delta records written since the last sync, then switch your application's environment variables to point to Supabase, deploy, and bring the app back online. The window is typically 15–30 minutes for most applications.
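Isolating the delta is simple if your documents carry a last-modified timestamp. A sketch, assuming an updated_at field (the field name is an assumption about your data):

```javascript
// Keep only the documents written after the last successful sync,
// so only the delta needs to be re-applied to Supabase.
function deltaSince(docs, lastSyncIso) {
  const cutoff = Date.parse(lastSyncIso);
  return docs.filter((d) => Date.parse(d.updated_at) > cutoff);
}
```

If your documents lack such a timestamp, add one via the mirroring layer during the parallel-run period so it exists by cutover day.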
Monitor error rates, authentication failures, and slow query logs in Supabase's dashboard for the first 24 hours. Have your Firebase project on standby — do not delete it for at least 30 days after cutover. Most issues surface within the first few hours and are quickly resolved, but having Firebase available as a reference source prevents disasters.