Portmux
OLTP DATABASE → DATA WAREHOUSE MIGRATION

Postgres to Snowflake
migration service.

Portmux is a Postgres to Snowflake migration service that backfills your historical OLTP data and stands up a continuous CDC pipeline so your warehouse stays current without your application team owning a sync cron job.

Postgres-to-Snowflake is a structural translation, not a copy. Postgres types (JSONB, ARRAY, ENUM, TIMESTAMPTZ, GEOGRAPHY) need explicit Snowflake equivalents. Sequences and SERIAL columns become AUTOINCREMENT or sequence references. Foreign keys are advisory in Snowflake. We make every translation decision deliberately, document it, and prove correctness with row-level reconciliation against the source.

FIG. POSTGRES → SNOWFLAKE
SOURCE: Postgres (OLTP database) → DESTINATION: Snowflake (data warehouse)
4–6 weeks typical · 0ms cutover downtime · $12K starting fee
§ WHAT WE MIGRATE

Every object, every field.
From Postgres, into Snowflake.

Snowflake is a columnar OLAP warehouse, not a clone of Postgres. The migration has two parts: a backfill (one-time historical load with type translation) and CDC (continuous replication for ongoing sync). We design both together so cutover is a clean swap, not a freeze-and-load.
Tables & Schemas

Postgres schemas mapped to Snowflake schemas; tables created with translated types and clustering keys on frequently filtered columns.

Data Types

JSONB → VARIANT, ARRAY → ARRAY, ENUM → VARCHAR (Snowflake has no CHECK constraints, so allowed values are validated in the load pipeline), TIMESTAMPTZ → TIMESTAMP_TZ, GEOGRAPHY → GEOGRAPHY (or VARCHAR fallback).
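
As a sketch (table and columns are illustrative, not from your schema):

    -- Postgres source:
    --   CREATE TABLE orders (id SERIAL PRIMARY KEY, meta JSONB, tags TEXT[],
    --     status order_status, created_at TIMESTAMPTZ);
    CREATE TABLE analytics.orders (
        id         NUMBER AUTOINCREMENT,  -- SERIAL → AUTOINCREMENT
        meta       VARIANT,               -- JSONB → VARIANT
        tags       ARRAY,                 -- TEXT[] → ARRAY
        status     VARCHAR(20),           -- ENUM → VARCHAR, values checked at load
        created_at TIMESTAMP_TZ           -- TIMESTAMPTZ → TIMESTAMP_TZ
    );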

Primary & Foreign Keys

Primary keys preserved as Snowflake constraints (informational, not enforced). Foreign keys imported as constraint metadata for BI tooling.

Sequences

Postgres sequences and SERIAL columns mapped to Snowflake AUTOINCREMENT or explicit SEQUENCE objects, with the next value carried forward.
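
One way to carry the counter forward (sequence name and value are illustrative):

    -- On Postgres, read the high-water mark:
    --   SELECT last_value FROM orders_id_seq;   -- e.g. 48213
    -- Recreate it in Snowflake starting past that value:
    CREATE SEQUENCE analytics.orders_id_seq START = 48214 INCREMENT = 1;
    -- Used as the column default at table creation:
    --   id NUMBER DEFAULT analytics.orders_id_seq.NEXTVAL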

Indexes

Postgres B-tree, GIN, and GiST indexes do not exist in Snowflake. Replaced with clustering keys, search optimization, or materialized views as appropriate.
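
Two common replacements, sketched against an illustrative orders table:

    -- A B-tree index on a date column becomes a clustering key:
    ALTER TABLE analytics.orders CLUSTER BY (created_at);
    -- A GIN index serving selective lookups becomes search optimization:
    ALTER TABLE analytics.orders ADD SEARCH OPTIMIZATION;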

Materialized Views

Postgres materialized views recreated as Snowflake materialized views (Enterprise+) or scheduled tasks producing target tables.

Stored Procedures & Functions

PL/pgSQL functions translated to Snowflake JavaScript or SQL stored procedures. Complex procedures flagged for application-tier rewrite.
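
A minimal sketch of the shape (procedure and tables assumed; real translations are reviewed individually):

    -- Snowflake SQL procedure standing in for a simple PL/pgSQL function:
    CREATE OR REPLACE PROCEDURE refresh_order_totals()
    RETURNS VARCHAR
    LANGUAGE SQL
    AS
    $$
    BEGIN
      CREATE OR REPLACE TABLE analytics.order_totals AS
        SELECT customer_id, SUM(amount) AS total
        FROM analytics.orders
        GROUP BY customer_id;
      RETURN 'ok';
    END;
    $$;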

Views

All views recreated in Snowflake with definitions translated; any Postgres-specific functions in view bodies replaced with Snowflake equivalents.

Roles & Permissions

Postgres roles and grants analyzed and re-implemented as Snowflake roles with DATABASE/SCHEMA/TABLE grants and warehouse access.
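
A minimal sketch of one re-implemented role (role, database, and warehouse names assumed):

    CREATE ROLE analyst;
    GRANT USAGE ON DATABASE analytics TO ROLE analyst;
    GRANT USAGE ON SCHEMA analytics.public TO ROLE analyst;
    GRANT SELECT ON ALL TABLES IN SCHEMA analytics.public TO ROLE analyst;
    GRANT USAGE ON WAREHOUSE reporting_wh TO ROLE analyst;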

Partitions

Postgres declarative partitions consolidated into single Snowflake tables with clustering keys; Snowflake's automatic micro-partitioning replaces manual partition routing.

Triggers

Postgres triggers don't have a Snowflake equivalent. Replaced with stream-and-task pipelines or application-tier logic; documented per trigger.

Extensions

Postgres extensions (pg_trgm, postgis, uuid-ossp, hstore) replaced with Snowflake-native equivalents or staged through external functions.

§ HOW THIS MIGRATION RUNS

Three steps. One go-live date.

01
CONNECT

Plug into Postgres.

We connect to Postgres via a read replica, using a logical replication slot for CDC. A pg_dump --schema-only export drives the schema-translation pass. Within 72 hours you see a full type-mapping document: every Postgres type in your schema mapped to its Snowflake counterpart, with the rationale for each non-trivial choice.
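
The mechanics are plain (slot name is our convention; database name assumed):

    -- On the Postgres side: create the logical replication slot for CDC
    SELECT pg_create_logical_replication_slot('portmux_cdc', 'pgoutput');
    -- Schema-only export that drives the translation pass:
    --   pg_dump --schema-only --no-owner appdb > schema.sql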

02
MAP

Map to Snowflake.

Schema translation is reviewed table by table with your data team. We agree on clustering keys (typically a date column plus a frequently filtered dimension), how those keys shape Snowflake's automatic micro-partitioning, and which application tables warrant materialized views in Snowflake. Sample data is loaded into a Snowflake sandbox for type validation before the full backfill.

03
CUTOVER

Flip the connection.

Historical backfill runs as a one-time COPY INTO from staged Parquet exports of your Postgres tables, with row counts and SUM/MIN/MAX checksums per column reconciled to the source. The CDC pipeline (Debezium → Snowpipe, or a Fivetran-style connector) starts mid-backfill so the warehouse catches up to live within minutes of cutover.
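
Per table, the backfill load is a plain COPY INTO; a sketch with an assumed stage name:

    COPY INTO analytics.orders
      FROM @portmux_stage/orders/
      FILE_FORMAT = (TYPE = PARQUET)
      MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;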

§ WHERE IT GETS HARD

Postgres to Snowflake isn't a button.

Every migration has its own gotchas. Here's what we plan for on this specific path.

● 01

JSONB and array column translation

Postgres JSONB maps to Snowflake VARIANT, but the query syntax differs. Application queries that use jsonb_path_query or @> operators need rewriting against VARIANT functions. We deliver a query-translation cheat sheet for your BI team and rewrite saved queries in scope.
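
A representative line from the cheat sheet (table and key assumed):

    -- Postgres: containment test on JSONB
    --   SELECT * FROM orders WHERE meta @> '{"status": "paid"}';
    -- Snowflake: path access on VARIANT
    SELECT * FROM orders WHERE meta:status::STRING = 'paid';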

● 02

No row-level enforcement of foreign keys

Snowflake has FK constraints but doesn't enforce them at write time. Bad data loaded via COPY won't be rejected. We add validation queries to the load pipeline that flag orphan rows and route them to a quarantine table for review before they pollute the warehouse.
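
The validation query is an anti-join; a sketch with assumed table names:

    -- Orders that reference a customer missing from the warehouse are quarantined:
    INSERT INTO quarantine.orders_orphans
    SELECT o.*
    FROM analytics.orders o
    LEFT JOIN analytics.customers c ON o.customer_id = c.id
    WHERE c.id IS NULL;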

● 03

Triggers don't exist in Snowflake

Postgres triggers (audit logging, derived columns, row-level access checks) need to be re-implemented as Snowflake streams + tasks, application-tier logic, or column masking policies. We catalog every trigger in your source and propose the right Snowflake pattern per trigger.
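
As one example, an audit-logging trigger becomes a stream plus a task (warehouse, schedule, and table names assumed):

    CREATE STREAM orders_changes ON TABLE analytics.orders;
    CREATE TASK orders_audit
      WAREHOUSE = etl_wh
      SCHEDULE = '5 MINUTE'
    WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_CHANGES')
    AS
      INSERT INTO analytics.orders_audit
      SELECT id, METADATA$ACTION, CURRENT_TIMESTAMP() FROM orders_changes;
    ALTER TASK orders_audit RESUME;  -- tasks are created suspended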

● 04

Sequence drift during cutover

Postgres SERIAL columns increment on every insert. If both Postgres and Snowflake are accepting writes during the rehearsal phase (they shouldn't be, but it happens), sequences drift. We coordinate the freeze-and-final-delta sequence carefully and add a sentinel column to detect any drift before sign-off.
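
Alongside the sentinel column, a simple high-water-mark comparison catches drift (names assumed):

    -- On Postgres at freeze:  SELECT MAX(id) FROM public.orders;
    -- On Snowflake after the final delta load:
    SELECT MAX(id) FROM analytics.orders;
    -- Any mismatch means writes landed after the freeze; hold sign-off.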

§ STARTING PRICE

Postgres to Snowflake from $12K.

Single-system migrations like Postgres to Snowflake run as Track A engagements: one source, one destination, up to 1M records, 4–6 weeks. Final price depends on object volume, custom field count, and integrations; we scope it on a 20-minute call before any commitment. See full pricing →

TRACK A
FROM $12K
4–6 weeks · 1 source → 1 destination · up to 1M records
Get a quote
§ QUESTIONS

Postgres → Snowflake, asked.

How long does a Postgres to Snowflake migration take?
Schema translation and initial backfill for a typical operational Postgres database (under 1TB, fewer than 200 tables) run 4–6 weeks. Backfill itself is bandwidth-bound; expect 1–2 days for the data load. The bulk of the time is type translation, query rewriting for your BI tools, and CDC pipeline standup.
Will my application keep writing to Postgres after migration?
Yes. Snowflake is a warehouse, not an OLTP replacement. Postgres stays as your application's transactional store. The CDC pipeline replicates Postgres changes to Snowflake continuously so analytics, BI, and reporting see warehouse-fresh data within minutes of an OLTP write.
What about Postgres-specific types like JSONB and PostGIS?
JSONB maps to Snowflake VARIANT and most queries port with minor syntax changes (we deliver the cheat sheet). PostGIS GEOGRAPHY maps to Snowflake GEOGRAPHY (with some function gaps); for advanced spatial queries we recommend keeping a Postgres replica with PostGIS for those workloads, with the warehouse holding non-spatial analytics.
Do you set up the CDC pipeline, or just the historical backfill?
Both. Historical backfill is a one-time COPY INTO from Parquet exports. CDC is a continuous pipeline using Postgres logical replication (via Debezium, AWS DMS, or a Fivetran-style connector) into Snowpipe or Snowpipe Streaming. We deliver both with monitoring and alerting wired to your incident channel.
How do you verify that the warehouse matches the source?
Three checks per table: row count, SUM (or COUNT for non-numeric) of every column, and MIN/MAX of every datetime column. These run automatically post-backfill, with deltas flagged before sign-off. For high-stakes tables (financial transactions, audit logs), we also do row-level hash comparisons on a sampled subset.
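
As a sketch, the per-table check looks like this (column names assumed):

    SELECT COUNT(*)        AS row_count,
           SUM(amount)     AS amount_sum,
           MIN(created_at) AS created_min,
           MAX(created_at) AS created_max
    FROM analytics.orders;
    -- The same statement runs on Postgres; all four values must match.
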
NEXT CUTOVER

Book a 20-minute
scoping call.

Tell us what's in the source, where it's going, SaaS or custom, and when you need to be live. You'll walk away with a scoped quote, a named engineer, and a go-live date.

§ RELATED MIGRATIONS