Portmux is a Postgres to Snowflake migration service that backfills your historical OLTP data and stands up a continuous CDC pipeline so your warehouse stays current without your application team owning a sync cron job.
Postgres-to-Snowflake is a structural translation, not a copy. Postgres types (JSONB, ARRAY, ENUM, TIMESTAMPTZ, GEOGRAPHY) need explicit Snowflake equivalents. Sequences and SERIAL columns become AUTOINCREMENT or sequence references. Foreign keys are advisory in Snowflake. We make every translation decision deliberately, document it, and prove correctness with row-level reconciliation against the source.
Postgres schemas mapped to Snowflake schemas; tables created with translated types and clustering keys for high-cardinality columns.
JSONB → VARIANT, ARRAY → ARRAY, ENUM → VARCHAR with allowed values documented and validated in the load pipeline (Snowflake does not support CHECK constraints), TIMESTAMPTZ → TIMESTAMP_TZ, GEOGRAPHY → GEOGRAPHY (or VARCHAR fallback).
Primary keys preserved as Snowflake constraints (informational, not enforced). Foreign keys imported as constraint metadata for BI tooling.
Postgres sequences and SERIAL columns mapped to Snowflake AUTOINCREMENT or explicit SEQUENCE objects with the next-value carried forward.
Postgres B-tree, GIN, and GiST indexes do not exist in Snowflake. Replaced with clustering keys, search optimization, or materialized views as appropriate.
Postgres materialized views recreated as Snowflake materialized views (Enterprise+) or scheduled tasks producing target tables.
PL/pgSQL functions translated to Snowflake JavaScript or SQL stored procedures. Complex procedures flagged for application-tier rewrite.
All views recreated in Snowflake with definition translated; any Postgres-specific functions in view bodies replaced with Snowflake equivalents.
Postgres roles and grants analyzed and re-implemented as Snowflake roles with DATABASE/SCHEMA/TABLE grants and warehouse access.
Postgres declarative partitions consolidated into single Snowflake tables with clustering keys; pruning is handled automatically by Snowflake's micro-partitioning.
Postgres triggers don't have a Snowflake equivalent. Replaced with stream-and-task pipelines or application-tier logic; documented per trigger.
Postgres extensions (pg_trgm, postgis, uuid-ossp, hstore) replaced with Snowflake-native equivalents or staged through external functions.
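To make the translation concrete, here is a sketch of what one table might look like on each side. The table and its columns are illustrative, not from a real engagement; the AUTOINCREMENT start value stands in for the carried-forward sequence position.

```sql
-- Hypothetical Postgres source:
--   CREATE TABLE orders (
--     id         SERIAL PRIMARY KEY,
--     status     order_status,            -- ENUM type
--     payload    JSONB,
--     tags       TEXT[],
--     created_at TIMESTAMPTZ DEFAULT now()
--   );

-- One possible Snowflake translation:
CREATE TABLE orders (
    id         NUMBER AUTOINCREMENT START 100001 INCREMENT 1,  -- sequence next-value carried forward
    status     VARCHAR,        -- ENUM flattened; allowed values validated in the load pipeline
    payload    VARIANT,        -- JSONB
    tags       ARRAY,          -- TEXT[]
    created_at TIMESTAMP_TZ,
    PRIMARY KEY (id)           -- informational only; Snowflake does not enforce it
)
CLUSTER BY (TO_DATE(created_at));
```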
We connect to Postgres via a read replica, with a logical replication slot driving CDC. The pg_dump --schema-only export drives the schema translation pass. Within 72 hours you see a full type-mapping document: every Postgres type in your schema mapped to its Snowflake counterpart, with the rationale for each non-trivial choice.
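The CDC provisioning on the Postgres side amounts to a publication and a logical replication slot. Names below are illustrative; note that logical slots are created on the primary (decoding from a standby requires Postgres 16+).

```sql
-- Run on the Postgres primary to provision CDC (names illustrative):
CREATE PUBLICATION portmux_pub FOR ALL TABLES;
SELECT pg_create_logical_replication_slot('portmux_slot', 'pgoutput');

-- Confirm the slot exists and is retaining WAL:
SELECT slot_name, plugin, active FROM pg_replication_slots;
```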
Schema translation is reviewed table-by-table with your data team. We agree on clustering keys (typically by date and high-cardinality dimensions), micro-partitioning strategy, and which application tables warrant materialized views in Snowflake. Sample data is loaded into a Snowflake sandbox for type validation before the full backfill.
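A clustering decision from that review reduces to a one-line DDL statement, and Snowflake can score how well a candidate key fits before we commit to it. Table and column names here are hypothetical:

```sql
-- Apply the agreed clustering key (names illustrative):
ALTER TABLE events CLUSTER BY (event_date, customer_id);

-- Snowflake reports clustering depth and overlap for a candidate key:
SELECT SYSTEM$CLUSTERING_INFORMATION('events', '(event_date, customer_id)');
```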
Historical backfill runs as a one-time COPY INTO from staged Parquet exports of your Postgres tables, with row counts and SUM/MIN/MAX checksums per column reconciled to the source. The CDC pipeline (Debezium → Snowpipe, or a Fivetran-style managed connector) starts mid-backfill so the warehouse catches up to live within minutes of cutover.
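A sketch of the load and reconciliation steps, assuming a stage named `@portmux_stage` and an `orders` table (both illustrative). The reconciliation query runs unchanged on both systems and the result sets are diffed:

```sql
-- Backfill load from staged Parquet:
COPY INTO orders
  FROM @portmux_stage/orders/
  FILE_FORMAT = (TYPE = PARQUET)
  MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;

-- Per-table reconciliation, run on Postgres and on Snowflake, then compared:
SELECT COUNT(*)        AS row_count,
       SUM(total)      AS sum_total,
       MIN(created_at) AS min_created,
       MAX(created_at) AS max_created
FROM orders;
```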
Every migration has its own gotchas. Here's what we plan for on this specific path.
Postgres JSONB maps to Snowflake VARIANT, but the query syntax differs. Application queries that use jsonb_path_query or @> operators need rewriting against VARIANT functions. We deliver a query-translation cheat sheet for your BI team and rewrite saved queries in scope.
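Two entries from that cheat sheet, as a flavor of the rewrite. The `orders` table and `payload` column are hypothetical:

```sql
-- Postgres: containment test against JSONB
--   SELECT id FROM orders WHERE payload @> '{"status": "refunded"}';

-- Snowflake: path access plus a cast on VARIANT
SELECT id
FROM orders
WHERE payload:status::STRING = 'refunded';

-- Postgres: unnesting a JSONB array
--   SELECT item->>'sku' FROM orders, jsonb_array_elements(payload->'items') AS item;

-- Snowflake: LATERAL FLATTEN
SELECT item.value:sku::STRING AS sku
FROM orders,
     LATERAL FLATTEN(INPUT => payload:items) AS item;
```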
Snowflake has FK constraints but doesn't enforce them at write time. Bad data loaded via COPY won't be rejected. We add validation queries to the load pipeline that flag orphan rows and route them to a quarantine table for review before they pollute the warehouse.
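As an illustration of that validation step, a minimal orphan-row check might look like the following, where `orders`, `customers`, and `quarantine_orders` are hypothetical names:

```sql
-- After each load: copy order rows whose customer_id has no match
-- into quarantine, then remove them from the live table.
INSERT INTO quarantine_orders
SELECT o.*
FROM orders o
LEFT JOIN customers c ON o.customer_id = c.id
WHERE c.id IS NULL;

DELETE FROM orders
WHERE NOT EXISTS (
    SELECT 1 FROM customers c WHERE c.id = orders.customer_id
);
```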
Postgres triggers (audit logging, derived columns, row-level access checks) need to be re-implemented as Snowflake streams + tasks, application-tier logic, or column masking policies. We catalog every trigger in your source and propose the right Snowflake pattern per trigger.
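For an audit-logging trigger, one possible streams-and-tasks replacement looks like this. The table, task, and warehouse names are illustrative, and the five-minute schedule is a placeholder:

```sql
-- Change capture on the table the trigger used to watch:
CREATE STREAM orders_changes ON TABLE orders;

-- Periodic task that drains the stream into an audit table:
CREATE TASK audit_orders
  WAREHOUSE = etl_wh
  SCHEDULE = '5 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_CHANGES')
AS
  INSERT INTO orders_audit
  SELECT id, METADATA$ACTION, METADATA$ISUPDATE, CURRENT_TIMESTAMP()
  FROM orders_changes;

ALTER TASK audit_orders RESUME;  -- tasks are created suspended
```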
Postgres SERIAL columns increment on every insert. If both Postgres and Snowflake are accepting writes during the rehearsal phase (you shouldn't be, but it happens), sequences drift. We coordinate the freeze-and-final-delta sequence carefully and add a sentinel column to detect any drift before sign-off.
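A minimal drift check under these assumptions (an `orders` table keyed by a SERIAL `id`; names illustrative) compares high-water marks on both sides after the freeze:

```sql
-- Run on Postgres after the write freeze:
--   SELECT last_value FROM orders_id_seq;   -- next value the sequence would hand out
--   SELECT MAX(id)    FROM orders;
-- Run on Snowflake after the final delta is applied:
SELECT MAX(id) AS max_id FROM orders;
-- If Snowflake's MAX(id) trails Postgres's, rows are missing from the delta;
-- an id present in Snowflake but absent from Postgres means a stray write
-- reached the warehouse during rehearsal.
```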
Single-system migrations like Postgres to Snowflake run as Track A engagements: one source, one destination, up to 1M records, 4–6 weeks. Final price depends on object volume, custom field count, and integrations, scoped on a 20-minute call before any commitment. See full pricing →
Tell us what's in the source, where it's going, SaaS or custom, and when you need to be live. You'll walk away with a scoped quote, a named engineer, and a go-live date.