Portmux
Data Warehouse → Data Warehouse MIGRATION

Redshift to BigQuery
migration service.

Portmux is a Redshift to BigQuery migration service that moves your schemas, tables, UDFs, scheduled queries, and IAM model into BigQuery with type-safe translation and row-level reconciliation against the source.

Redshift-to-BigQuery is usually a cost or operational move: BigQuery's serverless model and per-query pricing fit some workloads better than Redshift's provisioned cluster model. The data move is mechanical (UNLOAD to S3, transfer to GCS, load via BigQuery Data Transfer Service). The work is in the SQL: Redshift's PostgreSQL-flavored SQL doesn't always run in BigQuery's GoogleSQL, and DISTSTYLE / SORTKEY / DISTKEY have no direct BigQuery equivalents.

FIG. REDSHIFT → BIGQUERY
SOURCE
Redshift
Data Warehouse
DESTINATION
BigQuery
Data Warehouse
4–6
Weeks typical
0ms
Cutover downtime
$12k
Starting fee
§ WHAT WE MIGRATE

Every object, every field.
From Redshift, into BigQuery.

Both Redshift and BigQuery are columnar warehouses, but their physical storage models differ. Redshift uses distribution and sort keys; BigQuery uses partitioning and clustering. We redesign each table's physical layout to BigQuery best practices during the mapping phase.
Datasets & Schemas

Redshift schemas mapped to BigQuery datasets; cross-schema dependencies preserved with explicit dataset references in views.

Tables & Data Types

Tables created in BigQuery with translated types: VARCHAR → STRING, TIMESTAMPTZ → TIMESTAMP, NUMERIC(p,s) → NUMERIC, SUPER → STRUCT/JSON; partitioning and clustering set per usage analysis.
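As a rough illustration of that translation pass, the base-type mapping can be sketched as below. The table contents, function name, and STRING fallback are hypothetical, not Portmux's actual implementation:

```python
# Illustrative subset of the Redshift -> BigQuery type mapping described above.
REDSHIFT_TO_BIGQUERY = {
    "VARCHAR": "STRING",
    "CHAR": "STRING",
    "TIMESTAMPTZ": "TIMESTAMP",   # BigQuery TIMESTAMP is absolute (UTC-based)
    "TIMESTAMP": "DATETIME",      # no time zone on either side
    "NUMERIC": "NUMERIC",
    "SUPER": "JSON",              # or STRUCT when the shape is fixed
    "BOOLEAN": "BOOL",
    "DOUBLE PRECISION": "FLOAT64",
    "INTEGER": "INT64",
    "BIGINT": "INT64",
}

def translate_type(redshift_type: str) -> str:
    """Map a Redshift column type to its BigQuery equivalent.

    Parameterized types like NUMERIC(18,2) match on their base name;
    unknown types fall back to STRING so the load never hard-fails.
    """
    base = redshift_type.split("(")[0].strip().upper()
    return REDSHIFT_TO_BIGQUERY.get(base, "STRING")
```

So `translate_type("NUMERIC(18,2)")` yields `NUMERIC`, and `translate_type("VARCHAR(256)")` yields `STRING`.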

Distribution & Sort Keys

Redshift DISTSTYLE/DISTKEY/SORTKEY analyzed and translated to BigQuery PARTITION BY (typically date) and CLUSTER BY (high-cardinality columns) for query performance.
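A minimal sketch of the resulting DDL shape, assuming a helper that takes the partition and cluster choices from the usage analysis (the function name and signature are illustrative):

```python
def bigquery_ddl(table, columns, partition_col=None, cluster_cols=()):
    """Render CREATE TABLE DDL with the PARTITION BY / CLUSTER BY clauses
    that replace Redshift's DISTKEY/SORTKEY physical layout."""
    cols = ",\n  ".join(f"{name} {typ}" for name, typ in columns)
    ddl = f"CREATE TABLE `{table}` (\n  {cols}\n)"
    if partition_col:
        ddl += f"\nPARTITION BY DATE({partition_col})"
    if cluster_cols:
        ddl += "\nCLUSTER BY " + ", ".join(cluster_cols)
    return ddl
```

For a typical fact table this produces date partitioning on the event timestamp plus clustering on the former DISTKEY column, e.g. `PARTITION BY DATE(event_ts)` / `CLUSTER BY user_id`.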

Materialized Views

Redshift Materialized Views recreated as BigQuery Materialized Views where the definition fits BigQuery's MV query restrictions, or as scheduled queries producing target tables where it doesn't.

Stored Procedures

Redshift PL/pgSQL stored procedures translated to BigQuery scripting (BEGIN/END blocks); procedure calls in dbt or external tools updated with new syntax.

UDFs

SQL UDFs and Python UDFs (Redshift-specific) translated to BigQuery SQL UDFs or JavaScript UDFs; some Python UDFs require redesign as BigQuery doesn't support Python.

External Tables (Spectrum)

Redshift Spectrum external tables (S3-backed) recreated as BigQuery External Tables on Cloud Storage with the same Parquet/ORC partitioning and schema.

Views

All views recreated in BigQuery with definition translated; Redshift-specific functions in view bodies replaced with BigQuery equivalents.

Scheduled Queries

Redshift scheduled queries and Lambda-triggered SQL re-implemented as BigQuery Scheduled Queries with the same cron schedule and target table.

IAM & Permissions

Redshift users, groups, and grants translated to BigQuery Cloud IAM principals with dataset/table-level permissions and row-level security policies preserved.

Workload Management

Redshift WLM queues mapped to BigQuery slot reservations and assignments (Editions / Reservations) for predictable query performance.

AWS Glue Catalog Integrations

AWS Glue Data Catalog references (used by Spectrum) updated to BigQuery External Tables on Cloud Storage; Iceberg tables linked via BigLake.

§ HOW THIS MIGRATION RUNS

Three steps. One go-live date.

01
CONNECT

Plug into Redshift.

We connect to Redshift via JDBC with a read-only role and to BigQuery via a Service Account granted BigQuery Admin on the target datasets. The information_schema export drives the schema translation pass. Within 72 hours you see a complete type mapping document, a query log analysis (which tables and columns are hot), and a recommended partitioning and clustering strategy per table.

02
MAP

Map to BigQuery.

Schema translation reviewed table-by-table with your data team. Distribution/sort keys analyzed against BigQuery partition/cluster best practices. Materialized views, UDFs, and stored procedures are translated with explicit per-object documentation. dbt projects (if you use dbt) get a separate translation pass with a list of jinja and SQL changes.

03
CUTOVER

Flip the connection.

Historical backfill runs via UNLOAD from Redshift to S3, transfer S3 → GCS via Storage Transfer Service, then load GCS → BigQuery via BigQuery Load Jobs. Row counts and SUM/MIN/MAX checksums are reconciled per table before sign-off. An optional CDC pipeline (via third-party CDC tooling) provides ongoing replication if you need a parallel-run period.
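The reconciliation step can be sketched as a comparison of per-table aggregate metrics. The dict shape and tolerance handling here are assumptions for illustration, not the actual Portmux pipeline:

```python
def reconcile(source: dict, target: dict, tol: float = 0.0) -> list:
    """Compare per-table reconciliation metrics (row count, plus SUM/MIN/MAX
    per numeric column) and return the names of mismatched metrics.

    `source` and `target` are flat dicts like
    {"row_count": 1000, "amount_sum": 12345.67, "amount_min": 0.01}
    produced by running matching aggregate queries on Redshift and BigQuery.
    """
    mismatches = []
    for metric, expected in source.items():
        actual = target.get(metric)
        if actual is None or abs(actual - expected) > tol:
            mismatches.append(metric)
    return mismatches
```

An empty return list means the table passes; anything else blocks sign-off for that table. A small nonzero `tol` absorbs float rounding differences between the two engines' SUM implementations.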

§ WHERE IT GETS HARD

Redshift to BigQuery isn't a button.

Every migration has its own gotchas. Here's what we plan for on this specific path.

● 01

SQL dialect differences

Redshift uses PostgreSQL-flavored SQL; BigQuery uses GoogleSQL. Common gotchas: date functions (DATE_TRUNC vs DATETIME_TRUNC), string functions (POSITION, REGEXP_REPLACE), window functions with frame clauses, ::TYPE casts, and CTE recursion semantics. We deliver a query-translation cheat sheet and rewrite saved queries in scope.
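One concrete example from that cheat sheet: GoogleSQL rejects PostgreSQL's `expr::type` cast shorthand. A naive regex rewrite (illustrative only; the type subset is hypothetical and a real translator works on a parsed SQL tree, not regexes) looks like:

```python
import re

# Illustrative subset: Redshift cast-target names -> BigQuery type names.
CAST_TYPES = {"varchar": "STRING", "text": "STRING", "int": "INT64",
              "bigint": "INT64", "float8": "FLOAT64", "date": "DATE"}

def rewrite_casts(sql: str) -> str:
    """Rewrite PostgreSQL-style expr::type casts, which GoogleSQL rejects,
    into CAST(expr AS type). Handles bare identifiers and simple
    parenthesized expressions only."""
    pattern = re.compile(r"(\w+|\([^()]*\))::(\w+)")
    def repl(m):
        target = CAST_TYPES.get(m.group(2).lower(), m.group(2).upper())
        return f"CAST({m.group(1)} AS {target})"
    return pattern.sub(repl, sql)
```

So `SELECT id::varchar FROM t` becomes `SELECT CAST(id AS STRING) FROM t`. Nested expressions, string literals containing `::`, and dialect-specific functions are exactly why the rewrite is done with tooling plus per-query review rather than a one-shot script.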

● 02

No DISTSTYLE/SORTKEY equivalent

Redshift's DISTSTYLE and SORTKEY drive physical placement and join performance. BigQuery uses partitioning (typically by date) and clustering (up to 4 columns). We analyze your query log to identify the highest-impact partition + cluster choices per table; some Redshift-optimized join strategies need redesign for BigQuery.

● 03

Python UDF translation

Redshift supports Python UDFs (with NumPy/Pandas in the runtime). BigQuery SQL UDFs are SQL-only; BigQuery JavaScript UDFs run JavaScript only, no Python. Python UDFs need redesign: simple cases become SQL or JavaScript UDFs; complex Python (ML inference, complex string parsing) moves to a Cloud Function called from BigQuery via a Remote Function.
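The Remote Function path follows BigQuery's documented HTTP contract: BigQuery POSTs a JSON body containing a `calls` array (one argument list per row) and expects a `{"replies": [...]}` JSON response with one result per call, in order. A minimal handler sketch, where the email-parsing logic is a hypothetical stand-in for whatever the original Python UDF did:

```python
import json

def handle_remote_function(request_body: str) -> str:
    """Minimal BigQuery Remote Function handler: parse the "calls" array
    from the request body and return {"replies": [...]} with one result
    per input row, preserving order."""
    body = json.loads(request_body)
    replies = []
    for (email,) in body["calls"]:          # one positional argument per row
        replies.append(email.split("@")[-1] if "@" in email else None)
    return json.dumps({"replies": replies})
```

In production this runs inside a Cloud Function or Cloud Run service, and the BigQuery side declares a matching `CREATE FUNCTION ... REMOTE WITH CONNECTION` over it.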

● 04

Cost model change

Redshift uses provisioned cluster pricing (you pay per hour for capacity). BigQuery uses on-demand pricing (per-byte-scanned) by default, or capacity-based pricing (Editions / Reservations) for predictable workloads. The migration includes a cost projection based on your query log so you can pick the right BigQuery pricing model from day one.
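A back-of-envelope version of that projection, assuming on-demand pricing at a configurable dollars-per-TiB rate after a monthly free tier (the defaults reflect published on-demand pricing at the time of writing; verify current rates before relying on them):

```python
def monthly_cost_estimate(monthly_bytes_scanned: int,
                          price_per_tib: float = 6.25,
                          free_tib: float = 1.0) -> float:
    """Rough on-demand BigQuery cost model: dollars per TiB scanned,
    after the monthly free tier. Rates are assumptions, not quotes."""
    tib = monthly_bytes_scanned / 2**40
    return max(tib - free_tib, 0.0) * price_per_tib
```

Running this against total bytes scanned from the Redshift query log gives a first on-demand estimate; if the number exceeds the cost of a slot reservation at your concurrency level, capacity-based pricing is usually the better fit.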

§ STARTING PRICE

Redshift to BigQuery from $12K.

Single-system migrations like Redshift to BigQuery run as Track A engagements: one source, one destination, up to 1M records, 4–6 weeks. Final price depends on object volume, custom field count, and integrations, scoped on a 20-minute call before any commitment. See full pricing →

TRACK A
FROM $12K
4–6 weeks · 1 source → 1 destination · up to 1M records
Get a quote
§ QUESTIONS

Redshift → BigQuery, asked.

How long does a Redshift to BigQuery migration take? +
Standard Redshift-to-BigQuery migrations run 4–6 weeks. Schema translation and initial backfill take the first 2–3 weeks; query rewriting (especially if your team uses dbt) is the bulk of the remaining time. Warehouses with heavy Python UDF or stored procedure usage take longer because each one needs redesign.
Will my BI tools (Tableau, Looker, Mode) keep working? +
BI tools point to BigQuery via the BigQuery JDBC/ODBC driver after migration. Most dashboards work after the connection swap, since Looker, Tableau, and Mode all support BigQuery natively. Custom SQL queries embedded in dashboards may need translation; we identify these in the query log analysis and rewrite the affected queries during the engagement.
Can I keep Redshift running during the migration? +
Yes. The migration runs without disrupting Redshift; queries continue executing against Redshift while we backfill BigQuery. Once BigQuery is reconciled, optional CDC keeps both warehouses in sync until cutover. After cutover, BI tools point to BigQuery and Redshift stays as a read-only safety net for 30–60 days.
Do you migrate dbt projects? +
Yes, on request. dbt projects are translated for BigQuery: the dbt-bigquery adapter swap, Jinja macro updates for BigQuery-specific syntax, refactoring of incremental models that use Redshift-specific UPSERT patterns, and snapshot updates for BigQuery time travel. Smaller dbt projects (under 100 models) translate quickly; larger projects are scoped as a separate workstream.
What about Redshift Spectrum tables? +
Spectrum external tables become BigQuery External Tables (or BigLake tables for Iceberg/Delta formats) with the underlying data either staying in S3 (cross-cloud query via BigQuery Omni) or being copied to Cloud Storage. We pick the right pattern based on data volume and access patterns.
NEXT CUTOVER

Book a 20-minute
scoping call.

Tell us what's in the source, where it's going, SaaS or custom, and when you need to be live. You'll walk away with a scoped quote, a named engineer, and a go-live date.

§ RELATED MIGRATIONS