Data Overview
PostgreSQL data flows pull data directly from your database using either table selection or custom SQL queries. The data structure depends on what you're importing.
Import modes
Table or view
Import all or most data from a single table or materialized view.
Pull the customers table as-is.
Custom SQL
Write a query to filter, join, aggregate, or transform data.
```sql
SELECT customer_id, COUNT(*) AS order_count
FROM orders
GROUP BY customer_id;
```
Available columns and data types
When you import from PostgreSQL, all columns and their data types come through as-is. Common PostgreSQL data types include:
Numeric types
INTEGER
Whole numbers
BIGINT
Large whole numbers
DECIMAL / NUMERIC
Precise decimal numbers
FLOAT / DOUBLE PRECISION
Floating-point numbers
Text types
VARCHAR / TEXT
Variable-length text
CHAR
Fixed-length text
Date and time types
DATE
Calendar date
TIME
Time of day
TIMESTAMP / TIMESTAMPTZ
Date and time (TIMESTAMP without a time zone, TIMESTAMPTZ with one)
Other types
BOOLEAN
True/False values
UUID
Unique identifiers
JSON / JSONB
JSON objects (imported as text)
ARRAY
Arrays (imported as text)
Common query patterns
Here are useful SQL queries you can paste into the Custom SQL field:
Filter by date range:
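For example, a sketch assuming an orders table with a created_at timestamp column (adjust names to your schema):

```sql
-- Hypothetical table and column names.
SELECT order_id, customer_id, total, created_at
FROM orders
WHERE created_at >= '2024-01-01'
  AND created_at <  '2024-04-01';
```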
Aggregate data:
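For instance, monthly revenue per customer, assuming the same hypothetical orders table:

```sql
-- Hypothetical schema: sum order totals by customer and month.
SELECT customer_id,
       date_trunc('month', created_at) AS month,
       SUM(total) AS revenue
FROM orders
GROUP BY customer_id, date_trunc('month', created_at)
ORDER BY month;
```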
Join multiple tables:
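For example, assuming hypothetical customers and orders tables related by customer_id:

```sql
-- LEFT JOIN keeps customers with zero orders in the result.
SELECT c.customer_id, c.name, COUNT(o.order_id) AS order_count
FROM customers c
LEFT JOIN orders o ON o.customer_id = c.customer_id
GROUP BY c.customer_id, c.name;
```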
Use cases by role
Analysts use PostgreSQL data flows to build custom datasets for analysis. Write SQL queries to join tables, aggregate metrics, and filter to specific segments. Export to BigQuery for large-scale analysis or to Google Sheets for quick exploration. Use the Join transformation to combine PostgreSQL data with data from other sources (e.g., marketing platforms).
Finance teams pull revenue, expense, and transaction data from PostgreSQL to Google Sheets or Excel for reconciliation and reporting. Use Custom SQL to aggregate by department, month, or cost center. Append data from multiple accounts or time periods using the Append transformation, then send to Looker Studio for executive dashboards.
Product teams query user behavior, feature adoption, and event data from PostgreSQL. Write SQL to filter by cohort, feature flag, or date range. Send aggregated metrics to Looker Studio or Claude (via Coupler.io) to generate insights and identify trends automatically.
Ops teams use PostgreSQL data flows to monitor application health, sync customer or employee data, and automate status reporting. Pull from logs, events, or operational tables and send to Google Sheets for team dashboards. Use Custom SQL to filter errors, slow queries, or specific time windows.
Platform-specific notes
IP whitelisting required — Coupler.io connects from fixed IP addresses. You must whitelist these in your PostgreSQL firewall or security group (AWS RDS, Azure Database, etc.).
Read-only access — Your database user only needs SELECT permissions; no writes or schema changes.
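One way to set up such a user (a sketch; the role name, password, and schema are placeholders):

```sql
-- Create a dedicated read-only role for the connection.
CREATE USER coupler_reader WITH PASSWORD 'choose-a-strong-password';
GRANT USAGE ON SCHEMA public TO coupler_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO coupler_reader;
-- Also cover tables created later:
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO coupler_reader;
```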
Large result sets — If your query returns millions of rows, consider filtering by date, aggregating, or using LIMIT to avoid timeouts.
Connection pooling — PostgreSQL limits concurrent connections (max_connections server-wide, plus optional per-role limits). If you run many data flows, you may hit this limit; consider using a connection pooler or increasing max_connections.
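To check where you stand before raising limits (the role name below is a placeholder):

```sql
-- Server-wide cap and current usage.
SHOW max_connections;
SELECT count(*) FROM pg_stat_activity;
-- Optionally cap a single role's concurrent connections:
ALTER ROLE coupler_reader CONNECTION LIMIT 5;
```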
Custom SQL performance — Complex queries with multiple joins or aggregations may take longer. Test in psql or your query tool first.
JSON/ARRAY types — These are imported as text strings. Parse them in your destination (Google Sheets formulas, BigQuery JSON functions, etc.) if needed.
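Alternatively, you can extract the fields you need in Custom SQL before import, so the destination receives plain columns. A sketch assuming a hypothetical events table with a JSONB payload column and a text-array tags column:

```sql
-- ->> returns a JSON field as text; cast as needed.
SELECT event_id,
       payload->>'user_id'        AS user_id,
       (payload->>'count')::int   AS event_count,
       array_to_string(tags, ',') AS tags_csv
FROM events;
```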