# Data Overview

PostgreSQL data flows pull data directly from your database using either table selection or custom SQL queries. The structure of the imported data depends on which mode you choose.

## Import modes

| Mode              | Use case                                                          | Example                                                                        |
| ----------------- | ----------------------------------------------------------------- | ------------------------------------------------------------------------------ |
| **Table or view** | Import all or most data from a single table or materialized view. | Pull the `customers` table as-is.                                              |
| **Custom SQL**    | Write a query to filter, join, aggregate, or transform data.      | `SELECT customer_id, COUNT(*) as order_count FROM orders GROUP BY customer_id` |

## Available columns and data types

When you import from PostgreSQL, all columns and their data types come through as-is. Common PostgreSQL data types include:

#### Numeric types

| Type                         | Description             |
| ---------------------------- | ----------------------- |
| `INTEGER`                    | Whole numbers           |
| `BIGINT`                     | Large whole numbers     |
| `DECIMAL` / `NUMERIC`        | Precise decimal numbers |
| `FLOAT` / `DOUBLE PRECISION` | Floating-point numbers  |

#### Text types

| Type               | Description          |
| ------------------ | -------------------- |
| `VARCHAR` / `TEXT` | Variable-length text |
| `CHAR`             | Fixed-length text    |

#### Date and time types

| Type                        | Description                              |
| --------------------------- | ---------------------------------------- |
| `DATE`                      | Calendar date                            |
| `TIME`                      | Time of day                              |
| `TIMESTAMP` / `TIMESTAMPTZ` | Date and time (`TIMESTAMPTZ` includes a time zone; `TIMESTAMP` does not) |

#### Other types

| Type             | Description                     |
| ---------------- | ------------------------------- |
| `BOOLEAN`        | True/False values               |
| `UUID`           | Unique identifiers              |
| `JSON` / `JSONB` | JSON objects (imported as text) |
| `ARRAY`          | Arrays (imported as text)       |
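
Because `JSON`/`JSONB` and `ARRAY` columns arrive as plain text, you can flatten them in a Custom SQL query before import. A minimal sketch — the `users` table, its `metadata` JSONB column, and its `tags` array are hypothetical stand-ins for your own schema:

```sql
-- Extract a JSON field and flatten an array into a delimited string
-- so each value imports as a simple text column.
SELECT
  id,
  metadata->>'plan' AS plan,           -- ->> returns the JSON value as text
  array_to_string(tags, ', ') AS tags  -- the array becomes one readable cell
FROM users
```

The `->>` operator and `array_to_string()` are standard PostgreSQL features, so this runs wherever your database does.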

## Common query patterns

Here are useful SQL queries you can paste into the Custom SQL field:

**Filter by date range:**

```sql
SELECT * FROM orders 
WHERE created_at >= '2024-01-01' AND created_at < '2024-02-01'
```

**Aggregate data:**

```sql
SELECT user_id, COUNT(*) as order_count, SUM(total) as revenue 
FROM orders 
GROUP BY user_id
```

**Join multiple tables:**

```sql
SELECT c.name, o.order_id, o.total 
FROM customers c 
JOIN orders o ON c.id = o.customer_id 
WHERE o.created_at >= NOW() - INTERVAL '30 days'
```
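
**Cap the result set:**

If a query returns more rows than you need, sorting and limiting keeps imports fast (the `orders` table and `created_at` column are illustrative):

```sql
SELECT * FROM orders
ORDER BY created_at DESC
LIMIT 100000
```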

## Use cases by role

{% tabs %}
{% tab title="Analysts" %}
Analysts use PostgreSQL data flows to build custom datasets for analysis. Write SQL queries to join tables, aggregate metrics, and filter to specific segments. Export to BigQuery for large-scale analysis or to Google Sheets for quick exploration. Use the Join transformation to combine PostgreSQL data with data from other sources (e.g., marketing platforms).
{% endtab %}

{% tab title="Finance teams" %}
Finance teams pull revenue, expense, and transaction data from PostgreSQL to Google Sheets or Excel for reconciliation and reporting. Use Custom SQL to aggregate by department, month, or cost center. Append data from multiple accounts or time periods using the Append transformation, then send to Looker Studio for executive dashboards.
{% endtab %}

{% tab title="Product managers" %}
Product teams query user behavior, feature adoption, and event data from PostgreSQL. Write SQL to filter by cohort, feature flag, or date range. Send aggregated metrics to Looker Studio or Claude (via Coupler.io) to generate insights and identify trends automatically.
{% endtab %}

{% tab title="Operations" %}
Ops teams use PostgreSQL data flows to monitor application health, sync customer or employee data, and automate status reporting. Pull from logs, events, or operational tables and send to Google Sheets for team dashboards. Use Custom SQL to filter errors, slow queries, or specific time windows.
{% endtab %}
{% endtabs %}

## Platform-specific notes

* **IP whitelisting required** — Coupler.io connects from fixed IP addresses. You must whitelist these in your PostgreSQL firewall or security group (AWS RDS, Azure Database, etc.).
* **Read-only access** — Your database user only needs SELECT permissions; no writes or schema changes.
* **Large result sets** — If your query returns millions of rows, consider filtering by date, aggregating, or using LIMIT to avoid timeouts.
* **Connection pooling** — PostgreSQL limits concurrent connections (server-wide via `max_connections`, and optionally per role). If you run many data flows, you may hit this limit; consider using a connection pooler or raising `max_connections`.
* **Custom SQL performance** — Complex queries with multiple joins or aggregations may take longer. Test in psql or your query tool first.
* **JSON/ARRAY types** — These are imported as text strings. Parse them in your destination (Google Sheets formulas, BigQuery JSON functions, etc.) if needed.
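
To follow the read-only recommendation above, you can create a dedicated `SELECT`-only user for the connection. A minimal sketch, assuming your tables live in the `public` schema (the user name is your choice):

```sql
-- Create a dedicated read-only user for Coupler.io
CREATE USER coupler_reader WITH PASSWORD 'choose-a-strong-password';
GRANT USAGE ON SCHEMA public TO coupler_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO coupler_reader;
-- Also cover tables created after this point
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT SELECT ON TABLES TO coupler_reader;
```

Connecting with this user guarantees the data flow can read but never modify your data.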
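
If JSON columns land in BigQuery as text, you can parse them there rather than in PostgreSQL. A hedged example using BigQuery's JSON functions — the dataset, table, and field names are hypothetical:

```sql
-- BigQuery: pull a scalar field out of an imported JSON string column
SELECT
  id,
  JSON_EXTRACT_SCALAR(metadata, '$.plan') AS plan
FROM my_dataset.users_import
```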
