# Common Issues

## Connection issues

<details>

<summary>Connection fails after entering credentials</summary>

The most common cause is blocked network access. If your Redshift cluster is inside a VPC or behind a firewall, Coupler.io's requests will be rejected unless you allowlist its IP addresses:

* `34.123.243.115`
* `34.170.96.92`

Add these to your cluster's security group inbound rules (port 5439, TCP) and try connecting again.
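If you manage security groups from the command line, the same inbound rules can be added with the AWS CLI. A sketch, assuming a placeholder security group ID of `sg-0123456789abcdef0` — substitute your cluster's actual security group:

```shell
# Allow each Coupler.io IP to reach Redshift on port 5439 (TCP).
# sg-0123456789abcdef0 is a placeholder — use your cluster's security group ID.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 5439 \
  --cidr 34.123.243.115/32

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 5439 \
  --cidr 34.170.96.92/32
```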

{% hint style="warning" %}
If your cluster is paused, the connection will also fail. Resume the cluster in the AWS console before attempting to connect.
{% endhint %}

</details>

<details>

<summary>Wrong host or port causes timeout</summary>

Double-check the endpoint in your AWS console under **Clusters → your cluster → General information → Endpoint**. The default port is `5439` — if your cluster was configured with a custom port, use that value instead. Do not include the database name in the host field.
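To rule out a networking problem before re-checking credentials, you can test reachability from any machine with `psql` installed. A sketch using a placeholder endpoint — replace the host, port, database, and user with your own values:

```shell
# Placeholder connection string — substitute your cluster's endpoint details.
# A timeout here points to network/security-group issues, not credentials.
psql "host=your-cluster.abc123xyz.us-east-1.redshift.amazonaws.com port=5439 dbname=dev user=your_user"
```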

</details>

## Data issues

<details>

<summary>Append mode breaks after the source adds new columns</summary>

This is a known behavior. When a data source (such as Google Ads) introduces new columns, the existing Redshift table schema won't automatically update to accommodate them. Coupler.io will fail to insert rows that contain columns the table doesn't recognize.

To fix this, you have two options:

1. **Manually add the new columns** to the Redshift table using `ALTER TABLE` before running the data flow again.
2. **Switch to Replace mode temporarily**, run the data flow once to recreate the table with the new schema, then switch back to Append mode.
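For option 1, the new columns can be added with `ALTER TABLE`. A sketch, assuming the source added a text column named `campaign_group` — replace the table, column name, and type to match your source:

```sql
-- Redshift allows only one ADD COLUMN per ALTER TABLE statement,
-- so repeat this for each new column the source introduced.
ALTER TABLE your_schema.your_table ADD COLUMN campaign_group VARCHAR(256);
```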

{% hint style="danger" %}
Option 2 will delete all existing rows in the table. If you need historical data preserved, use option 1 or export your data first.
{% endhint %}

</details>

<details>

<summary>Data types are wrong or cause insert errors</summary>

Coupler.io detects column types from the source data and enforces them when creating the table. If the source returns inconsistent types (e.g., a column that sometimes contains numbers and sometimes text), the load may fail or cast incorrectly.

Check the raw data in the **Preview** step of your data flow. If a column has mixed types, consider using a transformation to normalize it before it reaches Redshift.
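If the table already exists, you can compare the types Coupler.io created against what the source is sending. This query uses the standard `information_schema.columns` view (replace the placeholder names):

```sql
-- List each column's declared type in the target table.
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_schema = 'your_schema'
  AND table_name = 'your_table'
ORDER BY ordinal_position;
```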

</details>

<details>

<summary>Table or schema is not being created automatically</summary>

Coupler.io will only create schemas and tables if the database user has the necessary privileges. Ensure your user has `CREATE` privileges at both the database level (for schema creation) and within the target schema (for table creation). Run the following in Redshift to verify or grant permissions:

```sql
GRANT CREATE ON DATABASE your_database TO your_user;
GRANT CREATE ON SCHEMA your_schema TO your_user;
```
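To confirm the grants took effect, Redshift's built-in privilege functions can be queried (same placeholder names as above):

```sql
-- Both should return true (t) for automatic schema and table creation to work.
SELECT has_database_privilege('your_user', 'your_database', 'CREATE');
SELECT has_schema_privilege('your_user', 'your_schema', 'CREATE');
```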

</details>

## Permission errors

<details>

<summary>"Permission denied" errors during data load</summary>

This usually means the database user has read-only access or is missing `INSERT` permission on the target table. Grant the required permissions:

```sql
GRANT INSERT ON TABLE your_schema.your_table TO your_user;
```

For Replace mode, the user also needs `DROP` permission on the table and `CREATE` permission on its schema, because Replace drops and recreates the table on every run.
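Redshift supports table-level `DROP` grants, so for Replace mode the full set of additional grants might look like this (a sketch with the same placeholder names):

```sql
-- DROP on the table lets the user remove it; CREATE on the schema
-- lets the user recreate it with the latest schema.
GRANT DROP ON TABLE your_schema.your_table TO your_user;
GRANT CREATE ON SCHEMA your_schema TO your_user;
```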

</details>

<details>

<summary>User can connect but can't see or write to a specific schema</summary>

In Redshift, schema access is separate from database access. The user may be authenticated but not granted `USAGE` on a specific schema. Run:

```sql
GRANT USAGE ON SCHEMA your_schema TO your_user;
GRANT CREATE ON SCHEMA your_schema TO your_user;
```
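You can verify the result with `has_schema_privilege` (placeholders as above):

```sql
-- USAGE lets the user see objects in the schema; CREATE lets them add tables.
-- Both should return true (t).
SELECT has_schema_privilege('your_user', 'your_schema', 'USAGE');
SELECT has_schema_privilege('your_user', 'your_schema', 'CREATE');
```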

</details>
