Common Issues

Connection issues

Connection test fails with "timeout" or "could not connect"

Redshift clusters are often deployed in private VPCs. Coupler.io needs network access to reach your cluster.

Solutions:

  • Confirm your Redshift cluster is publicly accessible (or has a public endpoint enabled)

  • Verify the host, port, and database name are correct

  • Check your Redshift security group allows inbound traffic on port 5439 (or your custom port)

  • Contact Coupler.io support for the IP addresses to whitelist in your security group

  • If your cluster is in a private VPC, consider using a public endpoint or an SSH tunnel (contact support)

"Invalid username or password" error

The credentials you entered don't match your Redshift user account.

Solutions:

  • Double-check your Redshift username and password

  • Verify the user has read access to the database and schema you're querying

  • Confirm you're using the correct database name

  • Reset your Redshift password in the AWS console and re-test
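If you can connect as an administrator, the checks above can be run directly in Redshift. A minimal sketch, assuming a placeholder user name `coupler_user` (substitute your own; the password shown is only an example):

```sql
-- Run as an admin. Confirm the user account actually exists:
SELECT usename FROM pg_user WHERE usename = 'coupler_user';

-- Reset the password if needed (Redshift requires 8-64 characters,
-- with at least one uppercase letter, one lowercase letter, and one digit):
ALTER USER coupler_user PASSWORD 'NewSecurePassw0rd';
```

After resetting the password, re-run the connection test in Coupler.io with the new credentials.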

Missing data

Data in Coupler.io doesn't match Redshift

If the data pulled into your destination differs from what you see in Redshift, several factors could be responsible.

Solutions:

  • Run your SQL query directly in Redshift and verify the results match what Coupler.io exported

  • Check for NULL values or unexpected data types in your columns

  • Confirm no other processes are modifying your Redshift tables between the query and export

  • If using a Custom SQL query with a WHERE clause, verify the filter conditions are correct

  • Review your Coupler.io data flow logs for any warnings or truncation messages

Column values are NULL or blank in the destination

This can happen if your Redshift data contains NULL values or if there's a data type mismatch.

Solutions:

  • Check your Redshift table for NULL values using SELECT COUNT(*) FROM table WHERE column IS NULL

  • Use COALESCE() in your Custom SQL to replace NULLs with a default value: SELECT COALESCE(column_name, 'N/A') FROM table

  • Verify column data types in Redshift match what your destination expects

  • If using formulas in your destination (Google Sheets, Excel), test with non-NULL values first
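As a sketch, the NULL check and the COALESCE() replacement might look like this (the `orders` table and `discount` column are placeholder names; note that if the column is numeric, it must be cast to text before substituting a string default):

```sql
-- How many NULLs does the column contain?
SELECT COUNT(*) FROM orders WHERE discount IS NULL;

-- Replace NULLs with a default value before export.
-- The cast to VARCHAR avoids a type mismatch between a numeric column and 'N/A'.
SELECT order_id,
       COALESCE(discount::VARCHAR, 'N/A') AS discount
FROM orders;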

Not pulling recent data

Your data flow might be querying a stale view or cached results.

Solutions:

  • Confirm new data is actually being written to your Redshift table (check the last_modified or insertion timestamps)

  • If using a materialized view, verify it's been refreshed recently

  • Run a manual refresh of your data flow to trigger an immediate pull

  • Check Coupler.io's data flow logs for any query errors or unexpected row counts

  • If data is inserted into Redshift after your scheduled refresh time, adjust your schedule or add a slight delay
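The first two checks above can be done with a couple of queries. A minimal sketch, assuming placeholder names `orders`, `updated_at`, and `orders_mv`:

```sql
-- When was the table last written to?
SELECT MAX(updated_at) FROM orders;

-- If the data flow reads from a materialized view, refresh it
-- so the next pull sees the latest rows:
REFRESH MATERIALIZED VIEW orders_mv;
```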

Permission errors

"Permission denied" when querying a table

Your Redshift user account lacks read access to the table or schema.

Solutions:

  • Verify your Redshift user has SELECT privileges on the table: GRANT SELECT ON table_name TO user_name;

  • Confirm the schema is accessible: GRANT USAGE ON SCHEMA schema_name TO user_name;

  • If the table is in a different schema, include the schema name in your query: SELECT * FROM schema_name.table_name

  • Contact your Redshift administrator to grant the necessary permissions
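Putting the grants together, an administrator could run something like the following (the `analytics` schema, `orders` table, and `coupler_user` user are placeholder names):

```sql
-- Grant access to the schema and the table:
GRANT USAGE ON SCHEMA analytics TO coupler_user;
GRANT SELECT ON analytics.orders TO coupler_user;

-- Confirm the grant took effect (returns true if the user can SELECT):
SELECT HAS_TABLE_PRIVILEGE('coupler_user', 'analytics.orders', 'SELECT');
```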

Data discrepancies

Data in preview doesn't match exported data

The preview in Coupler.io might show a subset of rows (often the first 100), while your full export contains all rows.

Solutions:

  • This is expected behavior. The preview is a sample; the export includes all rows matching your query

  • To verify, check the row count in your destination and compare it to the Redshift table row count

  • If row counts differ significantly, review your Custom SQL query for unintended filters or joins

Numeric precision lost or decimals truncated

Data types in your destination (Google Sheets, Excel) may not support the full precision of Redshift's numeric columns.

Solutions:

  • Use CAST() or :: in your Custom SQL to convert high-precision numbers to text if needed: SELECT amount::VARCHAR FROM orders

  • Check your destination's column format settings (e.g., decimal places in Excel)

  • For very large or precise numbers, export as text to preserve full precision
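Both conversion approaches can be combined in one query. A sketch with placeholder names (`orders` table, `amount` column):

```sql
SELECT amount::VARCHAR AS amount_text,               -- export as text to keep full precision
       CAST(amount AS DECIMAL(18, 4)) AS amount_4dp  -- or round to a fixed scale
FROM orders;
```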

Performance and timeouts

Data flow times out or runs slowly

Redshift queries can time out if they're complex, access very large tables, or run during peak cluster usage.

Solutions:

  • Add a LIMIT clause to your Custom SQL to test with a smaller dataset first

  • Add filters or WHERE clauses to reduce the number of rows returned

  • Check your Redshift cluster's CPU and memory usage; consider scaling up if usage is high

  • Run the data flow during off-peak hours

  • If your query involves large joins or aggregations, test it directly in Redshift first to measure execution time

  • For tables with millions of rows, consider using Redshift's SORTKEY and DISTKEY settings to optimize query performance
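To measure a query before scheduling it, you can inspect its plan and test against a small sample directly in Redshift. A sketch with placeholder names and an example filter date:

```sql
-- Inspect the query plan to spot expensive joins or full scans:
EXPLAIN
SELECT customer_id, SUM(amount)
FROM orders
WHERE order_date >= '2024-01-01'
GROUP BY customer_id;

-- Test the data flow with a small sample before running the full export:
SELECT * FROM orders
WHERE order_date >= '2024-01-01'
LIMIT 100;
```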
