Common Issues
Connection issues
Connection test fails with "timeout" or "could not connect"
Redshift clusters are often deployed in private VPCs. Coupler.io needs network access to reach your cluster.
Solutions:
Confirm your Redshift cluster is publicly accessible (or has a public endpoint enabled)
Verify the host, port, and database name are correct
Check your Redshift security group allows inbound traffic on port 5439 (or your custom port)
Contact Coupler.io support for the IP addresses to whitelist in your security group
If your cluster is in a private VPC, consider using a public endpoint or an SSH tunnel (contact support)
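Before digging into security groups, it can help to confirm basic TCP reachability from a machine outside your VPC. A minimal sketch in Python; the hostname below is a placeholder, so substitute your cluster's actual endpoint and port:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers timeouts, connection refusals, and DNS failures
        return False

# Placeholder endpoint -- replace with your cluster's host (5439 is the default port).
print(can_reach("my-cluster.abc123.us-east-1.redshift.amazonaws.com", 5439))
```

If this prints False from a host that should have access, the problem is network-level (security group, VPC routing, or public accessibility) rather than credentials.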
"Invalid username or password" error
The credentials you entered don't match your Redshift user account.
Solutions:
Double-check your Redshift username and password
Verify the user has read access to the database and schema you're querying
Confirm you're using the correct database name
Reset your Redshift password in the AWS console and re-test
Missing data
Data in Coupler.io doesn't match Redshift
If the data pulled into your destination differs from what you see in Redshift, several things could be at play.
Solutions:
Run your SQL query directly in Redshift and verify the results match what Coupler.io exported
Check for NULL values or unexpected data types in your columns
Confirm no other processes are modifying your Redshift tables between the query and export
If using a Custom SQL query with a WHERE clause, verify the filter conditions are correct
Review your Coupler.io data flow logs for any warnings or truncation messages
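One quick sanity check is to run the exact query you configured in Coupler.io and compare its row count with the destination. A sketch using Python's sqlite3 as a stand-in for a Redshift connection (the orders table and query are illustrative):

```python
import sqlite3

# In-memory sqlite as a stand-in for Redshift; table and data are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "paid"), (2, "paid"), (3, "refunded")])

# Run the same query Coupler.io runs and note the row count.
query = "SELECT * FROM orders WHERE status = 'paid'"
rows = conn.execute(query).fetchall()
print(len(rows))  # compare this count with the row count in your destination
```

If the counts match but the values differ, the discrepancy is likely in how the destination formats the data rather than in the query itself.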
Column values are NULL or blank in the destination
This can happen if your Redshift data contains NULL values or if there's a data type mismatch.
Solutions:
Check your Redshift table for NULL values: SELECT COUNT(*) FROM table WHERE column IS NULL
Use COALESCE() in your Custom SQL to replace NULLs with a default value: SELECT COALESCE(column_name, 'N/A') FROM table
Verify column data types in Redshift match what your destination expects
If using formulas in your destination (Google Sheets, Excel), test with non-NULL values first
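The COALESCE() tip above can be seen end to end in a small runnable sketch. This uses Python's sqlite3 as a stand-in for Redshift (the COALESCE syntax is the same), with a hypothetical customers table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for Redshift; syntax is identical here
conn.execute("CREATE TABLE customers (name TEXT, phone TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [("Ada", "555-0100"), ("Bob", None)])

# COALESCE substitutes 'N/A' wherever phone is NULL, so the destination
# receives a value instead of a blank cell.
result = conn.execute(
    "SELECT name, COALESCE(phone, 'N/A') FROM customers"
).fetchall()
print(result)  # [('Ada', '555-0100'), ('Bob', 'N/A')]
```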
Not pulling recent data
Your data flow might be querying a stale view or cached results.
Solutions:
Confirm new data is actually being written to your Redshift table (check the last_modified or insertion timestamps)
If using a materialized view, verify it's been refreshed recently
Run a manual refresh of your data flow to trigger an immediate pull
Check Coupler.io's data flow logs for any query errors or unexpected row counts
If data is inserted into Redshift after your scheduled refresh time, adjust your schedule or add a slight delay
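A simple freshness check is to query the newest insertion timestamp and compare it to your refresh window. A sketch with sqlite3 standing in for Redshift; the events table and inserted_at column are hypothetical names:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

conn = sqlite3.connect(":memory:")  # stand-in for Redshift
conn.execute("CREATE TABLE events (id INTEGER, inserted_at TEXT)")
now = datetime.now(timezone.utc)
conn.executemany("INSERT INTO events VALUES (?, ?)", [
    (1, (now - timedelta(hours=30)).isoformat()),
    (2, (now - timedelta(hours=2)).isoformat()),
])

# If the newest row predates your last scheduled refresh, the data flow had
# nothing new to pull -- adjust the schedule rather than the query.
latest = conn.execute("SELECT MAX(inserted_at) FROM events").fetchone()[0]
is_stale = datetime.fromisoformat(latest) < now - timedelta(hours=24)
print(is_stale)
```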
Permission errors
"Permission denied" when querying a table
Your Redshift user account lacks read access to the table or schema.
Solutions:
Verify your Redshift user has SELECT privileges on the table: GRANT SELECT ON table_name TO user_name;
Confirm the schema is accessible: GRANT USAGE ON SCHEMA schema_name TO user_name;
If the table is in a different schema, include the schema name in your query: SELECT * FROM schema_name.table_name
Contact your Redshift administrator to grant the necessary permissions
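Schema-qualified queries are easy to test in miniature. The sketch below uses sqlite3's ATTACH purely to illustrate the schema_name.table_name form; in Redshift the schema already exists and needs GRANT USAGE as shown above, and the analytics schema here is a hypothetical name:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# ATTACH creates a second namespace, mimicking a Redshift schema for this demo.
conn.execute("ATTACH DATABASE ':memory:' AS analytics")
conn.execute("CREATE TABLE analytics.orders (id INTEGER)")
conn.execute("INSERT INTO analytics.orders VALUES (1)")

# The schema-qualified name resolves the table, just as schema_name.table_name
# does in a Redshift Custom SQL query.
rows = conn.execute("SELECT * FROM analytics.orders").fetchall()
print(rows)  # [(1,)]
```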
Data discrepancies
Data in preview doesn't match exported data
The preview in Coupler.io might show a subset of rows (often the first 100), while your full export contains all rows.
Solutions:
This is expected behavior. The preview is a sample; the export includes all rows matching your query
To verify, check the row count in your destination and compare it to the Redshift table row count
If row counts differ significantly, review your Custom SQL query for unintended filters or joins
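The preview-versus-export difference amounts to an implicit LIMIT on the preview. A sketch with sqlite3 standing in for Redshift (the 100-row preview size is the commonly observed figure, not a guarantee):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for Redshift
conn.execute("CREATE TABLE events (id INTEGER)")
conn.executemany("INSERT INTO events VALUES (?)", [(i,) for i in range(250)])

query = "SELECT * FROM events"
preview = conn.execute(query + " LIMIT 100").fetchall()  # what a preview shows
full = conn.execute(query).fetchall()                    # what the export delivers
print(len(preview), len(full))  # 100 250
```

So a 100-row preview of a 250-row result is expected; only a mismatch between the full export and the source query is worth investigating.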
Numeric precision lost or decimals truncated
Data types in your destination (Google Sheets, Excel) may not support the full precision of Redshift's numeric columns.
Solutions:
Use CAST() or :: in your Custom SQL to convert high-precision numbers to text if needed: SELECT amount::VARCHAR FROM orders
Check your destination's column format settings (e.g., decimal places in Excel)
For very large or precise numbers, export as text to preserve full precision
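The effect of casting to text is easy to demonstrate. The sketch below uses sqlite3 as a stand-in for Redshift: the same 19-digit value survives intact as text but loses digits once it passes through a floating-point column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for Redshift
value = "1234567890.123456789"  # 19 significant digits

as_float = conn.execute("SELECT CAST(? AS REAL)", (value,)).fetchone()[0]
as_text = conn.execute("SELECT CAST(? AS TEXT)", (value,)).fetchone()[0]

print(repr(as_float))  # precision lost past ~15-16 significant digits
print(repr(as_text))   # '1234567890.123456789' -- full precision preserved
```

This is why exporting high-precision amounts as text (amount::VARCHAR) is the safest route into spreadsheet destinations.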
Rate limits
Data flow times out or runs slowly
Redshift queries can time out if they're complex, access very large tables, or run during peak cluster usage.
Solutions:
Add a LIMIT clause to your Custom SQL to test with a smaller dataset first
Add filters or WHERE clauses to reduce the number of rows returned
Check your Redshift cluster's CPU and memory usage; consider scaling up if usage is high
Run the data flow during off-peak hours
If your query involves large joins or aggregations, test it directly in Redshift first to measure execution time
For tables with millions of rows, consider using Redshift's SORTKEY and DISTKEY settings to optimize query performance
10 million cell limit: Coupler.io has a limit of 10 million cells per data flow. If your Redshift query returns more rows or columns than this threshold allows, consider filtering your data or splitting it into multiple data flows.
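Whether a query fits under the limit is simple arithmetic: rows multiplied by columns must stay at or below 10 million cells. A small helper makes the check explicit (the function name is illustrative):

```python
def exceeds_cell_limit(rows: int, columns: int, limit: int = 10_000_000) -> bool:
    """Cells = rows x columns; Coupler.io caps a single data flow at 10 million."""
    return rows * columns > limit

# 2 million rows x 6 columns = 12 million cells -> needs filtering or splitting
print(exceeds_cell_limit(2_000_000, 6))   # True
# 1.5 million rows x 6 columns = 9 million cells -> fits in one data flow
print(exceeds_cell_limit(1_500_000, 6))   # False
```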