Common Issues
Connection issues
"Save and Run" button is greyed out
This happens when your data flow has no source connected yet. A data flow needs at least one fully configured source before it can be saved and run.
Go back to the Source step and make sure your source is connected and returning data, then return to the Destination step.
Connection fails after uploading the JSON key file
Double-check that you uploaded the correct file. The key file must be in JSON format and downloaded directly from a GCP Service Account. Common mistakes include uploading a .p12 key, a key from the wrong project, or a file that has been renamed or modified.
If the file looks correct, verify that the Service Account is not disabled in GCP and that the key has not been revoked.
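If you're unsure whether the file you downloaded is a valid JSON service account key, a quick local sanity check can rule out the common mistakes above. The sketch below assumes the key's contents are available as a string; `check_key_text` is a hypothetical helper, not part of Coupler.io or GCP tooling.

```python
import json

# Fields present in every JSON key downloaded from a GCP Service Account.
# A .p12 key or a renamed/modified file will typically fail this check.
REQUIRED_FIELDS = {
    "type", "project_id", "private_key_id", "private_key",
    "client_email", "token_uri",
}

def check_key_text(text: str) -> list[str]:
    """Return a list of problems found in the key text; empty means it looks OK."""
    try:
        key = json.loads(text)
    except json.JSONDecodeError:
        return ["File is not valid JSON (is it a .p12 key?)"]
    problems = []
    if key.get("type") != "service_account":
        problems.append("'type' is not 'service_account'")
    missing = REQUIRED_FIELDS - key.keys()
    if missing:
        problems.append(f"Missing fields: {sorted(missing)}")
    return problems
```

A key that passes this check can still be revoked or belong to a disabled Service Account, so verify its status in GCP as well.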
Permission errors
"User does not have bigquery.datasets.create permission" or "Access Denied"
This means the GCP Service Account used for the connection doesn't have sufficient IAM permissions. Your Service Account needs one of the following combinations:
Option 1 — Predefined roles:
bigquery.dataEditor
bigquery.jobUser
Option 2 — Individual permissions:
bigquery.tables.create
bigquery.tables.updateData
bigquery.jobs.create
After updating the roles in GCP, you must generate a new JSON key file and re-upload it to Coupler.io. Updating roles alone without replacing the key will not fix the error.
Role changes in GCP can take a few minutes to propagate. Wait a moment before generating and uploading the new key.
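The two options above can be checked mechanically: either both predefined roles are granted, or all three individual permissions are. This sketch expresses that rule; the `roles/` prefix is how these roles appear in GCP IAM, and the helper itself is hypothetical.

```python
# Option 1 — both predefined roles must be granted.
ROLE_OPTION = {"roles/bigquery.dataEditor", "roles/bigquery.jobUser"}

# Option 2 — all three individual permissions must be granted.
PERMISSION_OPTION = {
    "bigquery.tables.create",
    "bigquery.tables.updateData",
    "bigquery.jobs.create",
}

def has_required_access(granted_roles: set[str], granted_permissions: set[str]) -> bool:
    """True if the Service Account satisfies either option."""
    return ROLE_OPTION <= granted_roles or PERMISSION_OPTION <= granted_permissions
```

Note that a partial grant (for example, only bigquery.dataEditor) satisfies neither option and will still produce the permission error.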
Data issues
Column names appear as string_field_0, string_field_1, etc.
Coupler.io uses BigQuery's auto schema detection by default. BigQuery generates generic column names like string_field_0 when it can't infer a proper schema. This happens in two situations:
All columns contain only text values — BigQuery needs at least one boolean, date/time, or numeric field to anchor the schema.
The source dataset is empty — BigQuery has no data to sample.
To fix this:
If the dataset was empty, expand your reporting period or remove filters so data is returned, then rerun in Replace mode.
If all columns are text, consider defining the schema manually by disabling Autodetect table schema and entering your column definitions as JSON.
See the schema definition guide for the manual schema format.
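To illustrate the manual route, here is a hypothetical schema for a table whose columns are all text, written in the JSON array format BigQuery's bq tool uses (objects with "name", "type", and "mode"). Check Coupler.io's schema definition guide for the exact format it expects; the column names here are invented.

```python
import json

# Hypothetical manual schema: three all-text columns that autodetect
# would otherwise name string_field_0, string_field_1, string_field_2.
schema_json = """
[
  {"name": "customer_name", "type": "STRING", "mode": "NULLABLE"},
  {"name": "country",       "type": "STRING", "mode": "NULLABLE"},
  {"name": "notes",         "type": "STRING", "mode": "NULLABLE"}
]
"""

schema = json.loads(schema_json)
# With an explicit schema, column names come from "name", not string_field_N.
column_names = [field["name"] for field in schema]
```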
"BigQuery doesn't allow values of multiple types in a single column"
This error occurs when a column's data type in BigQuery no longer matches the values Coupler.io is trying to load. This is common when:
A source API starts returning a field as a different type (for example, integers sent as 545652.0 instead of 545652)
A column contained mixed types across different runs and BigQuery locked in the wrong type
This error will disable your data flow if left unresolved. Do not just retry — the underlying schema conflict needs to be fixed first.
To resolve:
Disable Autodetect table schema in your BigQuery destination settings.
Define the schema manually, setting the affected column to the correct type (for example, FLOAT instead of INTEGER).
If the BigQuery table was auto-created, you may need to delete it and rerun the data flow in Replace mode to recreate it with the correct schema.
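To see why FLOAT resolves the conflict, consider the 545652.0 example from above: once the source starts sending decimal notation, the values no longer fit an INTEGER column, but a FLOAT column accepts both the old and the new representation. A minimal sketch:

```python
# Values the source sent across different runs for the same column.
values_across_runs = ["545652", "545652.0"]

def fits_integer(value: str) -> bool:
    """True if the string parses as a whole number (INTEGER-compatible)."""
    try:
        int(value)
        return True
    except ValueError:
        return False

def fits_float(value: str) -> bool:
    """True if the string parses as a number at all (FLOAT-compatible)."""
    try:
        float(value)
        return True
    except ValueError:
        return False

# "545652.0" breaks an INTEGER column, but every value fits FLOAT.
all_fit_integer = all(fits_integer(v) for v in values_across_runs)
all_fit_float = all(fits_float(v) for v in values_across_runs)
```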
A new column added to the source isn't appearing in BigQuery
When you add a new field to your source (for example, a new Airtable column), you also need to refresh the source schema inside Coupler.io. Open the Source step of your data flow, refresh the field list, confirm the new column is selected, then save and rerun.
If you're using a manually defined schema in BigQuery, you'll also need to add the new column to your schema definition before running — otherwise BigQuery will reject the extra field.
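Extending a manual schema is just adding one more field object before rerunning. The sketch below uses the bq-style JSON format from earlier and a hypothetical "due_date" Airtable column; the names are illustrative, not taken from your data flow.

```python
import json

# Hypothetical existing manual schema before the new source column.
schema = json.loads("""
[
  {"name": "task", "type": "STRING",  "mode": "NULLABLE"},
  {"name": "done", "type": "BOOLEAN", "mode": "NULLABLE"}
]
""")

# Append a definition for the newly added source column so BigQuery
# won't reject the extra field on the next run.
schema.append({"name": "due_date", "type": "DATE", "mode": "NULLABLE"})

updated_schema_json = json.dumps(schema, indent=2)
```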