Common Issues

Connection issues

"Save and Run" button is greyed out

This happens when your data flow has no source connected yet. BigQuery requires at least one source to be fully configured before the data flow can be saved and run.

Go back to the Source step and make sure your source is connected and returning data, then return to the Destination step.

Connection fails after uploading the JSON key file

Double-check that you uploaded the correct file. The key file must be in JSON format and downloaded directly from a GCP Service Account. Common mistakes include uploading a .p12 key, a key from the wrong project, or a file that has been renamed or modified.

If the file looks correct, verify that the Service Account is not disabled in GCP and that the key has not been revoked.
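If you want to sanity-check a key file before uploading it, the following sketch verifies the basics. This is not Coupler.io's actual validation logic, just a minimal check that the file parses as JSON and contains the fields every GCP Service Account key includes:

```python
import json

# Fields present in every GCP Service Account JSON key file.
REQUIRED_FIELDS = {"type", "project_id", "private_key_id", "private_key", "client_email"}

def check_key_file(path):
    """Return a list of problems found in the key file (empty list = looks OK)."""
    try:
        with open(path) as f:
            key = json.load(f)
    except (json.JSONDecodeError, UnicodeDecodeError):
        # A .p12 key or a hand-edited file will fail to parse as JSON.
        return ["file is not valid JSON (is it a .p12 key, or was it modified?)"]
    problems = []
    if key.get("type") != "service_account":
        problems.append("'type' is not 'service_account'")
    missing = REQUIRED_FIELDS - key.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    return problems
```

An empty result means the file is structurally a Service Account key; it does not confirm the key is still active in GCP.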

Permission errors

"User does not have bigquery.datasets.create permission" or "Access Denied"

This means the GCP Service Account used for the connection doesn't have sufficient IAM permissions. Your Service Account needs one of the following combinations:

Option 1 — Predefined roles:

  • BigQuery Data Editor (roles/bigquery.dataEditor)

  • BigQuery Job User (roles/bigquery.jobUser)

Option 2 — Individual permissions:

  • bigquery.tables.create

  • bigquery.tables.updateData

  • bigquery.jobs.create

After updating the roles in GCP, you must generate a new JSON key file and re-upload it to Coupler.io. Updating roles alone without replacing the key will not fix the error.
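If you manage GCP from the gcloud CLI, the role grants and key rotation above might look like the commands below. The project ID and Service Account email are placeholders; substitute your own values:

```shell
PROJECT_ID="your-project-id"                                      # placeholder
SA_EMAIL="coupler-loader@${PROJECT_ID}.iam.gserviceaccount.com"   # placeholder

# Option 1: grant the two predefined roles to the Service Account.
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:${SA_EMAIL}" \
  --role="roles/bigquery.dataEditor"
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:${SA_EMAIL}" \
  --role="roles/bigquery.jobUser"

# Generate a fresh JSON key to re-upload to Coupler.io.
gcloud iam service-accounts keys create new-key.json \
  --iam-account="$SA_EMAIL"
```

Running these requires the gcloud CLI and permission to modify IAM policy on the project.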

Data issues

Column names appear as string_field_0, string_field_1, etc.

Coupler.io uses BigQuery's auto schema detection by default. BigQuery generates generic column names like string_field_0 when it can't infer a proper schema. This happens in two situations:

  1. All columns contain only text values — BigQuery needs at least one boolean, date/time, or numeric field to anchor the schema.

  2. The source dataset is empty — BigQuery has no data to sample.

To fix this:

  • If the dataset was empty, expand your reporting period or remove filters so data is returned, then rerun in Replace mode.

  • If all columns are text, consider defining the schema manually by disabling Autodetect table schema and entering your column definitions as JSON.

See the schema definition guide for the manual schema format.
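For orientation, BigQuery's manual schema format is a JSON array of field objects. A hypothetical all-text table (the field names here are illustrative) would be defined like this:

```json
[
  {"name": "order_id", "type": "STRING", "mode": "NULLABLE"},
  {"name": "customer_email", "type": "STRING", "mode": "NULLABLE"},
  {"name": "notes", "type": "STRING", "mode": "NULLABLE"}
]
```

With an explicit schema like this in place, BigQuery no longer needs to sample the data, so the string_field_N names never appear.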

"BigQuery doesn't allow values of multiple types in a single column"

This error occurs when a column's data type in BigQuery no longer matches the values Coupler.io is trying to load. This is common when:

  • A source API starts returning a field as a different type (for example, integer values arriving as floats: 545652.0 instead of 545652)

  • A column contained mixed types across different runs and BigQuery locked in the wrong type

To resolve:

  1. Disable Autodetect table schema in your BigQuery destination settings.

  2. Define the schema manually, setting the affected column to the correct type (for example, FLOAT instead of INTEGER).

  3. If the BigQuery table was auto-created, you may need to delete it and rerun the data flow in Replace mode to recreate it with the correct schema.
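As an illustration of step 2, if a column (here hypothetically named amount) started arriving as 545652.0, its manual schema entry would declare FLOAT so that both the old integer values and the new float values load cleanly:

```json
[
  {"name": "amount", "type": "FLOAT", "mode": "NULLABLE"}
]
```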

A new column added to the source isn't appearing in BigQuery

When you add a new field to your source (for example, a new Airtable column), you also need to refresh the source schema inside Coupler.io. Open the Source step of your data flow, refresh the field list, confirm the new column is selected, then save and rerun.

If you're using a manually defined schema in BigQuery, you'll also need to add the new column to your schema definition before running — otherwise BigQuery will reject the extra field.

Source row limit reached — only partial data loads into BigQuery

If fewer rows are arriving in BigQuery than exist in your source, the limit is coming from your Coupler.io plan, not from BigQuery.

Upgrade your Coupler.io plan to increase the row limit for metered sources.
