# FAQ

<details>

<summary>Can I load data into a BigQuery table that doesn't exist yet?</summary>

Yes. If you enter a table name that doesn't exist in your BigQuery dataset, Coupler.io will create it automatically on the first run. You don't need to pre-create the table in BigQuery.

</details>

<details>

<summary>How does Coupler.io determine column types when loading data?</summary>

By default, Coupler.io relies on BigQuery's built-in **schema auto-detection**. BigQuery samples up to 500 rows from your data to infer column types. This works well for most cases, but can produce unexpected results when data contains mixed types or when the source returns an empty dataset.

If you need precise control over types, disable auto-detection and define your schema manually in the destination settings.
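To see why sampling can mislead, consider a simplified sketch of sampling-based type inference. This is illustrative only, not Coupler.io's or BigQuery's actual algorithm: the point is that the inferred type depends entirely on which rows the sampler happens to see.

```python
def infer_type(samples):
    """Infer a column type from a list of raw string values (illustrative)."""
    def is_int(v):
        try:
            int(v)
            return True
        except ValueError:
            return False

    def is_float(v):
        try:
            float(v)
            return True
        except ValueError:
            return False

    if all(is_int(v) for v in samples):
        return "INTEGER"
    if all(is_float(v) for v in samples):
        return "FLOAT"
    return "STRING"

# If the sampled rows all look like integers, the column is typed INTEGER...
print(infer_type(["1", "2", "3"]))          # INTEGER
# ...but a float appearing in the sample changes the verdict entirely.
print(infer_type(["1", "2", "545652.0"]))   # FLOAT
```

If the float only appears *after* the sampled rows, the column gets locked in as `INTEGER` and later loads can fail — which is exactly the failure mode a manual schema avoids.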

</details>

<details>

<summary>What's the best write mode for my use case?</summary>

It depends on what you want to do with the data:

* **Replace** — Best for dashboards and reports that always reflect the current state of your source. Every run overwrites the table with fresh data.
* **Append** — Best for building historical logs or tracking changes over time. Each run adds new rows to the table without modifying the data that's already there.

If you're unsure, start with Replace. You can always switch to Append once your pipeline is stable.
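The difference between the two modes can be sketched in a few lines. This is a minimal in-memory simulation of the semantics (the real work happens inside BigQuery, not in your code):

```python
def run_flow(table, new_rows, mode):
    """Simulate one data flow run against an in-memory 'table'."""
    if mode == "replace":
        return list(new_rows)          # overwrite: only fresh data remains
    if mode == "append":
        return table + list(new_rows)  # add new rows, keep history
    raise ValueError(f"unknown mode: {mode}")

table = [{"day": "Mon", "sales": 10}]

print(run_flow(table, [{"day": "Tue", "sales": 12}], "replace"))
# [{'day': 'Tue', 'sales': 12}]
print(run_flow(table, [{"day": "Tue", "sales": 12}], "append"))
# [{'day': 'Mon', 'sales': 10}, {'day': 'Tue', 'sales': 12}]
```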

</details>

<details>

<summary>How do I prevent schema type conflicts from breaking my data flow?</summary>

The most reliable approach is to disable **Autodetect table schema** and define your column types manually. Auto-detection samples a limited number of rows, so a field that's normally an integer but occasionally returns a float (like `545652.0`) can cause BigQuery to lock in the wrong type.

With a manual schema, you control exactly what type each column expects — and BigQuery won't change it between runs. See the [schema definition guide](https://github.com/coupler-io/knowledge-base/blob/main/destinations/how-to-generate-bigquery-schema.md) for instructions.
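As a reference point, a manual schema in BigQuery's JSON schema-file format looks like this (the column names below are hypothetical examples, not fields Coupler.io requires):

```json
[
  {"name": "order_id",   "type": "INTEGER",   "mode": "REQUIRED"},
  {"name": "revenue",    "type": "FLOAT",     "mode": "NULLABLE"},
  {"name": "created_at", "type": "TIMESTAMP", "mode": "NULLABLE"}
]
```

With `revenue` pinned to `FLOAT`, a value like `545652.0` loads cleanly on every run regardless of what the first rows happen to contain.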

{% hint style="info" %}
If your data flow is already broken due to a schema conflict, you may need to delete the existing BigQuery table and rerun in Replace mode to recreate it with the correct schema.
{% endhint %}

</details>

<details>

<summary>What are the rules for BigQuery column names?</summary>

BigQuery column names must follow these rules:

* Only letters (`a–z`, `A–Z`), numbers (`0–9`), and underscores (`_`)
* Must start with a letter or underscore
* Maximum 300 characters
* Cannot begin with the reserved prefixes `_TABLE_`, `_FILE_`, or `_PARTITION_`

If your source has column names with spaces or special characters, use Coupler.io's transformation step to rename them before they reach BigQuery.
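The rules above can be enforced with a small rename helper. Coupler.io's transformation step does this in the UI; the sketch below shows the equivalent logic (the function name and fallback prefix are our own choices, not part of any API):

```python
import re

def to_bigquery_name(name, max_len=300):
    """Convert an arbitrary source column name into a valid BigQuery one."""
    # Replace every disallowed character with an underscore.
    cleaned = re.sub(r"[^A-Za-z0-9_]", "_", name)
    # Names must start with a letter or underscore, not a digit.
    if re.match(r"[0-9]", cleaned):
        cleaned = "_" + cleaned
    # Avoid the reserved prefixes.
    for prefix in ("_TABLE_", "_FILE_", "_PARTITION_"):
        if cleaned.upper().startswith(prefix):
            cleaned = "col" + cleaned
            break
    # Enforce the length limit.
    return cleaned[:max_len]

print(to_bigquery_name("Order ID (net)"))  # Order_ID__net_
print(to_bigquery_name("2024 revenue"))    # _2024_revenue
```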

</details>

<details>

<summary>Can I force all columns to load as STRING type?</summary>

Yes. The most reliable way is to disable auto-detection and set all columns to `STRING` in your manual schema definition. Alternatively, you can structure your source data so that BigQuery's auto-detection infers strings — but manual schema definition is more predictable and easier to maintain.
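An all-`STRING` manual schema in BigQuery's JSON schema-file format might look like this (column names here are hypothetical examples):

```json
[
  {"name": "id",     "type": "STRING"},
  {"name": "amount", "type": "STRING"},
  {"name": "date",   "type": "STRING"}
]
```

Since every value can be represented as a string, this schema never produces type conflicts; you can cast to numeric or date types later in SQL when you query the table.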

</details>

<details>

<summary>Does Coupler.io support multiple sources sending data to the same BigQuery table?</summary>

Yes. A single Coupler.io data flow can include multiple sources. You can use **Append** or **Join** transformations to combine data from different sources before it lands in BigQuery. This lets you merge data from different tools into a single table without writing any code.

</details>

<details>

<summary>Where can I learn more about setting up the GCP Service Account and key file?</summary>

{% hint style="info" %}
Coupler.io has a dedicated guide on generating a Google Cloud JSON key file, including step-by-step instructions for creating a Service Account and assigning the correct roles. Look for it in the BigQuery setup section of the Help Center.
{% endhint %}

</details>
