# Best Practices

## Recommended setup

<table data-card-size="large" data-view="cards"><thead><tr><th></th><th></th></tr></thead><tbody><tr><td><strong>Use Rows for structured data exports</strong></td><td>The Rows entity returns the actual cell data, which makes it the one you'll use for most reporting. Always pair it with the correct doc ID and table ID — pulling from the wrong table is the most common setup mistake.</td></tr><tr><td><strong>Pull Tables first to discover table IDs</strong></td><td>If you're not sure which table ID to use for the Rows entity, run a Docs + Tables data flow first. This gives you a map of all tables in your doc, including their IDs and row counts.</td></tr><tr><td><strong>Use Append to combine data from multiple Coda docs</strong></td><td>If your team uses separate Coda docs for different projects, use Coupler.io's Append transformation to merge row data from multiple docs into a single unified dataset before sending it to your destination.</td></tr></tbody></table>
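If you'd rather look up table IDs programmatically instead of running a Docs + Tables flow, Coda's v1 REST API exposes a tables listing endpoint. A minimal sketch — the doc ID and token are placeholders, and the response parsing assumes the standard `items` array with `id`, `name`, and `rowCount` fields:

```python
import json
import urllib.request

CODA_API_TOKEN = "your-coda-api-token"  # placeholder: generate in your Coda account settings
DOC_ID = "your-doc-id"                  # placeholder: the part after /d/ in the doc URL


def extract_table_info(payload: dict) -> list[dict]:
    """Pull id/name/rowCount from a tables-list API response."""
    return [
        {"id": t["id"], "name": t["name"], "rowCount": t.get("rowCount")}
        for t in payload.get("items", [])
    ]


def list_tables(doc_id: str, token: str) -> list[dict]:
    """Fetch all tables in a doc so you can find the right table ID."""
    req = urllib.request.Request(
        f"https://coda.io/apis/v1/docs/{doc_id}/tables",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_table_info(json.load(resp))
```

Running `list_tables(DOC_ID, CODA_API_TOKEN)` prints every table's ID alongside its display name, which you can then paste into your Rows data flow setup.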

## Data refresh and scheduling

<table data-card-size="large" data-view="cards"><thead><tr><th></th><th></th></tr></thead><tbody><tr><td><strong>Sync Coda sync tables after their refresh window</strong></td><td>If you're pulling data from a Coda sync table (one that ingests external data), schedule your Coupler.io data flow to run after the Coda sync has completed — otherwise you may export stale data.</td></tr><tr><td><strong>Run a manual sync first before scheduling</strong></td><td>Always complete a successful manual run before setting up a schedule. This confirms your doc ID, table ID, and API key are all correct before automated runs begin.</td></tr></tbody></table>

## Performance optimization

<table data-card-size="large" data-view="cards"><thead><tr><th></th><th></th></tr></thead><tbody><tr><td><strong>Avoid syncing the entire doc when you only need one table</strong></td><td>Each entity type is a separate data flow. Only pull the entities you actually need — syncing Rows, Tables, Pages, and Docs all at once for every run adds unnecessary API calls and slows things down.</td></tr><tr><td><strong>Use Aggregate to reduce row volume before sending to a destination</strong></td><td>If you're exporting a large Coda table just to count or sum values, apply Coupler.io's Aggregate transformation before the destination step. This keeps your spreadsheet or dashboard clean and reduces load on the destination.</td></tr></tbody></table>
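To illustrate what an Aggregate step buys you, here is the kind of reduction it performs, sketched in plain Python with hypothetical column names (`team`, `hours`) — Coupler.io applies this inside the data flow, so your destination only receives the summary rows:

```python
from collections import defaultdict


def aggregate_rows(rows: list[dict], group_key: str, sum_key: str) -> dict:
    """Group rows by one column and sum another — the reduction an
    Aggregate transformation applies before data reaches the destination."""
    totals: dict = defaultdict(float)
    for row in rows:
        totals[row[group_key]] += row[sum_key]
    return dict(totals)
```

Sending three summary rows instead of thousands of raw rows keeps the destination sheet responsive and avoids hitting spreadsheet cell limits.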

## Common pitfalls

{% hint style="danger" %}
**Don't reuse a single API key across multiple teams or integrations.** If that key is rotated or revoked, all connected data flows will break at once. Create a dedicated API key in Coda for Coupler.io and label it clearly in your Coda account settings.
{% endhint %}

{% columns %}
{% column %}
**Do**

* Generate a dedicated Coda API key for Coupler.io
* Verify doc and table IDs before setting up a Rows data flow
* Use the Tables entity to audit table structure before exporting rows
* Join Rows data with external sources in Coupler.io before sending to BigQuery
{% endcolumn %}

{% column %}
**Don't**

* Assume column names in the export match Coda display names — they may be column IDs
* Pull all entities in a single data flow if you only need row data
* Export from a filtered view when you need the full table dataset
* Set up a schedule before completing a successful manual run
{% endcolumn %}
{% endcolumns %}
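On the column-name pitfall above: when an export arrives keyed by column IDs rather than display names, you can build a lookup from Coda's columns endpoint and rename the keys yourself. A sketch, assuming the v1 API's `items` response shape with `id` and `name` per column — the doc ID, table ID, and token are placeholders:

```python
import json
import urllib.request


def fetch_column_map(doc_id: str, table_id: str, token: str) -> dict:
    """Fetch a {column_id: display_name} mapping for one table."""
    req = urllib.request.Request(
        f"https://coda.io/apis/v1/docs/{doc_id}/tables/{table_id}/columns",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    return {c["id"]: c["name"] for c in payload.get("items", [])}


def rename_columns(rows: list[dict], column_map: dict) -> list[dict]:
    """Swap column-ID keys for display names; unmapped keys pass through."""
    return [{column_map.get(k, k): v for k, v in row.items()} for row in rows]
```

Applying `rename_columns` after the export gives you human-readable headers without touching the Coda doc itself.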
