# Best Practices

## Recommended setup

<table data-card-size="large" data-view="cards"><thead><tr><th></th><th></th></tr></thead><tbody><tr><td><strong>Use an admin API key</strong></td><td>Always connect Coupler.io using the API key of an AgileCRM admin user. This ensures you can access all entities without hitting permission restrictions.</td></tr><tr><td><strong>Test with a manual run first</strong></td><td>Before enabling automatic scheduling, run your data flow manually to verify the data looks correct. Check field names, record counts, and timestamp formats in your destination.</td></tr><tr><td><strong>Convert timestamps at the destination</strong></td><td>AgileCRM returns Unix timestamps in milliseconds. Plan your timestamp conversion in Google Sheets, BigQuery, or your BI tool before building reports on top of the data.</td></tr></tbody></table>
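To see what the conversion involves: AgileCRM's timestamp fields are milliseconds since the Unix epoch, so dividing by 1,000 gives standard epoch seconds. A minimal Python sketch of the arithmetic (the sample value is illustrative):

```python
from datetime import datetime, timezone

def ms_to_iso(ms: int) -> str:
    """Convert a millisecond Unix timestamp to an ISO 8601 UTC string."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc).isoformat()

print(ms_to_iso(1700000000000))  # → 2023-11-14T22:13:20+00:00
```

The same conversion in Google Sheets is `=A1/86400000 + DATE(1970,1,1)` with the cell formatted as a date, and in BigQuery it is `TIMESTAMP_MILLIS(your_column)`.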

## Data refresh and scheduling

<table data-card-size="large" data-view="cards"><thead><tr><th></th><th></th></tr></thead><tbody><tr><td><strong>Match refresh frequency to how often data changes</strong></td><td>High-activity entities like Tickets and Tasks may benefit from more frequent syncs. Slower-moving entities like Companies or Milestones can be synced less often to stay within API rate limits.</td></tr><tr><td><strong>Schedule syncs during off-peak hours</strong></td><td>Running data flows overnight or outside business hours reduces the chance of hitting AgileCRM's API rate limits and avoids impacting your team's CRM performance.</td></tr><tr><td><strong>Don't run multiple AgileCRM data flows simultaneously</strong></td><td>Each data flow makes separate API calls. Running several at once increases your risk of rate-limit errors. Stagger your schedules by at least a few minutes.</td></tr></tbody></table>

## Performance optimization

<table data-card-size="large" data-view="cards"><thead><tr><th></th><th></th></tr></thead><tbody><tr><td><strong>Only import the entities you need</strong></td><td>Each active data flow consumes API quota. If you don't actively use Documents or Ticket Filters in your reporting, skip those data flows to keep things lean.</td></tr><tr><td><strong>Use destination-side filtering for large datasets</strong></td><td>AgileCRM's API doesn't support advanced filtering at the source. Pull all records into your destination and apply filters there (e.g., using BigQuery views or spreadsheet formulas).</td></tr></tbody></table>
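As an illustration of destination-side filtering, the sketch below filters records after a full import. The field names (`name`, `lead_score`) are hypothetical examples, not actual AgileCRM schema; in BigQuery the equivalent would be a view with a `WHERE` clause:

```python
# Hypothetical rows as they might land in your destination after a full import
contacts = [
    {"name": "Ada",   "lead_score": 85},
    {"name": "Grace", "lead_score": 40},
    {"name": "Linus", "lead_score": 72},
]

# Filter after import, since the source API offers no advanced filtering
high_value = [c for c in contacts if c["lead_score"] >= 50]
print([c["name"] for c in high_value])  # → ['Ada', 'Linus']
```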

## Common pitfalls

{% hint style="danger" %}
Never share your AgileCRM API key publicly or commit it to a code repository. Anyone with your API key and domain can access your CRM data. If a key is exposed, regenerate it immediately in AgileCRM's Admin Settings.
{% endhint %}
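Coupler.io stores the key for you, but if you also call the AgileCRM API from your own scripts, a common pattern is to read the key from an environment variable rather than hardcoding it in source files. A minimal sketch (the variable name `AGILECRM_API_KEY` is just a convention, not an official one):

```python
import os

def load_api_key(env_var: str = "AGILECRM_API_KEY") -> str:
    """Fetch the API key from the environment, failing loudly if it is missing."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var}; never hardcode the key in source files")
    return key
```

Keeping the key out of source files also means a leaked repository never forces a key rotation.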

{% columns %}
{% column %}
**Do**

* Use an admin user's API key for full access
* Convert Unix timestamps before building reports
* Stagger sync schedules to avoid rate limits
* Test with a manual run before enabling automation
* Keep one importer per entity for easier maintenance
{% endcolumn %}

{% column %}
**Don't**

* Use a restricted or read-only user's API key
* Assume timestamps are human-readable out of the box
* Run all your data flows at the same scheduled time
* Build dashboards directly on top of unvalidated imports
* Import entities you don't need just to have the data
{% endcolumn %}
{% endcolumns %}
