Best Practices
Recommended setup
Use an admin API key
Always connect Coupler.io using the API key of an AgileCRM admin user. This ensures you can access all entities without hitting permission restrictions.
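AgileCRM's REST API authenticates with HTTP Basic auth, using the user's email as the username and their API key as the password. The sketch below builds that header without sending a request; the email, key, and domain shown are placeholders, not real credentials.

```python
import base64

def agilecrm_auth_header(email: str, api_key: str) -> dict:
    """Build the HTTP Basic auth header AgileCRM's REST API expects:
    the admin user's email as username, their API key as password."""
    token = base64.b64encode(f"{email}:{api_key}".encode()).decode()
    return {"Authorization": f"Basic {token}", "Accept": "application/json"}

# Placeholder values for illustration only -- never hardcode real keys.
headers = agilecrm_auth_header("admin@example.com", "your-api-key")
# A quick permission check could then GET
# https://<your-domain>.agilecrm.com/dev/api/contacts with these headers.
```

If a request with a restricted user's key returns fewer entities than expected, switching to an admin key is usually the fix.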
Test with a manual run first
Before enabling automatic scheduling, run your data flow manually to verify the data looks correct. Check field names, record counts, and timestamp formats in your destination.
Convert timestamps at the destination
AgileCRM returns Unix timestamps in milliseconds. Plan your timestamp conversion in Google Sheets, BigQuery, or your BI tool before building reports on top of the data.
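As a sketch of that conversion step, assuming millisecond timestamps as described above:

```python
from datetime import datetime, timezone

def from_agile_ts(ms: int) -> datetime:
    """Convert a Unix timestamp in milliseconds to an aware UTC datetime."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

print(from_agile_ts(1700000000000).isoformat())
# 2023-11-14T22:13:20+00:00
```

In Google Sheets, the built-in EPOCHTODATE function does the same job; with the unit argument set to 2 it treats the input as milliseconds, e.g. EPOCHTODATE(A2, 2).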
Data refresh and scheduling
Match refresh frequency to how often data changes
High-activity entities like Tickets and Tasks may benefit from more frequent syncs. Slower-moving entities like Companies or Milestones can be synced less often to stay within API rate limits.
Schedule syncs during off-peak hours
Running data flows overnight or outside business hours reduces the chance of hitting AgileCRM's API rate limits and avoids impacting your team's CRM performance.
Don't run multiple AgileCRM data flows simultaneously
Each data flow makes separate API calls. Running several at once increases your risk of rate-limit errors. Stagger your schedules by at least a few minutes.
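The staggering idea can be sketched as assigning each flow a fixed offset from a base start time. The flow names and the 10-minute gap below are illustrative choices, not Coupler.io settings.

```python
from datetime import datetime, timedelta

def staggered_starts(base: datetime, flows: list[str],
                     gap_minutes: int = 10) -> dict:
    """Offset each data flow's start time from the previous one,
    so no two flows hit the AgileCRM API at the same moment."""
    return {name: base + timedelta(minutes=i * gap_minutes)
            for i, name in enumerate(flows)}

starts = staggered_starts(datetime(2024, 1, 1, 2, 0),
                          ["Contacts", "Deals", "Tickets"])
for name, t in starts.items():
    print(name, t.strftime("%H:%M"))
# Contacts 02:00, Deals 02:10, Tickets 02:20
```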
Performance optimization
Only import the entities you need
Each active data flow consumes API quota. If you don't actively use Documents or Ticket Filters in your reporting, skip those data flows to keep things lean.
Use destination-side filtering for large datasets
AgileCRM's API doesn't support advanced filtering at the source. Pull all records into your destination and apply filters there (e.g., using BigQuery views or spreadsheet formulas).
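A minimal sketch of destination-side filtering, applied after the full import. The field names and cutoff are assumptions for illustration, not the connector's actual schema.

```python
# Records as they might land in the destination after a full import.
records = [
    {"name": "Acme",   "updated_time": 1700000000000, "tags": ["customer"]},
    {"name": "Globex", "updated_time": 1600000000000, "tags": ["lead"]},
]

CUTOFF_MS = 1650000000000  # keep only recently updated records

# Filter in the destination, since the source API can't do this for us.
recent_customers = [
    r for r in records
    if r["updated_time"] >= CUTOFF_MS and "customer" in r["tags"]
]
print([r["name"] for r in recent_customers])  # ['Acme']
```

The same logic maps directly onto a BigQuery view's WHERE clause or a spreadsheet FILTER formula.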
Common pitfalls
Keep your API key secret
Never share your AgileCRM API key publicly or commit it to a code repository. Anyone with your API key and domain can access your CRM data. If a key is exposed, regenerate it immediately in AgileCRM's Admin Settings.
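One common way to keep the key out of source control is to read it from an environment variable. The variable name AGILE_API_KEY below is an assumed convention, not something AgileCRM or Coupler.io requires.

```python
import os

def load_api_key(var: str = "AGILE_API_KEY") -> str:
    """Fetch the AgileCRM API key from an environment variable so it
    never appears in scripts or a code repository."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Environment variable {var} is not set")
    return key
```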
Do
Use an admin user's API key for full access
Convert Unix timestamps before building reports
Stagger sync schedules to avoid rate limits
Test with a manual run before enabling automation
Keep one importer per entity for easier maintenance
Don't
Use a restricted or read-only user's API key
Assume timestamps are human-readable out of the box
Run all your data flows at the same scheduled time
Build dashboards directly on top of unvalidated imports
Import entities you don't need just to have the data