Best Practices
Recommended setup
Use one entity per data flow for large workspaces
If you have thousands of tasks or many time entries, running Tasks, Time Trackings, and Custom Fields in separate data flows prevents API timeouts and partial syncs.
Always enable "Include closed tasks" for historical reporting
Closed tasks are excluded by default. If you're building completion rate or sprint velocity reports, you need historical data — turn this on in Advanced settings.
Use Tasks + Time Trackings with a Join transformation
Join on task_id to combine task metadata (name, status, assignee) with actual time logged. This is the foundation of most ClickUp productivity and billing reports.
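The join described above can be sketched with pandas. The column names (task_id, duration_ms) and sample rows are illustrative assumptions, not Coupler.io's exact output schema:

```python
import pandas as pd

# Hypothetical rows from the Tasks and Time Trackings data flows.
tasks = pd.DataFrame({
    "task_id": ["t1", "t2"],
    "name": ["Design review", "Bug fix"],
    "status": ["open", "closed"],
    "assignee": ["ana", "ben"],
})
time_entries = pd.DataFrame({
    "task_id": ["t1", "t1", "t2"],
    "duration_ms": [3_600_000, 1_800_000, 7_200_000],
})

# Left join on task_id keeps tasks that have no time logged yet.
report = tasks.merge(time_entries, on="task_id", how="left")
print(report)
```

A left join (rather than inner) is usually the safer default here, so tasks without time entries still appear in productivity reports.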
Store Tasks and Time Trackings in separate tables in BigQuery
These are structurally different datasets. Appending them into one table will create schema conflicts. Use separate destinations or separate sheets/tables for each entity.
Data refresh and scheduling
Run a successful manual sync before scheduling
Coupler.io requires a completed manual run before you can activate a schedule. Use the first run to verify your data looks correct before automating.
Stagger schedules across data flows
If you have multiple ClickUp data flows (e.g., Tasks, Time Trackings, Goals), offset their schedule times by 10–15 minutes. This avoids simultaneous API calls that can trigger rate limit errors.
Performance optimization
Use Aggregate transformation for summary reports
Instead of loading raw task data into a spreadsheet and summarizing manually, use Coupler.io's Aggregate transformation to pre-compute totals (e.g., tasks by status, hours by user) before they land in your destination.
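The kind of pre-computation an Aggregate transformation performs can be sketched with a pandas groupby. The data and column names are invented for illustration:

```python
import pandas as pd

# Illustrative raw task rows; an Aggregate transformation would compute
# the same counts before the data lands in your destination.
tasks = pd.DataFrame({
    "task_id": ["t1", "t2", "t3", "t4"],
    "status": ["open", "closed", "open", "in progress"],
})

# Tasks by status: one summary row per status value.
summary = tasks.groupby("status", as_index=False).agg(
    task_count=("task_id", "count")
)
print(summary)
```

The same pattern applies to hours by user: group on the user column and sum a duration column instead of counting task IDs.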
Use Append for multi-workspace reporting
If your team uses multiple ClickUp workspaces, connect each with a separate API token and use the Append transformation to combine them into a single unified dataset.
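Conceptually, Append is a row-wise union of datasets with the same columns. A minimal sketch with two hypothetical workspace exports:

```python
import pandas as pd

# Two hypothetical workspace exports sharing an identical column layout.
ws_alpha = pd.DataFrame({"workspace": ["Alpha", "Alpha"], "task_id": ["a1", "a2"]})
ws_beta = pd.DataFrame({"workspace": ["Beta"], "task_id": ["b1"]})

# Stack the rows into one unified dataset, as the Append transformation does.
combined = pd.concat([ws_alpha, ws_beta], ignore_index=True)
print(combined)
```

Keeping a workspace column in each source, as above, lets you tell the combined rows apart after the append.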
Common pitfalls
Don't use a shared or admin token if it belongs to a user with restricted access to certain spaces. The data export will silently exclude anything that token can't see — with no error message.
Do
Use the personal API token of a user with full workspace access
Enable Include closed tasks for any historical or retrospective analysis
Split large entity sets across multiple data flows
Convert duration (milliseconds) to hours in your destination or transformation layer
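The millisecond-to-hour conversion from the list above is a single division. A minimal helper (the function name is ours, not part of any API):

```python
MS_PER_HOUR = 60 * 60 * 1000  # 3,600,000 milliseconds in one hour

def ms_to_hours(duration_ms: int) -> float:
    """Convert a duration in milliseconds to decimal hours."""
    return duration_ms / MS_PER_HOUR

print(ms_to_hours(5_400_000))  # 1.5 hours
```

The same formula works as a spreadsheet or SQL expression in your destination: divide the duration column by 3,600,000.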
Don't
Run 8+ entities simultaneously in one data flow — it overloads the API
Expect the Lists entity to return tasks — add the Tasks entity separately
Assume deactivated users' time entries will appear in exports
Append Tasks and Time Trackings into the same table — their schemas are incompatible