# Best Practices

## Recommended setup

<table data-card-size="large" data-view="cards"><thead><tr><th></th><th></th></tr></thead><tbody><tr><td><strong>Start with Pipelines + Insights metrics</strong></td><td>These two entities give you the most useful CI/CD picture immediately. Use Join to link them on project slug for a combined view of volume and performance.</td></tr><tr><td><strong>Use a dedicated API token</strong></td><td>Create a CircleCI personal API token specifically for Coupler.io rather than reusing a shared or personal token. This makes it easy to revoke access without breaking other integrations.</td></tr><tr><td><strong>Set a realistic start date</strong></td><td>For the initial sync, use a start date 30–90 days back rather than your full history. CircleCI's API pagination means large historical loads are slow, and you can always extend the window later.</td></tr><tr><td><strong>Combine multiple projects with Append</strong></td><td>If you manage several CircleCI projects, add multiple sources to a single data flow and use the Append transformation to unify pipeline or job data across all of them in one destination table.</td></tr></tbody></table>
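The Join on project slug happens inside Coupler.io's transformation UI, but the logic it applies is easy to picture. A minimal sketch, assuming hypothetical field names (`project_slug`, `pipeline_count`, `success_rate`) for the two entities:

```python
# Illustrative sketch of the Join logic: attach Insights performance
# metrics to pipeline volume rows via the shared project slug.
# Field names are assumptions -- Coupler.io performs this join for you.
pipelines = [
    {"project_slug": "gh/myorg/api", "pipeline_count": 120},
    {"project_slug": "gh/myorg/web", "pipeline_count": 85},
]
insights = [
    {"project_slug": "gh/myorg/api", "success_rate": 0.94},
    {"project_slug": "gh/myorg/web", "success_rate": 0.88},
]

# Index Insights rows by slug, then merge metrics into each pipeline row.
by_slug = {row["project_slug"]: row for row in insights}
combined = [
    {**p, "success_rate": by_slug[p["project_slug"]]["success_rate"]}
    for p in pipelines
    if p["project_slug"] in by_slug
]
```

The inner-join behavior (rows without a matching slug are dropped) mirrors what you get when both entities cover the same projects.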

## Data refresh and scheduling

<table data-card-size="large" data-view="cards"><thead><tr><th></th><th></th></tr></thead><tbody><tr><td><strong>Schedule based on your pipeline cadence</strong></td><td>If your team pushes code multiple times a day, an hourly or every-few-hours refresh keeps your dashboards current. For slower release cycles, daily is usually enough.</td></tr><tr><td><strong>Run Insights separately from pipeline data</strong></td><td>Insights metrics aggregate data on CircleCI's side and update less frequently than raw pipeline records. You can schedule Insights entities less often (e.g., daily) to avoid unnecessary API calls.</td></tr></tbody></table>

## Performance optimization

<table data-card-size="large" data-view="cards"><thead><tr><th></th><th></th></tr></thead><tbody><tr><td><strong>Scope to specific projects and workflows</strong></td><td>Use the Project ID and Workflow ID parameters to narrow your data pull. Fetching org-wide data across all workflows is much slower and returns more noise than you usually need.</td></tr><tr><td><strong>Use Aggregate for duration analysis</strong></td><td>Rather than exporting every raw job record, use Coupler.io's Aggregate transformation to pre-calculate average and p95 job durations grouped by workflow name or branch. This keeps your destination tables lean.</td></tr></tbody></table>
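The Aggregate transformation reduces raw job rows to one summary row per group. As a rough sketch of the equivalent computation (the `workflow` and `duration_s` field names are illustrative, and this uses the nearest-rank method for p95):

```python
import math
import statistics

# Hypothetical raw job records, as they might arrive from the Jobs
# entity before the Aggregate transformation is applied.
jobs = [
    {"workflow": "build", "duration_s": 120},
    {"workflow": "build", "duration_s": 150},
    {"workflow": "build", "duration_s": 300},
    {"workflow": "deploy", "duration_s": 60},
    {"workflow": "deploy", "duration_s": 90},
]

def p95(values):
    """Nearest-rank 95th percentile."""
    ordered = sorted(values)
    rank = max(0, math.ceil(0.95 * len(ordered)) - 1)
    return ordered[rank]

# Group durations by workflow, then reduce each group to avg and p95.
grouped = {}
for job in jobs:
    grouped.setdefault(job["workflow"], []).append(job["duration_s"])

summary = {
    wf: {"avg_s": statistics.mean(vals), "p95_s": p95(vals)}
    for wf, vals in grouped.items()
}
```

Shipping only `summary` to the destination instead of every raw job row is what keeps the tables lean.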

## Common pitfalls

{% hint style="danger" %}
**Don't confuse Project ID with project slug.** Some entities (like Specific projects) need the UUID-format Project ID from your project settings URL, while others accept the slug (e.g., `gh/myorg/myrepo`). Using the wrong format will return an error or empty data.
{% endhint %}
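A quick way to internalize the difference: a Project ID is a UUID, while a slug is a three-part path. This heuristic check is purely illustrative (it is not part of Coupler.io, and the set of slug prefixes is an assumption based on common CircleCI VCS identifiers):

```python
import re

# Illustrative discriminator: UUID-format Project ID vs. project slug.
# Slug prefixes ("gh", "bb", "circleci") are assumptions for the sketch.
UUID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$",
    re.IGNORECASE,
)
SLUG_RE = re.compile(r"^(gh|bb|circleci)/[^/]+/[^/]+$")

def identifier_kind(value: str) -> str:
    """Classify a CircleCI project identifier by its shape."""
    if UUID_RE.match(value):
        return "project_id"
    if SLUG_RE.match(value):
        return "project_slug"
    return "unknown"
```

If an entity rejects the value you pasted, checking which shape it is usually resolves the error faster than re-reading the settings page.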

{% columns %}
{% column %}
**Do**

* Specify a project ID or workflow ID when querying Jobs or Workflow jobs
* Check that your API token belongs to a user with access to the target project
* Use the Workflow jobs entity to auto-populate job numbers before pulling the Jobs entity
  {% endcolumn %}

{% column %}
**Don't**

* Leave the start date blank on large orgs — it will try to pull all history
* Share your personal API token across team members or store it in a shared doc
* Query Contexts data unless your token belongs to an org admin account
  {% endcolumn %}
  {% endcolumns %}
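To verify token access before setting up a flow, you can hit CircleCI's v2 API directly. A minimal sketch using the `GET /project/{project-slug}` endpoint with the `Circle-Token` header (the exact error behavior, typically a 404 for projects the token cannot see, may vary):

```python
import urllib.error
import urllib.request

API_BASE = "https://circleci.com/api/v2"

def project_request(slug: str, token: str) -> urllib.request.Request:
    """Build the authenticated request without sending it."""
    return urllib.request.Request(
        f"{API_BASE}/project/{slug}",
        headers={"Circle-Token": token},
    )

def token_can_see_project(slug: str, token: str) -> bool:
    """Return True if the token can read the project's metadata."""
    try:
        with urllib.request.urlopen(project_request(slug, token)) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # CircleCI typically answers 404 (not 403) for projects the
        # token cannot access, so any HTTP error means "no access".
        return False
```

Running `token_can_see_project("gh/myorg/myrepo", token)` once per project before configuring sources catches permission problems early, instead of surfacing them as empty syncs later.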
