Best Practices

Join Applications with Candidates and Jobs

These three entities together give you the most complete pipeline view. Use the Join transformation in Coupler.io to link them on Candidate ID and Job ID — this way every row has full context without needing separate lookups.
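Coupler.io performs this join for you, but the underlying logic is easy to see in plain Python. The sketch below is illustrative only — the field names (`candidate_id`, `job_id`, `stage`, etc.) are hypothetical examples, not Ashby's actual schema:

```python
# Illustrative sample rows; field names are hypothetical, not Ashby's real schema.
applications = [
    {"application_id": "a1", "candidate_id": "c1", "job_id": "j1", "stage": "Interview"},
    {"application_id": "a2", "candidate_id": "c2", "job_id": "j1", "stage": "Offer"},
]
candidates = {"c1": {"name": "Ada"}, "c2": {"name": "Grace"}}
jobs = {"j1": {"title": "Data Engineer"}}

def join_pipeline(applications, candidates, jobs):
    """Attach candidate and job context to each application row."""
    rows = []
    for app in applications:
        row = dict(app)
        # Look up the matching candidate and job by ID and merge their fields in.
        row.update(candidates.get(app["candidate_id"], {}))
        row.update(jobs.get(app["job_id"], {}))
        rows.append(row)
    return rows

joined = join_pipeline(applications, candidates, jobs)
```

Each row in `joined` now carries the application stage, the candidate's name, and the job title together — the "full context" the Join transformation gives you without separate lookups.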

Use Archive reasons and Sources as lookup tables

Export Archive reasons and Sources as separate entities, then join them to your Applications data. They contain the human-readable labels that make rejection analysis and source-of-hire reporting meaningful.
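A lookup-table join is just an ID-to-label mapping. As a rough sketch of the idea (field names and the `"Active"` fallback are assumptions for illustration, not Ashby defaults):

```python
# Hypothetical lookup table: archive reason ID -> human-readable label.
archive_reasons = {"r1": "Position filled", "r2": "Declined offer"}

applications = [
    {"application_id": "a1", "archive_reason_id": "r2"},
    {"application_id": "a2", "archive_reason_id": None},  # still in the pipeline
]

def add_labels(rows, lookup, id_field, label_field, default):
    """Replace opaque IDs with readable labels for reporting."""
    out = []
    for row in rows:
        row = dict(row)
        row[label_field] = lookup.get(row.get(id_field), default)
        out.append(row)
    return out

labeled = add_labels(applications, archive_reasons, "archive_reason_id",
                     "archive_reason", default="Active")
```

With labels attached, a pivot on `archive_reason` reads as "Declined offer" rather than "r2", which is what makes rejection analysis legible to stakeholders.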

Set a start date for ongoing syncs

Once you've done an initial full export, use the start date parameter to limit future syncs to recent records. This keeps your data flow fast and avoids re-processing records that haven't changed.
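Conceptually, the start date parameter acts as a cutoff filter on a timestamp field. A minimal sketch of that behavior, assuming an ISO-formatted `updated_at` field (hypothetical name):

```python
from datetime import date

# Illustrative records; "updated_at" is an assumed field name.
records = [
    {"candidate_id": "c1", "updated_at": "2024-01-05"},
    {"candidate_id": "c2", "updated_at": "2024-03-12"},
]

def updated_since(records, start_date):
    """Keep only records modified on or after the cutoff date."""
    cutoff = date.fromisoformat(start_date)
    return [r for r in records if date.fromisoformat(r["updated_at"]) >= cutoff]

recent = updated_since(records, "2024-02-01")
```

Only the March record survives the filter, so each sync moves far less data than a full re-export.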

Data refresh and scheduling

Match refresh frequency to hiring pace

For active hiring campaigns, daily or twice-daily refreshes keep your pipeline data current. For slower periods or headcount planning reports, weekly syncs are usually enough.

Use Append for interview and offer history

If you want to track changes over time — like offer status changes or stage progressions — use the Append transformation to accumulate snapshots rather than overwriting with the latest state.
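Append-mode accumulation amounts to stamping each sync's rows with a snapshot date and keeping every version. A small sketch of that pattern (field names are illustrative):

```python
# Accumulated history table: one entry per application per snapshot.
history = []

def append_snapshot(history, rows, snapshot_date):
    """Stamp each row with the sync date and add it without overwriting."""
    for row in rows:
        entry = dict(row)
        entry["snapshot_date"] = snapshot_date
        history.append(entry)

# Two syncs a week apart capture the stage change instead of losing it.
append_snapshot(history, [{"application_id": "a1", "stage": "Interview"}], "2024-03-01")
append_snapshot(history, [{"application_id": "a1", "stage": "Offer"}], "2024-03-08")
```

Because both snapshots are retained, you can later compute time-in-stage or offer-status timelines, which a latest-state overwrite would make impossible.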

Performance optimization

Pull only the entities you actually need in each run

Entities like Candidates and Applications can be large. Only add entities you actually use in your reports — pulling Feedback form definitions or Locations every hour adds overhead without benefit if those rarely change.

Use AI destinations for unstructured analysis

If you want to summarize candidate feedback, identify patterns in archive reasons, or draft hiring summaries, send your Ashby data to ChatGPT, Claude, or Gemini via Coupler.io's AI destinations instead of building complex formulas.

Common pitfalls

Do

  • Join Applications to Candidates and Jobs for complete pipeline rows

  • Use Archive reasons and Sources entities as lookup tables

  • Set a start date after your initial full export to limit re-processing

  • Test with a single entity first before building a multi-entity data flow

Don't

  • Export all entities on a high-frequency schedule if most of them rarely change

  • Overwrite historical data if you need to track stage or offer status changes over time

  • Rely on Coupler.io record counts to match Ashby's built-in reports without checking Ashby's filter defaults first
