FAQ
Where do I find my Apify API token?
Go to your Apify Console and navigate to Settings > Integrations. Your API token is listed there. Copy the full token and paste it into the Coupler.io source settings when setting up your data flow.
Where do I find the Dataset ID?
In Apify Console, go to Storage > Datasets. Click on the dataset you want to export — the Dataset ID appears in the URL and in the dataset detail panel. It looks like a short alphanumeric string (e.g., RpXUj3dN9BsLmFv4).
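Since the Dataset ID is the last segment of the console URL, you can pull it out programmatically. A minimal sketch, assuming the URL shape shown in the console (`console.apify.com/storage/datasets/<id>`):

```python
# Hypothetical helper: extract the Dataset ID from an Apify Console URL.
# The path shape is an assumption based on what the console displays.
from urllib.parse import urlparse

def dataset_id_from_url(console_url: str) -> str:
    """Return the last path segment, which holds the Dataset ID."""
    path = urlparse(console_url).path
    return path.rstrip("/").rsplit("/", 1)[-1]

print(dataset_id_from_url("https://console.apify.com/storage/datasets/RpXUj3dN9BsLmFv4"))
# RpXUj3dN9BsLmFv4
```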
Which entity should I use — Item collections or Item collection website content crawlers?
If your actor is Apify's official Website Content Crawler (or a compatible variant), use Item collection website content crawlers — it maps fields like url, title, text, and markdown automatically. For any other actor, use Item collections to get all raw fields as the actor produced them.
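To illustrate the difference between the two entities, here is a conceptual sketch. The `url`, `title`, `text`, and `markdown` fields come from the answer above; the sample item and its extra field are made up:

```python
# Illustrative only: a raw item as an actor might produce it.
# Fields beyond url/title/text/markdown are hypothetical.
raw_item = {
    "url": "https://example.com/pricing",
    "title": "Pricing",
    "text": "Plans start at $10/month.",
    "markdown": "# Pricing\n\nPlans start at $10/month.",
    "crawl_depth": 1,  # actor-specific extra field
}

# "Item collection website content crawlers" maps the known fields:
mapped = {k: raw_item[k] for k in ("url", "title", "text", "markdown")}

# "Item collections" would hand you raw_item as-is, extras included.
print(sorted(mapped))
# ['markdown', 'text', 'title', 'url']
```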
Can I export data from multiple Apify actor runs at once?
Yes. Each actor run produces a separate dataset with its own ID. You can create multiple sources within a single Coupler.io data flow — one per Dataset ID — and use the Append transformation to combine them into a single table. This is useful for tracking data across weekly scraping runs, for example.
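Conceptually, the Append transformation stacks rows from each run's dataset into one table. A sketch with made-up Dataset IDs and rows:

```python
# Sketch of what Append does: combine rows from several datasets
# (one per actor run) into a single table. IDs and rows are invented.
runs = {
    "abc123": [{"url": "https://example.com/a", "title": "A"}],
    "def456": [{"url": "https://example.com/b", "title": "B"}],
}

combined = []
for dataset_id, rows in runs.items():
    for row in rows:
        # Tagging each row with its source dataset makes weekly runs
        # easy to tell apart in the merged table.
        combined.append({**row, "dataset_id": dataset_id})

print(len(combined))  # 2 -- one table with rows from both runs
```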
Can I send Apify scraped content to an AI tool like ChatGPT or Claude?
Yes. Coupler.io supports AI destinations including ChatGPT, Claude, Gemini, Perplexity, Cursor, and OpenClaw. The Item collection website content crawlers entity is especially useful here because it provides a markdown field with clean, formatted page content that AI models handle well.
My actor ran again and the data hasn't updated — why?
Each Apify actor run creates a brand new dataset with a new ID. If your data flow is still pointed at the old Dataset ID, it will return data from the old run. You need to update the Dataset ID in your Coupler.io source settings to point to the new dataset after each run.
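If you want to look up the newest Dataset ID programmatically rather than copying it from the console, Apify's API exposes a "last run" endpoint for an actor. The endpoint shape below is based on Apify's v2 API; verify it against the current API reference before relying on it:

```python
# Sketch: fetch the actor's most recent run and read its default
# dataset ID. Endpoint shape assumed from Apify's v2 API docs.
import json
from urllib.request import urlopen

API_BASE = "https://api.apify.com/v2"

def last_run_url(actor_id: str, token: str) -> str:
    """Build the 'last run' endpoint URL for an actor."""
    return f"{API_BASE}/acts/{actor_id}/runs/last?token={token}"

def latest_dataset_id(actor_id: str, token: str) -> str:
    # Network call; run with a real actor ID and API token.
    with urlopen(last_run_url(actor_id, token)) as resp:
        run = json.load(resp)
    return run["data"]["defaultDatasetId"]
```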
Can I use Coupler.io to monitor all datasets in my Apify account?
Yes. Use the Dataset collections entity, which returns a list of all datasets in your account along with metadata like itemCount, modifiedAt, and actRunId. This is a good way to build a dashboard that tracks which actors are running and how much data they're producing.
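A small sketch of what a monitoring pass over the Dataset collections output might look like. The field names (`itemCount`, `modifiedAt`, `actRunId`) come from the answer above; the sample records are made up:

```python
# Hypothetical sample of Dataset collections output.
datasets = [
    {"id": "abc123", "itemCount": 480, "modifiedAt": "2024-05-01T09:00:00Z", "actRunId": "run1"},
    {"id": "def456", "itemCount": 0,   "modifiedAt": "2024-05-01T10:00:00Z", "actRunId": "run2"},
]

# Flag empty datasets -- often a sign that an actor run failed.
empty = [d["id"] for d in datasets if d["itemCount"] == 0]
total_items = sum(d["itemCount"] for d in datasets)
print(total_items, empty)  # 480 ['def456']
```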
What happens if my Apify dataset is very large?
Coupler.io paginates through large datasets automatically — you don't need to configure anything special. For very large datasets (hundreds of thousands of rows), we recommend using BigQuery as your destination rather than Google Sheets, which has row limits and can slow down significantly with large payloads.
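Pagination here means fetching the dataset in fixed-size pages until a short page signals the end. Coupler.io handles this for you; the loop below just shows the idea against a stubbed fetch function, so no network or real dataset is involved:

```python
# Offset/limit pagination sketch against an in-memory stand-in dataset.
ALL_ROWS = [{"n": i} for i in range(250)]  # pretend dataset

def fetch_page(offset: int, limit: int):
    """Stub standing in for a 'get dataset items' API call."""
    return ALL_ROWS[offset:offset + limit]

def fetch_all(limit: int = 100):
    rows, offset = [], 0
    while True:
        page = fetch_page(offset, limit)
        rows.extend(page)
        if len(page) < limit:  # short page means we've reached the end
            return rows
        offset += limit

print(len(fetch_all()))  # 250
```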
For a full breakdown of available fields and entities, see Data Overview. If you're running into sync problems, check Common Issues for specific fixes.