Best Practices
Recommended setup and configuration
Create one flow per entity
Each 100ms entity (Sessions, Rooms, Recordings, Analytics events) should have its own Coupler.io data flow. Mixing entities into one flow isn't supported — each flow exports a single entity type. Keep flows clearly named so you can identify them at a glance.
Use Sessions as your primary entity
Sessions contains the richest data, including participant information, timestamps, and room references. Start here for usage analysis, then enrich with Recordings or Analytics events as needed.
Set the start date intentionally
Use the date picker to set a start date that aligns with when your 100ms integration went live — or the beginning of the period you want to analyze. Avoid setting it too far back if you have a large session volume, as this increases sync time significantly.
Use the Management Token, not peer tokens
Always authenticate with your 100ms Management Token (found in the 100ms dashboard under Settings → Developer). App tokens or peer tokens don't have the permissions needed to access session and room data via the server-side API.
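As a minimal sketch of what server-side authentication looks like, the snippet below builds a request carrying the Management Token as a bearer token. The exact endpoint path (`/v2/sessions` on `api.100ms.live`) and header shape are assumptions based on common 100ms REST API usage — verify them against the 100ms API reference before relying on this.

```python
# Sketch: authenticating a server-side 100ms API call with a Management Token.
# Endpoint URL and payload shape are assumptions; check the 100ms API docs.
import urllib.request

API_BASE = "https://api.100ms.live/v2"  # assumed server-side API base URL


def sessions_request(management_token: str) -> urllib.request.Request:
    """Build an authenticated request for the Sessions list endpoint."""
    return urllib.request.Request(
        f"{API_BASE}/sessions",
        headers={"Authorization": f"Bearer {management_token}"},
    )


req = sessions_request("YOUR_MANAGEMENT_TOKEN")  # placeholder token
print(req.get_header("Authorization"))  # Bearer YOUR_MANAGEMENT_TOKEN
```

An app token or peer token substituted into the same header would be rejected for session and room endpoints, which is why the Management Token is required here.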
Data refresh and scheduling
Schedule according to reporting cadence
For daily dashboards, a once-daily refresh is sufficient for Sessions, Rooms, and Recordings. Analytics events may warrant more frequent refreshes (every few hours) if you're monitoring engagement in near real time.
Don't over-refresh Active room peers
The Active room peers entity reflects live state and changes every few minutes. Refreshing it too frequently is wasteful unless you're actively monitoring concurrency. For most use cases, skip this entity in scheduled flows and query it on demand.
Performance optimization
Narrow your date range for large workspaces
If you have a high-volume 100ms workspace, exporting all sessions from day one can be very slow. Set your start date to the beginning of the current reporting period (e.g., the first of the current month) and run a one-time historical backfill separately if needed.
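For example, the "first of the current reporting period" start date can be computed like this (a trivial helper, shown only to make the suggested cutoff concrete):

```python
from datetime import date


def reporting_period_start(today: date) -> date:
    """Start date for the current reporting period: first of the month."""
    return today.replace(day=1)


print(reporting_period_start(date(2024, 5, 17)))  # 2024-05-01
```

Use the resulting date in the flow's date picker, and run a separate one-time flow with an earlier start date if you need the historical backfill.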
Use BigQuery or Redshift for large datasets
If you're exporting thousands of sessions or millions of analytics events, route the data to a data warehouse (BigQuery, Redshift) rather than Google Sheets. Sheets has a hard limit of 10 million cells per spreadsheet and can become slow to load well before you reach it.
Dashboard accuracy
Join Sessions and Rooms by room_id
Sessions contain a room_id field that links back to the Rooms entity. If you need to enrich session data with room names or template information, join these two exports on room_id in your destination or BI tool.
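If you're doing the join in code rather than in a BI tool, it amounts to a simple left join keyed on room_id. The sketch below assumes illustrative field names (`id` and `name` on rooms, `session_id` on sessions); only `room_id` is confirmed by the export schema above, so adjust the keys to match your actual columns.

```python
def enrich_sessions(sessions: list[dict], rooms: list[dict]) -> list[dict]:
    """Left-join session rows to room metadata on room_id."""
    rooms_by_id = {room["id"]: room for room in rooms}
    enriched = []
    for session in sessions:
        room = rooms_by_id.get(session["room_id"], {})
        # Missing rooms yield room_name=None rather than dropping the session.
        enriched.append({**session, "room_name": room.get("name")})
    return enriched


rooms = [{"id": "r1", "name": "Daily Standup"}]
sessions = [{"session_id": "s1", "room_id": "r1"}]
print(enrich_sessions(sessions, rooms))
```

In SQL-based destinations the equivalent is a `LEFT JOIN` from the sessions export to the rooms export on `room_id`.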
Don't sum session durations without filtering
If you're calculating total platform usage, filter out very short sessions (under 5–10 seconds) that may represent connection tests or failed joins. Including these can inflate your total session time metric.
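The filtering step above can be sketched as follows. The `duration_seconds` field name is an assumption for illustration (use whatever duration column your Sessions export actually provides), and the 10-second threshold is just the upper end of the range suggested above:

```python
MIN_DURATION_SECONDS = 10  # threshold from the guidance above (5-10 s)


def total_usage_seconds(sessions: list[dict]) -> int:
    """Sum session durations, excluding likely connection tests and failed joins."""
    return sum(
        s["duration_seconds"]
        for s in sessions
        if s["duration_seconds"] >= MIN_DURATION_SECONDS
    )


sessions = [
    {"duration_seconds": 3},     # probable failed join: excluded
    {"duration_seconds": 1800},  # real session: counted
]
print(total_usage_seconds(sessions))  # 1800
```

Tune the threshold to your own traffic: a platform with many quick drop-in calls may need a lower cutoff than one hosting long meetings.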
Do's and Don'ts
Do
Create separate flows for Sessions, Recordings, and Analytics events
Set a meaningful start date aligned to your reporting period
Use BigQuery or Redshift for high-volume analytics event exports
Join Sessions to Rooms on room_id for enriched reporting
Don't
Use Active room peers for historical analysis — it's live state only
Set your start date to the beginning of time for large workspaces
Expect real-time data — allow a few minutes processing time for recent sessions and events
Mix entities in a single flow — each entity requires its own data flow