# Common Issues

## Connection issues

<details>

<summary>Authentication fails with "invalid storage account key"</summary>

Your key is incorrect or has expired. Double-check in the Azure Portal:

1. Go to your storage account > **Access keys**
2. Copy the full **primary key** (not the connection string)
3. Paste it exactly into Coupler.io with no extra spaces

If the key still fails, regenerate it in the portal (this invalidates the old key across all services).

</details>

<details>

<summary>"Container not found" error</summary>

Verify the container name matches exactly (case-sensitive). In the Azure Portal, go to your storage account > **Containers** and copy the exact name. Common mistake: including the storage account name as part of the path (you only need the container name, e.g., `mydata`, not `mystorageaccount/mydata`).

</details>

<details>

<summary>Connection works but no files appear</summary>

Check your glob pattern. If you specified `data/*.csv` but files are in the root, change the pattern to `*.csv`. Use `**/*` to match all files recursively across all folders.

</details>

## Missing data

<details>

<summary>"File not found" after sync starts</summary>

The file path or glob pattern doesn't match any blobs in your container. Test your pattern:

1. In the Azure Portal, use Storage Browser to navigate your container
2. Note the exact folder structure and file names
3. Adjust your glob pattern in Coupler.io to match (e.g., `reports/2024/*.csv`)

Also check file extensions: `*.csv` won't match `data.CSV`, because blob names and glob patterns are case-sensitive.
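
To see the case sensitivity in action, Python's standard `fnmatch.fnmatchcase` behaves the same way; a bracket class such as `[cC]` is one workaround, assuming your pattern engine supports character classes:

```python
from fnmatch import fnmatchcase

# A lowercase pattern misses an uppercase extension.
print(fnmatchcase("data.CSV", "*.csv"))           # False
# A character class covers both cases.
print(fnmatchcase("data.CSV", "*.[cC][sS][vV]"))  # True
```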

</details>

<details>

<summary>Only the first file syncs when using a glob pattern</summary>

Make sure you're using the glob pattern correctly. For multiple files:

* `data/*.csv` — all CSVs in the `data` folder
* `**/*.parquet` — all Parquets in any subfolder
* `reports/202[34]/*.xlsx` — Excel files in 2023 and 2024 folders

If the pattern is correct but only one file syncs, check that other files exist and match the pattern in the portal.
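
If you want to sanity-check a pattern before running a sync, the sketch below models the semantics described above (`*` stays within one folder, `**` crosses folders). It's an approximation of typical glob behavior, not Coupler.io's actual matcher:

```python
import re

def glob_to_regex(pattern: str) -> str:
    # Translate a blob-style glob into a regex:
    #   **     matches across folder boundaries
    #   *      matches within a single path segment
    #   [34]   character classes pass through unchanged
    out, i = [], 0
    while i < len(pattern):
        if pattern[i:i + 2] == "**":
            out.append(".*"); i += 2
        elif pattern[i] == "*":
            out.append("[^/]*"); i += 1
        elif pattern[i] == "[":
            j = pattern.index("]", i) + 1
            out.append(pattern[i:j]); i = j
        else:
            out.append(re.escape(pattern[i])); i += 1
    return "^" + "".join(out) + "$"

def matches(pattern: str, blob_name: str) -> bool:
    return re.match(glob_to_regex(pattern), blob_name) is not None

print(matches("data/*.csv", "data/jan.csv"))       # True
print(matches("data/*.csv", "data/2024/jan.csv"))  # False: * stays in one folder
print(matches("**/*.parquet", "a/b/c.parquet"))    # True
print(matches("reports/202[34]/*.xlsx", "reports/2024/q1.xlsx"))  # True
```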

</details>

<details>

<summary>Data is older than expected</summary>

The start date filters by **file modification time**, not creation time, so an old file that was modified recently will still sync. Conversely, a file last modified before your start date won't sync even if it was created later. Clear the start date to sync all files, or move it to a point before the files you need.
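
The rule can be modeled in a few lines (a hypothetical helper that mirrors the behavior described above, not the connector's implementation):

```python
from datetime import datetime, timezone

def passes_start_date(last_modified: datetime, start_date: datetime) -> bool:
    # The filter compares the blob's *last modified* timestamp, not creation time.
    return last_modified >= start_date

start = datetime(2024, 6, 1, tzinfo=timezone.utc)
# Created years ago but touched after the start date: it WILL sync.
print(passes_start_date(datetime(2024, 6, 20, tzinfo=timezone.utc), start))  # True
# Created recently but last modified in May: it will NOT sync.
print(passes_start_date(datetime(2024, 5, 30, tzinfo=timezone.utc), start))  # False
```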

</details>

## Permission errors

<details>

<summary>"Access denied" or "unauthorized" message</summary>

Coupler.io can't read your container with the credentials you supplied. In the Azure Portal:

1. Confirm you're using a key from **Access keys** (not a SAS token or connection string)
2. Check that the key is for the correct storage account
3. Verify the storage account isn't behind a firewall that blocks your IP

If a firewall is active, add Coupler.io's IP addresses to the Allow list or temporarily disable the firewall for testing.

</details>

<details>

<summary>"Storage account not found" despite correct name</summary>

The account name might be wrong or in a different region/subscription. In the Azure Portal, confirm:

1. You're in the correct subscription (top-right corner)
2. Your storage account exists in that subscription
3. The account name matches exactly (storage account names are always lowercase letters and numbers, so copy it verbatim)

</details>

## Data discrepancies

<details>

<summary>CSV data types are wrong (dates as text, numbers as strings)</summary>

CSV doesn't preserve data types — everything imports as text. If your destination supports it (BigQuery, Snowflake), cast columns after import:

* `CAST(date_column AS DATE)`
* `CAST(amount AS FLOAT64)`

Alternatively, export from Azure as **Parquet** instead of CSV; Parquet preserves native types.
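
The same applies if you post-process in Python rather than SQL: every value a CSV reader returns is a string until you cast it. A stdlib-only sketch:

```python
import csv
import io
from datetime import date

raw = "order_date,amount\n2024-01-15,19.5\n"
rows = list(csv.DictReader(io.StringIO(raw)))

print(type(rows[0]["amount"]))  # <class 'str'>: CSV carries no type information

typed = [
    {"order_date": date.fromisoformat(r["order_date"]), "amount": float(r["amount"])}
    for r in rows
]
print(typed[0]["amount"] * 2)   # 39.0: now a real number
```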

</details>

<details>

<summary>JSONL import truncates or fails on complex nested objects</summary>

Ensure each line in your JSONL file is a **complete, valid JSON object**. Common issues:

* Pretty-printed JSON (multi-line objects) — use a single-line formatter
* Missing commas or quotes
* Unescaped special characters

Test your JSONL file with a JSON validator before uploading to Azure.
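
Each of these can be caught before upload with Python's standard `json` module; `validate_jsonl` below is a hypothetical helper, not a Coupler.io feature:

```python
import json

def validate_jsonl(text: str) -> list[str]:
    """Return one error message per line that isn't a single-line JSON object."""
    errors = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if not line.strip():
            continue  # skip blank lines; check whether your importer tolerates them
        try:
            obj = json.loads(line)
        except json.JSONDecodeError as exc:
            errors.append(f"line {lineno}: {exc.msg}")
            continue
        if not isinstance(obj, dict):
            errors.append(f"line {lineno}: valid JSON but not an object")
    return errors

sample = '{"id": 1, "name": "ok"}\n{"id": 2, "name": }\n'
print(validate_jsonl(sample))  # reports an error on line 2
```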

</details>

<details>

<summary>Excel import includes extra blank rows or headers appear twice</summary>

Excel sheets sometimes have hidden rows or multiple header rows. In your source file:

1. Remove blank rows at the top
2. Ensure only one header row
3. Unhide any hidden rows (select all rows, right-click, **Unhide**), then delete the ones you don't need

Re-export and re-upload to Azure before syncing.

</details>

## Rate limits

<details>

<summary>Sync is slow or times out on large files</summary>

Azure Blob Storage has throughput limits based on your tier (standard, premium, etc.). To optimize:

* Split large files into smaller chunks (e.g., monthly exports instead of yearly)
* Use Parquet instead of CSV for faster parsing
* Schedule syncs during off-peak hours
* Check your storage account's ingress/egress limits in the Azure Portal under **Metrics**
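
The first suggestion can be sketched with the standard library: splitting one large CSV into monthly chunks keyed on an ISO date column (`split_by_month` is illustrative, not a built-in tool):

```python
import csv
import io
from collections import defaultdict

def split_by_month(csv_text: str, date_column: str) -> dict[str, str]:
    """Split one large CSV into per-month CSVs, keyed by 'YYYY-MM'."""
    reader = csv.DictReader(io.StringIO(csv_text))
    buckets: dict[str, list[dict]] = defaultdict(list)
    for row in reader:
        # Assumes ISO dates, so the first 7 characters are 'YYYY-MM'.
        buckets[row[date_column][:7]].append(row)
    chunks = {}
    for month, rows in buckets.items():
        out = io.StringIO()
        writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
        writer.writeheader()  # each chunk keeps the original header row
        writer.writerows(rows)
        chunks[month] = out.getvalue()
    return chunks
```

Each chunk can then be uploaded as its own blob (e.g., `sales-2024-01.csv`), so a failed or slow sync only has to reprocess one month.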

</details>
