# Run Lifecycle & Troubleshooting
Every prompt execution produces a run. Runs store the raw AI response, analysis output, citations, and diagnostic metadata that help you understand what happened behind the scenes.
## Statuses
| Status | Meaning |
|---|---|
| Queued | Run is scheduled and will start shortly |
| Running | Chatobserver is currently collecting answers |
| Succeeded | Responses were captured and processed |
| Failed | The run ended with an error |
## Analysis status
After a successful run, Chatobserver executes enrichment jobs (ranking, sentiment, etc.). Analysis statuses include:
- Pending – Enrichment job scheduled
- Complete – Structured data ready
- Failed – Enrichment could not finish; the raw markdown is still available
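
If you consume run data programmatically, it can help to model the two status sets explicitly. A minimal sketch in Python, assuming the API serializes statuses exactly as the labels above (the serialized values are an assumption, not a documented schema):

```python
from enum import Enum

class RunStatus(str, Enum):
    QUEUED = "Queued"
    RUNNING = "Running"
    SUCCEEDED = "Succeeded"
    FAILED = "Failed"

class AnalysisStatus(str, Enum):
    PENDING = "Pending"
    COMPLETE = "Complete"
    FAILED = "Failed"

def is_terminal(status: RunStatus) -> bool:
    """A run stops changing once it has succeeded or failed."""
    return status in (RunStatus.SUCCEEDED, RunStatus.FAILED)

def analysis_ready(run: RunStatus, analysis: AnalysisStatus) -> bool:
    """Structured data is only available after a successful run whose
    enrichment jobs have completed; a failed analysis still leaves
    the raw markdown accessible."""
    return run is RunStatus.SUCCEEDED and analysis is AnalysisStatus.COMPLETE
```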
## Inspecting a run
1. Open the Runs tab.
2. Click any row to slide open the Run details sheet.
3. Review:
   - Rendered markdown – The AI response as delivered to end users.
   - Sources – Normalized citations, including title, domain, and excerpt (modeled in the sketch below).
   - Timeline – When the run was queued, started, completed, and synced.
   - Errors – Any message returned while processing the run.
Use the “Copy prompt” action to recreate the run’s prompt in a new tab for testing.
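
The normalized citation shape is small enough to mirror in your own tooling. A minimal sketch, assuming a run's sources deserialize into the three fields listed above (the key names are assumptions, not a documented schema):

```python
from dataclasses import dataclass

@dataclass
class Source:
    """One normalized citation from a run's Sources panel."""
    title: str
    domain: str
    excerpt: str

def unique_domains(sources: list[Source]) -> set[str]:
    """Collapse a run's citations to the set of domains cited, e.g.
    to spot which sites an AI answer leaned on."""
    return {s.domain for s in sources}
```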
## Common failure reasons
- Platform throttling – The AI provider rejected the request. Retry after the cooling-off period.
- Expired credentials – Update any third-party credentials you use to fetch AI answers on behalf of Chatobserver.
- Credit hold – The workspace exhausted its monthly credit allotment. Visit Plan & usage to add credits.
- Prompt removed – If a prompt is deleted while a run is in progress, the run will fail.
## Retrying runs
- Click Retry run inside the details sheet. This creates a new run with the same prompt configuration.
- If the issue persists, confirm platform credentials and credit balance before retrying again.
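
The failure reasons above differ in whether a retry is worthwhile: throttling and credit holds can clear on their own, while expired credentials or a deleted prompt will fail again until someone intervenes. A sketch of one way to encode such a retry policy (the reason identifiers, attempt limit, and backoff values are illustrative, not part of Chatobserver's API):

```python
# Illustrative failure-reason identifiers; Chatobserver's actual
# error payloads may use different values.
RETRYABLE = {"platform_throttling", "credit_hold"}
NOT_RETRYABLE = {"expired_credentials", "prompt_removed"}

def should_retry(reason: str, attempt: int, max_attempts: int = 3) -> bool:
    """Retry only transient failures, and only a bounded number of
    times so reruns don't silently inflate credit usage."""
    return reason in RETRYABLE and attempt < max_attempts

def backoff_seconds(attempt: int) -> float:
    """Exponential backoff so retries respect the provider's
    cooling-off period instead of hammering it."""
    return min(60.0 * (2 ** attempt), 900.0)
```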
## Exporting data
- Use the table export action to download the current page of runs as CSV. The export includes prompt label, run status, start/completion timestamps, and trigger.
- For programmatic access, the External API exposes a `GET /prompt-runs` endpoint with the same fields.
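
As a rough sketch of the programmatic route, the snippet below pulls runs from `GET /prompt-runs` and writes them to CSV. The base URL, bearer-token auth, and JSON key names are assumptions; check the External API reference for the real values.

```python
import csv

import requests

BASE_URL = "https://api.chatobserver.example"  # assumption: substitute the real API host
API_TOKEN = "YOUR_API_TOKEN"                   # assumption: the auth scheme may differ

def export_runs(path: str = "runs.csv") -> None:
    resp = requests.get(
        f"{BASE_URL}/prompt-runs",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    runs = resp.json()  # assumption: a JSON array of run objects

    # The same fields the table export produces.
    fields = ["prompt_label", "status", "started_at", "completed_at", "trigger"]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(runs)

if __name__ == "__main__":
    export_runs()
```

The sketch assumes the endpoint returns all runs in one response; consult the API reference for pagination.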
## Best practices
- Keep an eye on the analysis status pill to confirm downstream jobs finish.
- Leverage groups to triage failures by campaign: filter the Runs tab via the Prompts tab before drilling in.
- Document your retry policy so teammates know when it’s safe to rerun without inflating credit usage unexpectedly.