
Overview

Run History provides a complete record of all test executions. Track test performance over time, identify patterns in failures, and access detailed results for debugging. Use run history to understand test reliability and maintain quality.

Accessing Run History

Test-Level History

View runs for a specific test:
  1. Open a test case
  2. Click the Runs tab
  3. See all executions for this test

Test Plan History

View runs for a test plan:
  1. Open a test plan
  2. Click Run History
  3. See all plan executions

Global Run History

View all runs across your organization:
  1. Navigate to Runs in the main menu
  2. Filter by date, status, or test
  3. Browse all executions

Run Information

Run Status

Each run has a status:
  • Passed: All steps completed successfully
  • Failed: One or more steps failed
  • Running: Currently executing
  • Queued: Waiting to execute
  • Cancelled: Manually stopped
  • Timed Out: Exceeded maximum duration

Run Details

Click on any run to see comprehensive execution details with tabs for:
  • Summary: Pass/fail status, duration, timestamp, and key metrics
  • Steps: Individual step results with status indicators
  • Video: Full execution recording with playback controls
  • Screenshots: Step-by-step images captured during execution
  • Trace: Detailed execution timeline with actions, waits, and network requests
  • Logs: Test execution logs with both AI-generated summaries and raw technical output
  • AI Execution: Detailed logs from AI-driven steps showing agent decisions, AI assertions, and auto-healing actions
  • Inbox: Emails and TOTP codes captured during the test run

Run Metadata

Each run includes:
  • Run ID: Unique identifier
  • Started At: Execution start time
  • Duration: Total execution time
  • Environment: Environment used
  • Trigger: How the run was started
  • Browser: Browser type used

Filtering Runs

By Status

Filter to specific outcomes:
  • Passed: View successful runs
  • Failed: View failures for debugging
  • All: See complete history

By Date Range

View runs within a timeframe:
  • Last 24 hours
  • Last 7 days
  • Last 30 days
  • Custom range

By Trigger Type

Filter by how runs were started:
  • Manual: Run button clicked
  • Editor: Run from the test editor
  • Scheduled: Test plan scheduled run
  • API: Triggered via API
  • CI/CD: Triggered from a pipeline
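For API and CI/CD triggers, a run request is typically a single authenticated HTTP call. The sketch below builds such a request; the base URL, endpoint path, header names, and payload fields are illustrative assumptions, not the product's documented API.

```python
import json

# Hypothetical base URL -- substitute your actual instance and API version.
API_BASE = "https://app.example.com/api/v1"

def build_run_request(test_id: str, api_key: str, environment: str = "staging"):
    """Assemble the URL, headers, and JSON body for an API-triggered run.

    All names here (path, "trigger" field, bearer auth) are assumptions
    for illustration; check your API reference for the real schema.
    """
    url = f"{API_BASE}/tests/{test_id}/runs"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"environment": environment, "trigger": "api"})
    return url, headers, body

url, headers, body = build_run_request("tc_123", "MY_API_KEY")
```

In a CI/CD pipeline the same call is usually made with curl or the HTTP client of your pipeline runner, using a secret-stored API key.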

By Environment

View runs in specific environments:
  • Development
  • Staging
  • Production
  • Custom environments

Analyzing Runs

Comparing Runs

Compare multiple runs:
  1. Select runs to compare
  2. View side-by-side results
  3. Identify differences in:
    • Step outcomes
    • Duration changes
    • Error patterns

Identifying Patterns

Look for trends:
  • Consistent failures: Same step failing repeatedly
  • Intermittent failures: Flaky test indicators
  • Duration changes: Performance regressions
  • Environment-specific: Failures in certain environments
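The distinction between consistent and intermittent failures can be checked mechanically from per-step results. A minimal sketch, assuming each run is a dict of step name to pass/fail:

```python
def classify_step_failures(runs):
    """Label each step as stable, consistently failing, or intermittent.

    runs: list of dicts mapping step name -> bool (True = passed).
    The data shape is illustrative, not a documented export format.
    """
    labels = {}
    for step in set().union(*runs):
        results = [run[step] for run in runs if step in run]
        fail_rate = results.count(False) / len(results)
        if fail_rate == 0:
            labels[step] = "stable"
        elif fail_rate == 1:
            labels[step] = "consistent failure"   # same step failing repeatedly
        else:
            labels[step] = "intermittent (flaky?)"  # mixed results across runs
    return labels
```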

Reliability Metrics

Track test health:
  • Pass rate: Percentage of passing runs
  • Avg duration: Average execution time
  • Flakiness: Rate of inconsistent results
  • Last pass: Most recent successful run
  • Last fail: Most recent failed run

Runs Page Analytics

The Runs tab in test case details provides comprehensive analytics and visual insights into test execution history. It includes two sub-tabs: History and Insights.

History Tab

The History tab shows a chronological list of all test executions:
  • Run Status: Visual indicators for passed, failed, running, queued, cancelled, or timed out runs
  • Execution Details: Duration, timestamp, environment, and trigger type for each run
  • Quick Actions: Re-run, view details, or compare runs
  • Filtering: Filter by status, date range, environment, or trigger type
Click any run to access detailed execution information including video, screenshots, trace, logs, AI execution details, and inbox items.

Insights Tab

The Insights tab provides AI-powered analytics and actionable intelligence about your test’s reliability and failure patterns.

Key Metrics

Flakiness Score
  • Percentage indicating test reliability (0% = stable, 100% = highly flaky)
  • Color-coded severity indicator (green = low, yellow = medium, red = high)
  • Helps prioritize which tests need attention and stabilization
  • Click the info icon to learn more about how flakiness is calculated
Pass Rate
  • Success percentage across recent runs
  • Shows exact count: “X pass / Y fail”
  • Visual bar chart with green (pass) and red (fail) segments
  • Helps quickly assess overall test health
Average Duration
  • Typical execution time based on stable (passing) runs
  • Excludes failed runs to provide accurate performance baseline
  • Useful for identifying performance regressions
  • Click the info icon for calculation details
Run Timeline
  • Visual dot chart showing pass/fail patterns from oldest to newest
  • Green dots = passed runs, Red dots = failed runs
  • Quick visual assessment of test stability trends
  • Hover over dots to see individual run details
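The flakiness score and run timeline above can be sketched in a few lines. The flip-based formula here is one plausible definition of flakiness (share of consecutive runs whose outcome changed), not the product's documented calculation:

```python
def flakiness_score(statuses):
    """Share of consecutive runs whose outcome flipped (0 = stable, 100 = flaky).

    statuses: oldest-to-newest list like ["passed", "failed", ...].
    This flip-based definition is an assumption for illustration.
    """
    if len(statuses) < 2:
        return 0.0
    flips = sum(1 for a, b in zip(statuses, statuses[1:]) if a != b)
    return round(100 * flips / (len(statuses) - 1), 1)

def run_timeline(statuses):
    """Text rendering of the dot chart: 'o' = passed, 'x' = failed."""
    return "".join("o" if s == "passed" else "x" for s in statuses)

history = ["passed", "passed", "failed", "passed", "failed", "passed"]
print(run_timeline(history))      # ooxoxo
print(flakiness_score(history))   # 80.0
```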

Failure Analysis

Top Failing Steps
  • Lists the most frequently failing steps in descending order
  • Shows step type, target/URL, and exact error message
  • Displays occurrence count and percentage (e.g., “1x (10%)”)
  • Helps identify which specific steps need fixing or stabilization
  • Expandable cards show full error details
Failure Categories
  • Breakdown of failure types across all failed runs
  • Common categories: Network Error, Timeout, Assertion Failed, Element Not Found, etc.
  • Visual distribution bars show relative frequency
  • Click categories to filter Top Failing Steps by that failure type
  • Helps identify systemic issues vs. step-specific problems
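The product's categorizer is AI-driven, but the idea can be sketched with simple keyword buckets. The keyword lists below are illustrative assumptions:

```python
from collections import Counter

# Illustrative keyword buckets -- not the product's actual classification rules.
CATEGORIES = {
    "Network Error": ("connection reset", "dns", "net::"),
    "Timeout": ("timeout", "timed out"),
    "Assertion Failed": ("assertion", "expected"),
    "Element Not Found": ("not found", "no element", "selector"),
}

def categorize(error_message: str) -> str:
    """Map an error message to the first matching category."""
    msg = error_message.lower()
    for category, keywords in CATEGORIES.items():
        if any(k in msg for k in keywords):
            return category
    return "Other"

def failure_breakdown(errors):
    """Count failures per category, mirroring the distribution bars."""
    return Counter(categorize(e) for e in errors)
```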

AI-Powered Insights

The AI-Powered Insights section analyzes all run data to provide actionable intelligence:
Summary
  • Natural language explanation of why the test is failing
  • Synthesizes patterns across multiple runs
  • Provides context about failure frequency and impact
Root Causes
  • AI-identified underlying reasons for test failures
  • Goes beyond surface-level errors to find systemic issues
  • Examples: “Network connection reset error preventing access to the test URL”
  • Helps focus debugging efforts on the actual problem
Recommendations
  • Actionable suggestions to improve test stability
  • Specific steps you can take to fix failures or reduce flakiness
  • Examples: “Verify network stability and ensure the test environment has reliable internet access”
  • Prioritized based on impact and feasibility
Identified Patterns
  • Common failure patterns detected across runs
  • Shows how many times each pattern occurred (e.g., “Network Connectivity Issue (2x)”)
  • Helps identify whether failures are consistent or varied
Regenerate Button
  • Refresh AI analysis with the latest run data
  • Useful after fixing issues or adding new runs
  • Shows generation timestamp so you know when insights were last updated

Using Insights Effectively

  1. Assess Test Health: Check the Flakiness Score and Pass Rate to understand overall reliability
  2. Identify Problem Areas: Review Top Failing Steps to see which steps fail most frequently
  3. Understand Failure Types: Use Failure Categories to see if issues are network-related, timing-based, or assertion failures
  4. Get AI Guidance: Read Root Causes and Recommendations for expert analysis and next steps
  5. Track Improvements: Regenerate insights after fixes to validate improvements
  6. Prioritize Work: Use flakiness scores across tests to decide which need attention first
The Insights tab combines quantitative metrics, failure analysis, and AI intelligence to help you quickly understand why tests fail and how to fix them.

Run Actions

Re-run a Test

Repeat a previous execution:
  1. Open the run details
  2. Click Re-run
  3. Test executes with same configuration

Re-run Failed Steps

Re-run only failed steps:
  1. Open a failed run
  2. Click Re-run Failed
  3. Only failed steps execute

Cancel Running Tests

Stop an in-progress run:
  1. Find the running test
  2. Click Cancel
  3. Run stops immediately

Delete Runs

Remove old run data:
  1. Select runs to delete
  2. Click Delete
  3. Confirm removal
Note: Deleted runs cannot be recovered.

Run Artifacts

Video Recordings

Access execution videos:
  1. Open run details (from the Runs tab, click on a specific run)
  2. Click the Video tab
  3. Play or download recording
Video tips:
  • Scrub to failure point
  • Watch at 2x speed for quick review
  • Download for sharing

Screenshots

View step screenshots:
  1. Open run details (from the Runs tab, click on a specific run)
  2. Click the Screenshots tab
  3. Navigate through images
Screenshot uses:
  • Verify correct page loaded
  • Check element visibility
  • Compare expected vs actual

Trace Files

Inspect detailed traces:
  1. Open run details (from the Runs tab, click on a specific run)
  2. Click the Trace tab
  3. Open Trace Viewer
Trace analysis:
  • Review network requests
  • Inspect DOM at each step
  • Check console messages
  • Analyze timing

Logs

Access execution logs:
  1. Open run details (from the Runs tab, click on a specific run)
  2. Click the Logs tab to view test execution logs
  3. Switch between AI-generated summaries and raw technical logs
  4. Search for errors or patterns
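When a run's raw log is long, a quick scripted scan can surface the error lines first. A minimal sketch over an exported log, using a generic error-keyword pattern (an assumption, not the product's log format):

```python
import re

# Generic error keywords -- adjust to the patterns your logs actually use.
ERROR_PATTERN = re.compile(r"\b(error|failed|exception|timeout)\b", re.IGNORECASE)

def find_errors(raw_log: str):
    """Return (line_number, line) pairs that look like errors in a raw log."""
    return [(i, line) for i, line in enumerate(raw_log.splitlines(), 1)
            if ERROR_PATTERN.search(line)]
```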

AI Execution Logs

View detailed AI execution information:
  1. Open run details (from the Runs tab, click on a specific run)
  2. Click the AI Execution tab
  3. Review agent decisions, AI assertions, and auto-healing actions

Inbox

Access emails and TOTP codes:
  1. Open run details (from the Runs tab, click on a specific run)
  2. Click the Inbox tab
  3. View emails, extract verification codes, and access links
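Extracting a verification code from a captured email is typically a small regex job. A sketch, assuming the code is a standalone run of digits (six by default):

```python
import re

def extract_code(email_body: str, length: int = 6):
    """Pull the first standalone numeric code of the given length from an
    email body, as you might when asserting on an OTP captured in the Inbox.
    """
    match = re.search(rf"\b\d{{{length}}}\b", email_body)
    return match.group(0) if match else None

extract_code("Your verification code is 482913. It expires in 10 minutes.")
# -> "482913"
```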

Notifications

Failure Alerts

Get notified on failures:
  1. Go to Settings > Notifications
  2. Enable failure alerts
  3. Choose delivery method (email, Slack)
  4. Receive alerts on test failures
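If you route failure alerts to Slack yourself, the message is a small JSON payload posted to an incoming webhook. The payload shape below follows Slack's Block Kit format; the field values and run URL are placeholders from an assumed setup:

```python
import json

def failure_alert_payload(test_name: str, run_url: str, error: str) -> str:
    """Build a Slack incoming-webhook message body for a failed run.

    POST this JSON to your webhook URL; the run_url and error text here
    are illustrative, not values the product emits.
    """
    return json.dumps({
        "text": f"Test failed: {test_name}",  # fallback for notifications
        "blocks": [
            {
                "type": "section",
                "text": {
                    "type": "mrkdwn",
                    "text": f"*{test_name}* failed\n```{error}```\n<{run_url}|View run>",
                },
            },
        ],
    })
```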

Summary Reports

Receive periodic summaries:
  • Daily test results
  • Weekly reliability reports
  • Monthly trend analysis

Best Practices

Regular Review

  • Check run history daily for failures
  • Review flaky tests weekly
  • Analyze trends monthly

Debugging Workflow

  1. Identify failed run
  2. Check error message in logs
  3. Watch video at failure point
  4. Review screenshot before failure
  5. Inspect trace for details
  6. Fix and re-run

Managing History

  • Delete old runs to manage storage
  • Export important runs before deletion
  • Keep runs for compliance requirements

Using Filters Effectively

  • Start broad, then narrow
  • Save common filter combinations
  • Use date ranges for trend analysis

Troubleshooting

Runs Not Appearing

  1. Refresh the run history page
  2. Check filter settings aren’t hiding runs
  3. Verify test actually executed
  4. Check for system issues

Missing Artifacts

  1. Verify run completed (not cancelled)
  2. Check if artifacts were enabled
  3. Wait for upload/processing
  4. Re-run if artifacts missing

Old Runs Unavailable

  1. Check retention policy
  2. Verify runs weren’t deleted
  3. Contact admin for archived data

Performance Issues

If run history loads slowly:
  1. Narrow date range filter
  2. Filter by specific tests
  3. Clear browser cache
  4. Reduce items per page