The Dashboard provides a centralized view of your test suite’s health, performance, and AI usage. Use it to monitor trends, identify issues, and make data-driven decisions about your testing strategy.
Dashboard overview

Key Metrics

The top row displays five essential metrics that summarize the current state of your test suite:
  • Total Tests: Total number of test cases, with a passed/failed breakdown
  • Pass Rate: Percentage of successful runs, with a visual progress indicator
  • Avg Duration: Average execution time per test run
  • Flaky Tests: Count of tests with inconsistent results that need stabilization
  • Tests Needing Attention: Combined count of failed tests and high-flakiness tests
Use these metrics to spot drift. A declining pass rate or rising average duration usually indicates issues to investigate.
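If you pull raw results through an export or API instead of reading the UI, the same headline numbers are simple aggregations. The sketch below shows one way to derive them from a list of run results; the `RunResult` shape and its field names are assumptions for illustration, not the product’s actual schema.

```ts
// Hypothetical run-result shape; field names are illustrative, not the real schema.
interface RunResult {
  testId: string;
  status: "passed" | "failed";
  durationMs: number;
  isFlaky: boolean; // e.g. flagged once a flakiness score crosses a threshold
}

function summarizeDashboardMetrics(runs: RunResult[]) {
  const totalTests = new Set(runs.map((r) => r.testId)).size;
  const passedRuns = runs.filter((r) => r.status === "passed").length;
  const failedTests = new Set(
    runs.filter((r) => r.status === "failed").map((r) => r.testId)
  ).size;
  const flakyTests = new Set(
    runs.filter((r) => r.isFlaky).map((r) => r.testId)
  ).size;

  return {
    totalTests,
    passRate: runs.length ? (passedRuns / runs.length) * 100 : 0,
    avgDurationMs: runs.length
      ? runs.reduce((sum, r) => sum + r.durationMs, 0) / runs.length
      : 0,
    flakyTests,
    // Mirrors the dashboard's combined count: failed tests plus flaky tests.
    testsNeedingAttention: failedTests + flakyTests,
  };
}
```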

Test Status Trend

The Test Status Trend chart shows how your test results change over time as a stacked area chart.
  • Green area = Passed tests
  • Red area = Failed tests
  • Hover for exact counts and dates
Sudden spikes in failures often align with a deployment, data change, or environment issue.

Pass Rate Trend

The Pass Rate Trend line chart tracks your overall pass rate (0-100%) over time. This helps you:
  • Identify gradual degradation in test stability
  • Correlate pass rate changes with deployments or code changes
  • Set benchmarks for acceptable pass rates
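Conceptually, the trend line is the pass rate computed per time bucket. Here is a minimal sketch that groups run results by calendar day and computes a daily pass rate; the input shape is an assumption for illustration, not the actual data model.

```ts
// Hypothetical per-run record; only the fields needed for the trend are shown.
interface RunPoint {
  finishedAt: Date;
  status: "passed" | "failed";
}

// Group runs by calendar day and compute a pass rate (0-100) per day.
function passRateByDay(runs: RunPoint[]): Map<string, number> {
  const byDay = new Map<string, { passed: number; total: number }>();
  for (const run of runs) {
    const day = run.finishedAt.toISOString().slice(0, 10); // YYYY-MM-DD
    const bucket = byDay.get(day) ?? { passed: 0, total: 0 };
    bucket.total += 1;
    if (run.status === "passed") bucket.passed += 1;
    byDay.set(day, bucket);
  }
  const rates = new Map<string, number>();
  for (const [day, { passed, total }] of byDay) {
    rates.set(day, (passed / total) * 100);
  }
  return rates;
}
```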

Test Case Status Distribution

The Test Case Status pie chart breaks down all test cases by their current status:
  • Draft (gray): Tests still being created
  • Pending (blue): Tests waiting to run
  • Passed (green): Tests that passed on their last run
  • Failed (red): Tests that failed on their last run
  • Running (yellow): Tests currently executing

Test Plan Health

The Test Plan Health section shows the status of your scheduled test plans:
  • Plan name and current status
  • Pass/Fail/Snoozed counts for each plan
  • Schedule (Daily, Weekly, Monthly, Hourly, or Manual)
  • Last run and next occurrence times
This helps you monitor automated test runs and ensure your scheduled tests are running as expected.
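The “next occurrence” time is essentially the last run advanced by the schedule interval. A minimal sketch of that idea follows, using the schedule values listed above; the actual scheduler may instead run at fixed times of day, so treat this as illustrative only.

```ts
type Schedule = "Hourly" | "Daily" | "Weekly" | "Monthly" | "Manual";

// Naive next-occurrence calculation: last run plus one interval.
// Manual plans have no scheduled next run.
function nextOccurrence(lastRun: Date, schedule: Schedule): Date | null {
  const next = new Date(lastRun);
  switch (schedule) {
    case "Hourly":
      next.setHours(next.getHours() + 1);
      return next;
    case "Daily":
      next.setDate(next.getDate() + 1);
      return next;
    case "Weekly":
      next.setDate(next.getDate() + 7);
      return next;
    case "Monthly":
      next.setMonth(next.getMonth() + 1);
      return next;
    case "Manual":
      return null;
  }
}
```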

AI Steps Statistics

The AI Steps Statistics panel provides insights into AI-powered test steps:
  • Total Steps: Number of AI steps executed
  • Success Rate: Percentage of AI steps that completed successfully
  • Avg Time: Average execution time for AI steps
  • Credits Used: Total AI credits consumed
A detailed breakdown shows performance by step type (AI Action, AI Extract, etc.), helping you identify which AI features are most effective.
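The per-type breakdown is a grouped aggregation of the same metrics. The sketch below shows the idea with an assumed `AiStepResult` shape; field names and step-type strings are illustrative.

```ts
// Hypothetical AI step record; fields are illustrative.
interface AiStepResult {
  type: string;        // e.g. "AI Action", "AI Extract"
  success: boolean;
  durationMs: number;
  creditsUsed: number;
}

// Aggregate total steps, success rate, average time, and credits per step type.
function statsByStepType(steps: AiStepResult[]) {
  const grouped = new Map<string, AiStepResult[]>();
  for (const step of steps) {
    const list = grouped.get(step.type) ?? [];
    list.push(step);
    grouped.set(step.type, list);
  }

  const result = new Map<string, { totalSteps: number; successRate: number; avgTimeMs: number; creditsUsed: number }>();
  for (const [type, list] of grouped) {
    const successes = list.filter((s) => s.success).length;
    result.set(type, {
      totalSteps: list.length,
      successRate: (successes / list.length) * 100,
      avgTimeMs: list.reduce((sum, s) => sum + s.durationMs, 0) / list.length,
      creditsUsed: list.reduce((sum, s) => sum + s.creditsUsed, 0),
    });
  }
  return result;
}
```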

Failure Categories

The Failure Categories chart categorizes all test failures to help you identify patterns:
  • Timeout: Test exceeded the maximum wait time
  • Element Not Found: Selector couldn’t locate the target element
  • Assertion Failed: Expected value didn’t match the actual value
  • Network Error: API or network request failed
  • JavaScript Error: Browser console error during the test
  • AI Failure: AI-powered step couldn’t complete
  • Unknown: Uncategorized failure
Focusing on the most common failure category yields the highest impact fixes.
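If you want to reproduce a similar breakdown in your own reporting from raw error messages, a simple keyword-based classifier gets you most of the way. This is a hedged sketch, not the dashboard’s actual categorization logic, which may use richer signals such as error types, step metadata, or AI analysis.

```ts
type FailureCategory =
  | "Timeout"
  | "Element Not Found"
  | "Assertion Failed"
  | "Network Error"
  | "JavaScript Error"
  | "AI Failure"
  | "Unknown";

// Naive keyword matching over the error message text.
function categorizeFailure(errorMessage: string): FailureCategory {
  const msg = errorMessage.toLowerCase();
  if (msg.includes("timeout") || msg.includes("timed out")) return "Timeout";
  if (msg.includes("element") && msg.includes("not found")) return "Element Not Found";
  if (msg.includes("assert") || msg.includes("expected")) return "Assertion Failed";
  if (msg.includes("network") || msg.includes("fetch failed")) return "Network Error";
  if (msg.includes("referenceerror") || msg.includes("typeerror")) return "JavaScript Error";
  if (msg.includes("ai step")) return "AI Failure";
  return "Unknown";
}
```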

Flaky Tests

The Flaky Tests section lists tests with inconsistent results. Each entry shows:
  • Test name with link to details
  • Flakiness label: Critical, High, Medium, or Low
  • Flakiness score: Numeric value (0-100)
  • Flake count: Number of times the test flipped between pass/fail
Flaky tests erode confidence in your test suite. Prioritize fixing tests with “Critical” or “High” flakiness labels.
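The flake count and flakiness score are derived from a test’s recent run history. The sketch below shows one plausible scoring approach, counting pass/fail flips and normalizing to 0-100; the thresholds and formula are assumptions and may differ from what the dashboard actually computes.

```ts
// A test's recent results, oldest first (true = passed, false = failed).
type RunHistory = boolean[];

// Count pass/fail flips between consecutive runs and turn them into a
// 0-100 score: the more often the outcome changes, the flakier the test.
function flakinessScore(history: RunHistory): { flips: number; score: number } {
  let flips = 0;
  for (let i = 1; i < history.length; i++) {
    if (history[i] !== history[i - 1]) flips++;
  }
  const maxFlips = Math.max(history.length - 1, 1);
  return { flips, score: Math.round((flips / maxFlips) * 100) };
}

// Map a score to the labels shown in the dashboard; thresholds are illustrative.
function flakinessLabel(score: number): "Critical" | "High" | "Medium" | "Low" {
  if (score >= 75) return "Critical";
  if (score >= 50) return "High";
  if (score >= 25) return "Medium";
  return "Low";
}
```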

Common causes of flakiness

  • Race conditions and timing issues
  • Dynamic content that changes between runs
  • Network latency variations
  • Shared test data conflicts
  • Environment differences

Slowest Tests

The Slowest Tests list identifies performance bottlenecks:
  • Test name and average duration
  • Number of runs (for statistical significance)
  • Link to test details
Optimizing slow tests improves overall suite execution time and reduces CI/CD costs.
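Conceptually, this list is an average-duration aggregation per test, sorted slowest first, with the run count indicating how much data backs each average. A minimal sketch, assuming a simple per-run record:

```ts
// Hypothetical per-run duration record.
interface TimedRun {
  testId: string;
  testName: string;
  durationMs: number;
}

// Average duration per test, slowest first.
function slowestTests(runs: TimedRun[], limit = 10) {
  const byTest = new Map<string, { name: string; total: number; count: number }>();
  for (const run of runs) {
    const entry = byTest.get(run.testId) ?? { name: run.testName, total: 0, count: 0 };
    entry.total += run.durationMs;
    entry.count += 1;
    byTest.set(run.testId, entry);
  }
  return [...byTest.entries()]
    .map(([testId, { name, total, count }]) => ({
      testId,
      name,
      avgDurationMs: total / count,
      runCount: count,
    }))
    .sort((a, b) => b.avgDurationMs - a.avgDurationMs)
    .slice(0, limit);
}
```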

Recent Failures

The Recent Failures section lists your most recent test failures. Each entry shows:
  • Failed test name and step
  • Error message and logs
  • Link to full run details with screenshots and video
When analyzing a failure, first confirm behavior with the screenshot or video, then review the trace and logs for selectors, timing, or data issues.

Filters and Date Range

The dashboard supports several filters to focus your analysis:
  • Date range: Today, Last 7 days, Last 30 days, or a custom range
  • Environment: Staging, production, or custom environments
If the charts show no data, try widening the date range or changing the environment filter.

Best Practices

Check the dashboard daily to catch issues early. Focus on:
  • Any new failures since yesterday
  • Pass rate changes
  • New flaky tests
Address issues in this order:
  1. Sudden failure spikes (likely environment or deployment issue)
  2. Critical flaky tests (eroding confidence)
  3. Slowest tests (blocking CI/CD)
  4. Long-term trends (gradual degradation)
For persistent failures:
  • Check failure categories for patterns
  • Review AI-powered insights in test details
  • Compare against recent deployments
  • Verify test data and environment variables

Troubleshooting

  • No data displayed: Run a test from the editor or a test plan to generate results
  • Pass rate dropped suddenly: Check recent deployments, data changes, or authentication issues
  • Tests are slower than usual: Inspect network calls and page transitions in the trace
  • High flakiness scores: Normalize comparisons (trim, lowercase) and review waits and timeouts
  • AI steps failing: Check your AI credits balance and review step configurations