The Dashboard provides a centralized view of your test suite’s health, performance, and AI usage. Use it to monitor trends, identify issues, and make data-driven decisions about your testing strategy.

Dashboard overview
The dashboard is organized into two main tabs designed for different workflows:

Overview Tab

Use the Overview tab for quick health checks and high-level monitoring. It displays essential metrics, trend analysis, and AI usage statistics at a glance.

Key Metrics

The top of the Overview tab shows four essential metrics that summarize your test suite’s current state:
  • Total Tests - Total number of test cases, with passed/failed breakdown
  • Pass Rate - Percentage of successful runs, with a visual progress bar
  • Avg Duration - Average execution time per test run
  • Flaky Tests - Count of tests with inconsistent results that need stabilization
Use these metrics to spot drift quickly. A declining pass rate or rising average duration usually indicates issues to investigate.
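The four key metrics can be derived from raw run results. The sketch below is illustrative only: the `runs` record structure and field names are assumptions, not the product's actual data model.

```python
# Hypothetical run records; "flaky" here means a test with both outcomes.
runs = [
    {"test": "login", "status": "passed", "duration_s": 12.0},
    {"test": "login", "status": "failed", "duration_s": 15.0},
    {"test": "checkout", "status": "passed", "duration_s": 30.0},
    {"test": "checkout", "status": "passed", "duration_s": 28.0},
]

total_tests = len({r["test"] for r in runs})
passed = sum(r["status"] == "passed" for r in runs)
pass_rate = 100 * passed / len(runs)
avg_duration = sum(r["duration_s"] for r in runs) / len(runs)

# Group outcomes per test; more than one distinct outcome = flaky.
by_test = {}
for r in runs:
    by_test.setdefault(r["test"], set()).add(r["status"])
flaky_tests = sum(len(statuses) > 1 for statuses in by_test.values())

print(total_tests, pass_rate, avg_duration, flaky_tests)  # 2 75.0 21.25 1
```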

Test Status Trend

The Test Status Trend chart shows how your test results change over time as a stacked area chart.
  • Green area = Passed tests
  • Red area = Failed tests
  • Gradient fills for visual clarity
  • Hover for exact counts and dates
Sudden spikes in failures often align with a deployment, data change, or environment issue.

Health Score

The Health Score radar chart provides a comprehensive view of your test suite’s health across five dimensions:
  • Pass Rate - Percentage of tests that passed in the selected period
  • Stability - How consistently tests produce the same result (inverse of flakiness)
  • Reliability - Pass rate trend compared to the previous period (100 = same or improved)
  • Healing - Percentage of broken locators successfully healed by AI
  • AI Success - Success rate of AI-powered test steps
The overall health score is calculated as the average of these five metrics. Hover over any metric to see its detailed description. Focus on improving the metric with the lowest score to maximize overall health.
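Since the overall score is the plain average of the five dimensions, it is easy to sketch. The dimension names and sample values below are illustrative, assuming each dimension is reported on a 0-100 scale:

```python
# Assumed 0-100 scores for the five health dimensions described above.
scores = {
    "pass_rate": 92,
    "stability": 85,
    "reliability": 100,
    "healing": 70,
    "ai_success": 88,
}

overall = sum(scores.values()) / len(scores)   # plain average
weakest = min(scores, key=scores.get)          # best target for improvement

print(overall, weakest)  # 87.0 healing
```

Raising the weakest dimension moves the average fastest, which is why the guidance above says to focus on the lowest-scoring metric.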

AI Statistics

The AI Statistics panel provides comprehensive insights into AI-powered test steps:
Success Rate Ring Gauge
  • Large, visual percentage display
  • Gradient glow effect for modern aesthetics
Performance Metrics
  • Avg Time - Average execution time for AI steps
  • Credits Used - Total AI credits consumed
  • Top Feature - Most used AI feature
Credit Distribution
  • Stacked bar chart showing credit usage by feature
  • Interactive legend with hover effects
  • Color-coded by AI feature type
Top Usage List
  • Ranked list of AI features by credit consumption
  • Visual indicators with color-coded borders
  • Hover interactions for detailed insights
Use this data to optimize AI feature usage and manage credit costs effectively.
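The Credit Distribution and Top Usage views are aggregations of per-step credit usage. A minimal sketch of that aggregation, with made-up feature names and costs:

```python
from collections import Counter

# Hypothetical (feature, credits) events from AI-powered test steps.
events = [
    ("ai_assertion", 2), ("self_healing", 5), ("ai_assertion", 3),
    ("visual_check", 1), ("self_healing", 4),
]

# Sum credits per feature, as the stacked bar chart does.
credits = Counter()
for feature, cost in events:
    credits[feature] += cost

# The "Top Feature" metric is the biggest consumer.
top_feature, top_cost = credits.most_common(1)[0]
print(dict(credits), top_feature)
```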

Test Plans

The Test Plans section displays the health of your scheduled test plans:
  • NAME - Test plan name (clickable link to plan details)
  • PASS / FAIL - Color-coded counts (green/red)
  • PASS RATE - Progress bar and percentage (green/amber/red based on performance)
  • RUNS - Total number of runs
  • ENV - Environment badge (color-coded by type)
  • SCHEDULE - Schedule type (Daily, Weekly, Monthly, Hourly, Manual)
This section helps you monitor automated test runs and ensure your scheduled tests are running as expected.

Health & Issues Tab

Use the Health & Issues tab for detailed failure analysis and troubleshooting. It provides deep insights into test failures, flaky tests, and areas requiring immediate attention.

Failure Breakdown

The Failure Breakdown chart categorizes all test failures to help you identify patterns:
  • AI Failure - An AI-powered step couldn't complete
  • System Error - System-level errors
  • Insufficient Credits - AI credits exhausted
  • Custom Code Error - Errors in custom code execution
  • Assertion Failed - Expected value didn't match the actual value
  • Locator Error - Selector couldn't locate the target element
  • Network Error - API or network request failed
  • Timeout - Test exceeded the maximum wait time
  • JavaScript Error - Browser console error during the test
  • Unknown - Uncategorized failure
Interactive Features:
  • Hover over pie slices to expand and highlight legend items
  • Click legend items to highlight corresponding pie slices
  • Each category shows count and percentage
  • Color-coded for quick identification
Focusing on the most common failure category yields the highest impact fixes.
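Categorization like this is typically rule-based pattern matching on error messages. The sketch below is an assumption about how such rules might look, not the product's actual classifier; the matching keywords are invented for illustration:

```python
# Hypothetical keyword rules mapping raw error messages to categories.
def categorize(error_message: str) -> str:
    msg = error_message.lower()
    if "timeout" in msg or "exceeded" in msg:
        return "Timeout"
    if "locator" in msg or "selector" in msg:
        return "Locator Error"
    if "assert" in msg or "expected" in msg:
        return "Assertion Failed"
    if "network" in msg or "fetch" in msg:
        return "Network Error"
    return "Unknown"

failures = [
    "Selector '#submit' could not be found",
    "Expected 'Welcome' but got 'Error'",
    "Navigation timeout of 30000ms exceeded",
]
counts = {}
for f in failures:
    cat = categorize(f)
    counts[cat] = counts.get(cat, 0) + 1

print(counts)  # {'Locator Error': 1, 'Assertion Failed': 1, 'Timeout': 1}
```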

Pass Rate Trend

The Pass Rate Trend chart tracks your overall pass rate (0-100%) over time. This helps you:
  • Identify gradual degradation in test stability
  • Correlate pass rate changes with deployments or code changes
  • Set benchmarks for acceptable pass rates
The chart features a green gradient fill with hover tooltips showing date and pass rate details.

Tests Needing Attention

A prominent banner shows the total count of tests that need your attention, combining failed tests and high-flakiness tests into a single prioritized list. Click the banner to open a detailed modal with:
  • Filter tabs: All, Failed, Flaky
  • Table showing Test Name, Type, Status, and Last Run
  • Click any test to navigate directly to its details
This helps you focus on the most critical issues first.
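The banner's combined list can be thought of as a simple union of failed tests and high-flakiness tests. A hedged sketch, where the field names and the flakiness cutoff are assumptions:

```python
# Hypothetical per-test summary records.
tests = [
    {"name": "login", "last_status": "failed", "flakiness": 0.1},
    {"name": "search", "last_status": "passed", "flakiness": 0.6},
    {"name": "checkout", "last_status": "passed", "flakiness": 0.05},
]

FLAKY_THRESHOLD = 0.5  # assumed cutoff for "high flakiness"

# A test needs attention if its last run failed OR it is highly flaky.
needs_attention = [
    t["name"] for t in tests
    if t["last_status"] == "failed" or t["flakiness"] >= FLAKY_THRESHOLD
]

print(needs_attention)  # ['login', 'search']
```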

Recent Failures

The Recent Failures list shows the most recent test failures. Each entry includes:
  • Test title (clickable link to test details)
  • Occurrence count badge for repeated failures
  • Failed step type and error message
  • Failure category badge (color-coded)
  • Relative time (e.g., “2h ago”)
Hover over any failure to see:
  • Full error trace
  • Detailed error message
  • Complete logs
Use this to quickly identify and investigate new failures.

Snoozed Tests

The Snoozed Tests section displays tests that have been temporarily suppressed from failure notifications. Each entry shows:
  • Test title (clickable link to test details)
  • Snooze duration (e.g., “Until Feb 15” or “Indefinite”)
  • Time remaining countdown
This helps you track temporarily suppressed tests and unsnooze them when ready to address the underlying issues.

Flaky Tests

The Flaky Tests section lists tests with inconsistent results. Each entry shows:
  • Test title (clickable link to test details)
  • Flakiness badge: Critical (red), High (orange), Medium (yellow), Low (blue)
  • Pass/fail counts and total runs
Flaky tests erode confidence in your test suite. Prioritize fixing tests with “Critical” or “High” flakiness labels. Common causes of flakiness:
  • Race conditions and timing issues
  • Dynamic content that changes between runs
  • Network latency variations
  • Shared test data conflicts
  • Environment differences
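One common way to score flakiness is the fraction of runs on the minority outcome: a test that always passes (or always fails) scores 0, while a 50/50 split scores the maximum 0.5. The function and badge thresholds below are assumptions for illustration, not the product's actual cutoffs:

```python
# Assumed flakiness score: share of runs on the minority outcome (0 to 0.5).
def flakiness_score(passes: int, fails: int) -> float:
    total = passes + fails
    return min(passes, fails) / total if total else 0.0

# Hypothetical thresholds for the four badge levels listed above.
def badge(score: float) -> str:
    if score >= 0.4:
        return "Critical"
    if score >= 0.25:
        return "High"
    if score >= 0.1:
        return "Medium"
    return "Low"

print(badge(flakiness_score(6, 4)))  # Critical (score 0.4)
print(badge(flakiness_score(9, 1)))  # Medium   (score 0.1)
```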

Slowest Tests

The Slowest Tests section identifies performance bottlenecks. Each entry shows:
  • Test title (clickable link to test details)
  • Run count for statistical significance
  • Average duration in yellow text
Optimizing slow tests improves overall suite execution time and reduces CI/CD costs.

Filters and Date Range

The dashboard supports several filters to focus your analysis:
  • Date range - Today, Last 7 days, Last 30 days, Custom range
  • Environment - Filter by staging, production, or custom environments
If the charts show no data, try widening the date range or changing the environment filter.

Best Practices

Check the dashboard daily to catch issues early. Focus on:
  • Any new failures since yesterday
  • Pass rate changes
  • New flaky tests
  • Health score trends
The Overview tab is perfect for:
  • Daily health checks at a glance
  • Executive summaries with key metrics
  • High-level trend monitoring
  • AI usage and cost tracking
  • Test plan health monitoring
The Health & Issues tab is ideal for:
  • Detailed failure analysis by category
  • Investigating specific problem areas
  • Managing snoozed tests
  • Prioritizing flaky test fixes
  • Understanding failure patterns
Address issues in this order:
  1. Tests needing attention (failed + high flakiness)
  2. Sudden failure spikes (likely environment or deployment issue)
  3. Critical flaky tests (eroding confidence)
  4. Slowest tests (blocking CI/CD)
  5. Long-term trends (gradual degradation)
For persistent failures:
  • Check failure breakdown for patterns by category
  • Review AI statistics for success rate trends
  • Check health score metrics for specific areas of concern
  • Compare against recent deployments
  • Verify test data and environment variables
  • Use hover tooltips on charts for detailed insights

Troubleshooting

  • No data displayed - Run a test from the editor or a test plan to generate results
  • Pass rate dropped suddenly - Check recent deployments, data changes, or authentication issues
  • Tests are slower than usual - Inspect network calls and page transitions in the trace
  • High flakiness scores - Normalize comparisons (trim, lowercase) and review waits and timeouts
  • AI steps failing - Check your AI credit balance and review step configurations
  • Charts not updating - Refresh the page or adjust the date range filter
  • Health score is low - Focus on improving the lowest-scoring metric in the radar chart