
## Key Metrics

The top row displays five essential metrics that summarize the current state of your test suite:

| Metric | Description |
|---|---|
| Total Tests | Total number of test cases with passed/failed breakdown |
| Pass Rate | Percentage of successful runs with visual progress indicator |
| Avg Duration | Average execution time per test run |
| Flaky Tests | Count of tests with inconsistent results needing stabilization |
| Tests Needing Attention | Combined count of failed tests and high-flakiness tests |
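As a rough illustration of how these figures relate, here is a minimal sketch that derives the top-row numbers from raw run records. The `TestRun` shape and the `highFlakinessIds` input are assumptions for illustration; the dashboard's actual data model may differ.

```typescript
// Hypothetical run record; the dashboard's real data model may differ.
interface TestRun {
  testId: string;
  status: "passed" | "failed";
  durationMs: number;
}

function summarizeTopRow(runs: TestRun[], highFlakinessIds: Set<string>) {
  const passed = runs.filter((r) => r.status === "passed").length;
  const failed = runs.length - passed;
  const failedIds = new Set(
    runs.filter((r) => r.status === "failed").map((r) => r.testId),
  );
  return {
    totalTests: new Set(runs.map((r) => r.testId)).size,
    passed,
    failed,
    passRate: runs.length ? (passed / runs.length) * 100 : 0,
    avgDurationMs: runs.length
      ? runs.reduce((sum, r) => sum + r.durationMs, 0) / runs.length
      : 0,
    flakyTests: highFlakinessIds.size,
    // "Tests Needing Attention": failed plus high-flakiness, deduplicated.
    testsNeedingAttention: new Set([...failedIds, ...highFlakinessIds]).size,
  };
}
```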
## Test Status Trend

The Test Status Trend chart shows how your test results change over time as a stacked area chart:

- Green area = Passed tests
- Red area = Failed tests
- Hover for exact counts and dates
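Conceptually, the chart buckets runs by day and counts each status per bucket. A minimal sketch, assuming the same hypothetical run shape extended with a timestamp:

```typescript
// Group runs into daily passed/failed counts: the series a stacked
// area chart consumes. The input shape is an assumption.
function dailyStatusSeries(
  runs: { status: "passed" | "failed"; startedAt: Date }[],
) {
  const buckets = new Map<string, { passed: number; failed: number }>();
  for (const run of runs) {
    const day = run.startedAt.toISOString().slice(0, 10); // YYYY-MM-DD
    const bucket = buckets.get(day) ?? { passed: 0, failed: 0 };
    bucket[run.status] += 1;
    buckets.set(day, bucket);
  }
  return [...buckets.entries()]
    .sort(([a], [b]) => a.localeCompare(b))
    .map(([date, counts]) => ({ date, ...counts }));
}
```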
## Pass Rate Trend

The Pass Rate Trend line chart tracks your overall pass rate (0-100%) over time. This helps you:

- Identify gradual degradation in test stability
- Correlate pass rate changes with deployments or code changes
- Set benchmarks for acceptable pass rates (see the sketch below)
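One way to enforce a benchmark is a small gate in your CI pipeline. A sketch, where the 95% threshold is an illustrative choice rather than a product default:

```typescript
// Fail a CI step when the pass rate drops below an agreed benchmark.
function assertPassRateBenchmark(
  passed: number,
  total: number,
  benchmark = 95, // illustrative threshold
): void {
  const passRate = total === 0 ? 100 : (passed / total) * 100;
  if (passRate < benchmark) {
    throw new Error(
      `Pass rate ${passRate.toFixed(1)}% is below the ${benchmark}% benchmark`,
    );
  }
}
```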
## Test Case Status Distribution

Test cases are grouped by their current status:

| Status | Color | Description |
|---|---|---|
| Draft | Gray | Tests still being created |
| Pending | Blue | Tests waiting to run |
| Passed | Green | Tests that passed on last run |
| Failed | Red | Tests that failed on last run |
| Running | Yellow | Tests currently executing |
## Test Plan Health

The Test Plan Health section shows the status of your scheduled test plans:

- Plan name and current status
- Pass/Fail/Snoozed counts for each plan
- Schedule (Daily, Weekly, Monthly, Hourly, or Manual)
- Last run and next occurrence times (see the sketch below)
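For intuition, the next occurrence follows mechanically from the schedule and the last run time. A sketch, where the schedule names mirror the dashboard but the date math is an illustrative simplification:

```typescript
type Schedule = "Hourly" | "Daily" | "Weekly" | "Monthly" | "Manual";

// Derive a plan's next occurrence from its last run. Manual plans
// have no next occurrence.
function nextOccurrence(lastRun: Date, schedule: Schedule): Date | null {
  const next = new Date(lastRun);
  switch (schedule) {
    case "Hourly":
      next.setHours(next.getHours() + 1);
      break;
    case "Daily":
      next.setDate(next.getDate() + 1);
      break;
    case "Weekly":
      next.setDate(next.getDate() + 7);
      break;
    case "Monthly":
      next.setMonth(next.getMonth() + 1);
      break;
    case "Manual":
      return null;
  }
  return next;
}
```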
## AI Steps Statistics

The AI Steps Statistics panel provides insights into AI-powered test steps:

| Metric | Description |
|---|---|
| Total Steps | Number of AI steps executed |
| Success Rate | Percentage of AI steps that completed successfully |
| Avg Time | Average execution time for AI steps |
| Credits Used | Total AI credits consumed |
## Failure Categories

Failures are grouped into categories to help you spot patterns:

| Category | Description |
|---|---|
| Timeout | Test exceeded maximum wait time |
| Element Not Found | Selector couldn’t locate the target element |
| Assertion Failed | Expected value didn’t match actual value |
| Network Error | API or network request failed |
| JavaScript Error | Browser console error during test |
| AI Failure | AI-powered step couldn’t complete |
| Unknown | Uncategorized failure |
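To see how such a grouping might work, here is a sketch that maps an error message to one of these categories. The string patterns are assumptions for illustration; the product's actual categorization logic is not documented here.

```typescript
type FailureCategory =
  | "Timeout"
  | "Element Not Found"
  | "Assertion Failed"
  | "Network Error"
  | "JavaScript Error"
  | "AI Failure"
  | "Unknown";

// Map an error message to a category; first match wins.
function categorizeFailure(message: string): FailureCategory {
  const m = message.toLowerCase();
  if (m.includes("timeout") || m.includes("timed out")) return "Timeout";
  if (m.includes("not found") || m.includes("no element")) return "Element Not Found";
  if (m.includes("expect") || m.includes("assert")) return "Assertion Failed";
  if (m.includes("net::") || m.includes("fetch failed")) return "Network Error";
  if (m.includes("referenceerror") || m.includes("typeerror")) return "JavaScript Error";
  if (m.includes("ai step")) return "AI Failure";
  return "Unknown";
}
```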
## Flaky Tests

The Flaky Tests list highlights tests with inconsistent results. Each entry shows:

- Test name with link to details
- Flakiness label: Critical, High, Medium, or Low
- Flakiness score: Numeric value (0-100)
- Flake count: Number of times the test flipped between pass/fail (see the scoring sketch below)
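The exact scoring formula isn't documented here, but a flip-based score can be sketched as the ratio of observed status flips to the maximum possible flips:

```typescript
// Score flakiness 0-100 from a test's run history, ordered oldest
// to newest. A stable test scores 0; strict alternation scores 100.
function flakinessScore(history: ("passed" | "failed")[]): number {
  if (history.length < 2) return 0;
  let flips = 0;
  for (let i = 1; i < history.length; i++) {
    if (history[i] !== history[i - 1]) flips++;
  }
  return Math.round((flips / (history.length - 1)) * 100);
}

// flakinessScore(["passed", "failed", "passed", "passed"]) === 67
```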
### Common Causes of Flakiness

- Race conditions and timing issues (see the sketch after this list)
- Dynamic content that changes between runs
- Network latency variations
- Shared test data conflicts
- Environment differences
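As an example of a timing fix, here is a hypothetical Playwright-style test (the names and URL are invented). Instead of asserting immediately after a click, a web-first assertion retries until the UI settles:

```typescript
import { test, expect } from "@playwright/test";

test("order total updates after applying a coupon", async ({ page }) => {
  await page.goto("https://example.com/cart"); // placeholder URL
  await page.getByRole("button", { name: "Apply coupon" }).click();
  // toHaveText retries automatically, absorbing render and network
  // delays that would otherwise cause intermittent failures.
  await expect(page.locator("#order-total")).toHaveText("$90.00");
});
```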
## Slowest Tests

The Slowest Tests list identifies performance bottlenecks:

- Test name and average duration
- Number of runs (for statistical significance)
- Link to test details
## Recent Failures

Review the most recent test failures. Each entry shows:

- Failed test name and step
- Error message and logs
- Link to full run details with screenshots and video
## Filters and Date Range

The dashboard supports several filters to focus your analysis:

| Filter | Options |
|---|---|
| Date range | Today, Last 7 days, Last 30 days, Custom range |
| Environment | Filter by staging, production, or custom environments |
## Best Practices
### Daily Review

Check the dashboard daily to catch issues early. Focus on:
- Any new failures since yesterday
- Pass rate changes
- New flaky tests
### Prioritize by Impact

Address issues in this order:

1. Sudden failure spikes (likely environment or deployment issue)
2. Critical flaky tests (eroding confidence)
3. Slowest tests (blocking CI/CD)
4. Long-term trends (gradual degradation)
### Root Cause Analysis

For persistent failures:
- Check failure categories for patterns
- Review AI-powered insights in test details
- Compare against recent deployments
- Verify test data and environment variables
## Troubleshooting

| Issue | Solution |
|---|---|
| No data displayed | Run a test from the editor or a test plan to generate results |
| Pass rate dropped suddenly | Check recent deployments, data changes, or authentication issues |
| Tests are slower than usual | Inspect network calls and page transitions in the trace |
| High flakiness scores | Normalize comparisons (trim, lowercase), review waits and timeouts |
| AI steps failing | Check AI credits balance, review step configurations |
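For the normalization fix above, the idea is to strip whitespace and casing noise from both sides of a text comparison before asserting. A minimal sketch:

```typescript
// Compare two strings after trimming, lowercasing, and collapsing
// internal whitespace, so cosmetic differences don't flip results.
function normalizedEquals(actual: string, expected: string): boolean {
  const normalize = (s: string) =>
    s.trim().toLowerCase().replace(/\s+/g, " ");
  return normalize(actual) === normalize(expected);
}
```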