Overview
When a test case fails, knowing what to do next is crucial for debugging and fixing it. Supatest provides comprehensive debugging tools, including video recordings, step-aligned screenshots, raw logs, AI logs, and trace, that help you quickly identify root causes and apply fixes. This guide outlines a systematic approach to debugging test failures, from initial triage to resolution.
Common failure patterns
Understanding typical failure patterns helps you diagnose issues faster. Most test failures fall into four categories.
Timeout failures
Tests exceed their allotted time when elements take longer than expected to appear, network responses are slow, or page loads are heavy. Locator timeouts occur when the locator is correct but the element takes too long to appear due to UI flakiness, slow rendering, or unexpected behavior (they can also indicate an incorrect locator; see the note under Locator failures below). You’ll see timeout errors in logs:
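The exact wording depends on the underlying browser-automation engine; the following are illustrative, Playwright-style messages (an assumption, not verbatim Supatest output, with a hypothetical selector) showing the kind of timeout errors to scan for:

```
Test timeout of 30000ms exceeded.

locator.click: Timeout 10000ms exceeded.
Call log:
  - waiting for locator('button.submit-btn')
```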
Locator failures
Element locators become invalid when UI changes break them, dynamic content changes attributes, or race conditions cause elements to appear and disappear. Locator timeout errors can also occur when the selector itself is incorrect and never matches the element.
Note: Locator timeout errors appear in both the timeout and locator categories because they can have two causes: (1) the locator is incorrect for an element that is visible in the UI, or (2) the locator is correct but the element takes too long to appear due to UI flakiness or slow loading. Use screenshots and video to distinguish between these cases.
Look for these errors in logs:
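Again, these are illustrative, Playwright-style messages with hypothetical selectors (an assumption about the log format), not verbatim Supatest output:

```
locator.fill: Timeout 10000ms exceeded.
Call log:
  - waiting for locator('#email-input')

Error: strict mode violation: locator('button') resolved to 3 elements

elementHandle.click: Element is not attached to the DOM
```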
Test design issues
Flaws in test logic include incorrect test data or environment variables, missing prerequisites or setup steps, flaky assertions based on timing rather than state, and incorrect flow assumptions. These manifest as tests passing inconsistently, failures occurring at different steps, or test logic that doesn’t match actual application behavior.
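To illustrate the “timing rather than state” pitfall, here is a minimal Playwright-style sketch (an assumption about the underlying engine; the URL and selectors are hypothetical) contrasting a fixed-delay assertion with a state-based one:

```ts
import { test, expect } from '@playwright/test';

test('order confirmation appears', async ({ page }) => {
  await page.goto('https://example.com/checkout'); // hypothetical URL
  await page.locator('button.submit-btn').click();

  // Flaky: assumes the confirmation renders within a fixed 2-second delay.
  // await page.waitForTimeout(2000);
  // expect(await page.locator('.confirmation').isVisible()).toBe(true);

  // Stable: waits for the state the test actually cares about.
  await expect(page.locator('.confirmation')).toBeVisible();
});
```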
Network and state issues
Problems arise from API failures or slow responses, authentication or session expiration, browser state inconsistencies, or third-party service outages. You’ll see network errors in logs:
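As with the other categories, the exact messages vary; these Playwright-style examples (assumed format, hypothetical URLs) show the kind of network errors to look for:

```
page.goto: net::ERR_CONNECTION_REFUSED at https://example.com/
page.goto: net::ERR_NAME_NOT_RESOLVED at https://api.example.com/
page.goto: Timeout 30000ms exceeded.
```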
Step-by-step debugging process
Follow this systematic approach when a test fails.
Step 1: Review the video recording
Start with the video to understand the execution flow. In the test case page, open the Last Run Report tab, then the Video sub-tab to see the recording of the executed run. Scrub to the failure point to identify the exact moment the test failed. Look for:
- Did the page load correctly?
- Were all previous steps executed as expected?
- What was the UI state at the moment of failure?
- Was there any unexpected behavior or error messages?
Step 2: Examine the failure screenshot
The screenshot captured at the failure point provides immediate visual context. In the Last Run Report tab, open the Screenshots sub-tab and locate the failing step’s screenshot. Ask yourself:
- Is the element the test tried to interact with visible?
- Are there any error messages or warnings displayed?
- Is the page fully loaded, or are there loading spinners?
- Does the URL match the expected navigation target?
Step 3: Analyze Raw Logs
Raw Logs contain the technical details needed for precise diagnosis. In the Last Run Report tab, open the Logs sub-tab and switch to the raw logs view. Scan for error keywords like “timeout”, “not found”, “detached”, or “error”. Check the exact error message and stack trace, review timing information to note when the error occurred relative to the step start, and look for patterns to see if similar errors occurred in previous steps. Common error patterns by category (test-level timeouts, locator errors, network errors) match the examples shown under Common failure patterns above.
Tip: If the raw logs are unclear or you need a higher-level explanation, proceed to Step 4 to generate AI Logs.
Step 4: Generate and Review AI Logs (if needed)
AI Logs provide a human-readable summary, but they are generated on demand. Use them when Raw Logs are unclear or you need a higher-level explanation. In the Last Run Report tab, open the Logs sub-tab and toggle the “Explain With AI” button. Wait for the logs to generate if they haven’t been created yet, then read the summary for a high-level explanation of the failure. AI Logs provide:
- Concise explanation of the failure
- Grouped insights for faster scanning
- Suggested root causes
- Recommended fixes or workarounds
Step 5: Inspect Trace for timing and state
Trace provides a detailed timeline of actions, waits, and network activity. In the Last Run Report tab, open the Trace sub-tab, then use the timeline to navigate to the exact point where the step failed. Trace reveals:
- Exact timing of each action
- Network request/response cycles
- Wait conditions and their outcomes
- DOM snapshots at key points
- Console messages and errors
Tip: Use Trace to identify timing issues. If a step fails immediately after a navigation, check if the page fully loaded before the next action.
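For instance, if Trace shows a click firing while the page is still loading, add a wait after the navigation. Expressed as a Playwright-style sketch (an assumption about the underlying engine; in Supatest this corresponds to a wait step after the navigation step, and the URL and locator are hypothetical):

```ts
import { test } from '@playwright/test';

test('let the page settle before the next action', async ({ page }) => {
  await page.goto('https://example.com/dashboard'); // hypothetical URL

  // Wait for network activity to settle instead of clicking
  // immediately after navigation.
  await page.waitForLoadState('networkidle');

  await page.locator('a[href="/reports"]').click(); // hypothetical locator
});
```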
Step 6: Review Run Insights for patterns and AI guidance
When debugging recurring failures or flaky tests, the Insights tab provides comprehensive analytics and AI-powered recommendations:
- Navigate to the Runs tab in your test case
- Click the Insights sub-tab
- Review the analytics dashboard
  - Check the Flakiness Score to understand how reliable your test is (see the sketch at the end of this step)
  - Review the Pass Rate to see the success ratio across recent runs
  - Examine the Run Timeline dots to spot patterns (e.g., alternating passes/fails indicate flakiness)
- Review Top Failing Steps to see which steps fail most frequently
  - Check if one step fails repeatedly (locator issue) or if multiple steps fail (broader problem)
  - Note the percentage of failures for each step to prioritize fixes
- Examine Failure Categories to see the distribution of error types
  - Network errors suggest connectivity or API issues
  - Timeouts indicate performance problems or incorrect wait conditions
  - Assertion failures point to test logic or application behavior changes
- Read the Summary for a high-level explanation of why the test is failing
- Review Root Causes to understand underlying issues beyond surface errors
- Follow Recommendations for specific, actionable steps to fix failures
- Check Identification Patterns to see if failures are consistent or varied
Tip: Use the Regenerate button after making fixes to get updated insights and validate that your changes improved test reliability.
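To make the Flakiness Score and Pass Rate concrete, here is a minimal sketch of how such metrics could be derived from a run history. This is purely illustrative and not Supatest’s actual formula:

```ts
type RunResult = 'pass' | 'fail';

// Pass rate: share of recent runs that passed.
function passRate(runs: RunResult[]): number {
  if (runs.length === 0) return 0;
  return runs.filter((r) => r === 'pass').length / runs.length;
}

// A naive flakiness signal: how often consecutive runs flip between
// pass and fail. Alternating results score close to 1.
function flakiness(runs: RunResult[]): number {
  if (runs.length < 2) return 0;
  let flips = 0;
  for (let i = 1; i < runs.length; i++) {
    if (runs[i] !== runs[i - 1]) flips++;
  }
  return flips / (runs.length - 1);
}

// Example: an alternating history is highly flaky despite a 50% pass rate.
const history: RunResult[] = ['pass', 'fail', 'pass', 'fail', 'pass', 'fail'];
console.log(passRate(history));  // 0.5
console.log(flakiness(history)); // 1
```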
Step 7: Apply a fix
Choose the appropriate solution based on the root cause you’ve identified.
Fixing timeout issues
When elements take too long to appear, increase the step-level timeout in test case settings. The default is 10 seconds, and this timeout is the maximum time each step will wait during execution. Increase it to accommodate slower operations or elements that take longer to appear. Add explicit waits by replacing fixed delays with state-based waits: use “Wait for Element” steps before interactions, waiting for specific conditions like visible, attached, or text present.
Example: If a step times out waiting for an element and the screenshot shows the element is present or appears slowly, the locator is correct but the element takes time to appear. Add a “Wait for Timeout” step before the interaction, or increase the step-level timeout.
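In Playwright-style terms (an assumption about the underlying engine; in Supatest you configure this through step settings and wait steps rather than code, and the URL and selector below are hypothetical), a state-based wait before an interaction looks roughly like this:

```ts
import { test } from '@playwright/test';

test('wait for the element state, not a fixed delay', async ({ page }) => {
  await page.goto('https://example.com/checkout'); // hypothetical URL

  const submit = page.locator('button.submit-btn');

  // Equivalent of a "Wait for Element" step: wait until the element is
  // visible, with a timeout raised above the 10-second default.
  await submit.waitFor({ state: 'visible', timeout: 30000 });

  await submit.click();
});
```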
Fixing locator issues
The best way to handle locator changes and flakiness is to enable Auto-healing in your test case settings. When enabled, auto-healing automatically recovers from locator failures at runtime. Here’s how it works: when a step fails due to a locator issue, the system automatically tries all fallback locators that were captured for that step; this happens regardless of whether auto-healing is enabled. If all fallback locators also fail, auto-healing kicks in: AI takes over and attempts to execute the action by analyzing the page, finding the correct element, and performing the required interaction.
For dynamic or flaky elements: If you have steps with dynamic locators that change frequently, you have two options. Either keep the step as-is with auto-healing enabled, or replace it with an AI Action step. AI Action handles dynamic and flaky elements more effectively by adapting to UI changes and finding elements even when attributes change.
If auto-healing and AI Action still fail: When auto-healing is enabled but the step continues to fail, or when AI Action is also flaky, use the Recorder to update the step with the latest locator, or use the Element Picker to select the element again and generate a fresh locator. This ensures you have the most current selector for the element.
For strict-mode violations: When a locator resolves to multiple elements, make the selector more specific by combining attributes, using parent-child relationships, or using the Element Picker to generate a unique selector.
Example: If button.submit-btn fails repeatedly even with auto-healing enabled, use the Element Picker to generate a new selector, or replace the step with AI Action, which can handle dynamic elements automatically.
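As a rough illustration of tightening a selector that resolves to multiple elements (the selectors are hypothetical; in Supatest the Element Picker generates a unique selector for you):

```ts
import { test } from '@playwright/test';

test('disambiguate a selector that matches multiple elements', async ({ page }) => {
  await page.goto('https://example.com/checkout'); // hypothetical URL

  // Too broad: matches every submit button on the page and triggers a
  // strict-mode violation ("resolved to 3 elements").
  // await page.locator('button.submit-btn').click();

  // More specific: scope to a parent element and combine attributes.
  await page
    .locator('form#payment-form button.submit-btn[type="submit"]')
    .click();
});
```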
Fixing test design issues
Add missing prerequisites by ensuring login or setup steps are present, verifying environment variables are set correctly, and adding steps to navigate to the correct starting state. Fix test data by verifying environment variables resolve correctly, checking that expressions evaluate as expected, and ensuring test data matches application requirements.
Example: If a test fails because it assumes a logged-in state, add a login snippet or verify authentication before proceeding.
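A minimal Playwright-style sketch of the authentication check from the example (the URL, selectors, and environment variable names are hypothetical; in Supatest this would be a login snippet plus a verification step):

```ts
import { test, expect } from '@playwright/test';

test('verify authentication before the main flow', async ({ page }) => {
  await page.goto('https://example.com/dashboard'); // hypothetical URL

  // If the app redirected to the login page, run the login prerequisite.
  if (page.url().includes('/login')) {
    await page.locator('#email').fill(process.env.TEST_USER ?? '');
    await page.locator('#password').fill(process.env.TEST_PASSWORD ?? '');
    await page.locator('button[type="submit"]').click();
  }

  // Assert the logged-in state before continuing with the actual test.
  await expect(page.locator('[data-testid="account-menu"]')).toBeVisible();
});
```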
Step 8: Validate the fix
Test your changes incrementally. Start with Run Single Step to execute just the fixed step and verify the change works. Then use Run Until Here to run from the failing step onward and test the fix in context. Finally, use Run All to execute the complete test case and ensure no regressions. The editor provides powerful run tools for iterative debugging:
- Run Single Step: Hover over a step and click the “Run Single Step” button (play icon) to execute only that step while preserving browser state. This is the fastest way to test a fix without running the entire test, ideal when you’ve made a small change and want immediate feedback.
- Run Until Here: Hover over a step and click “Run Until Here” (fast-forward icon) to execute from that step to the end. This is useful when you’ve fixed an early step and want to test the rest, saving time by skipping steps that already passed.
- Run All: Click “Run All” to start a fresh browser session and execute all steps. This internally creates a new browser session and should be used when state issues are suspected or you need a clean environment.
Debugging workflow summary
Here’s a quick reference checklist for debugging:
- ✅ Video: Watch the execution to understand what happened
- ✅ Screenshot: Check the UI state at the failure point
- ✅ Raw Logs: Find the exact error message and technical details (available immediately)
- ✅ AI Logs: Generate and read the summary if Raw Logs are unclear (on-demand)
- ✅ Trace: Analyze timing, network, and action sequence
- ✅ Run Insights: Review analytics, failure patterns, and AI recommendations for recurring issues
- ✅ Fix: Apply the appropriate solution based on the failure pattern
- ✅ Validate: Use Run Single Step, then Run Until Here, then Run All
Troubleshooting
Video not available
- Ensure the test run completed (not cancelled)
- Check if video upload completed successfully
- Wait a few moments for video processing
- Re-run the test if video is missing
Screenshots missing
- Verify the step actually executed
- Check if the run completed successfully
- Ensure screenshot capture is enabled
- Re-run the test to generate screenshots
Logs empty or incomplete
- Wait for the run to complete before checking logs
- Ensure the step executed (not skipped)
- Check if there were any execution errors
- Re-run the test to regenerate logs
Trace not loading
- Verify the test run completed
- Check browser console for errors
- Ensure trace data was captured
- Try refreshing the page
Run Single Step not working
- Ensure live preview is active (green pulse indicator)
- Check if browser session is connected
- Use Run All to start a fresh browser session if needed
- Verify the step is not disabled or read-only
Related
- Execution: Understanding test execution and run lifecycle
- Editor: Using the editor for building and debugging tests
- Test Management: Managing test cases and reviewing runs
- Browser State Storage: Maintaining state between runs

