Test Cases
Automated validation scripts for web application functionality
Overview
Test Cases are automated validation scripts that execute specific user workflows and verify expected application behavior. Each test case consists of a sequence of programmatic actions (clicks, inputs, navigations) combined with assertions that validate application responses and state changes.
Test cases operate within the Supatest execution engine, utilizing AI-powered element detection and auto-healing capabilities to maintain test stability across application changes. They support parameterization through environment variables, enabling execution across multiple environments and datasets.
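As a rough mental model, a test case can be read as an ordered list of steps, each an action or an assertion, with environment variables filling in environment-specific values. The TypeScript sketch below is illustrative only; the step shape, selectors, and {{NAME}} placeholder syntax are assumptions rather than Supatest's actual format.

```typescript
// Illustrative sketch, not Supatest's actual step format or API.
// A test case is an ordered sequence of actions plus assertions,
// parameterized by environment variables (shown here as {{NAME}} placeholders).
interface TestStep {
  action: "navigate" | "click" | "input" | "assert";
  target?: string;   // CSS selector or URL
  value?: string;    // text to type, or expected content for an assertion
}

const loginTest: TestStep[] = [
  { action: "navigate", target: "{{BASE_URL}}/login" },                 // environment-driven URL
  { action: "input", target: "#email", value: "{{TEST_USER}}" },
  { action: "input", target: "#password", value: "{{TEST_PASSWORD}}" },
  { action: "click", target: "button[type=submit]" },
  { action: "assert", target: ".dashboard-header", value: "Welcome" },  // verify the resulting state
];
```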
Creating Test Cases
Test cases can be created through multiple approaches:
- Manual Creation: Build tests using the visual test editor interface
- Browser Recording: Capture interactions using the Supatest recorder extension
- AI Generation: Generate tests from natural language goals
To create a new test case:
- Navigate to your test suite
- Select Add New Test Case
- Configure the test parameters (a hypothetical configuration sketch follows this list):
- Title: Use the [Feature]_[Scenario]_[ExpectedResult] naming convention
- Tags: Add tags for filtering test cases
- Timeout: Set maximum step execution duration
- Auto-healing: Enable AI-powered selector maintenance
- Starting Point: Specify initial URL or prerequisite snippet
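To see the options side by side, here is a hypothetical configuration object mirroring the parameters above; the field names and values are illustrative, not Supatest's actual schema.

```typescript
// Hypothetical shape of a new test case configuration.
// Field names and values are examples only, not Supatest's schema.
const newTestCase = {
  title: "Checkout_EmptyCart_ShowsWarning",  // [Feature]_[Scenario]_[ExpectedResult]
  tags: ["checkout", "regression"],          // used to filter test cases
  stepTimeoutMs: 30_000,                     // maximum step execution duration
  autoHealing: true,                         // AI-powered selector maintenance
  startingPoint: "{{BASE_URL}}/cart",        // initial URL or prerequisite snippet
};
```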
Test Case Interface
The test case interface organizes test development and analysis into specialized tabs, each covering a different aspect of the workflow.
Steps Tab
Primary workspace for test construction, displaying sequential actions with parameter configuration and execution controls.
Screenshots Tab
Visual documentation grid showing execution screenshots with step correlation and PDF export capabilities.
Video Playback
Complete session replay functionality for visual verification and failure analysis during test execution.
Run History
Execution analytics tracking success rates, performance trends, and historical comparison data.
Logs
Multi-layered execution information including technical logs and AI-enhanced readable formats for debugging.
Best Practices
Test Architecture
Naming Conventions: Follow the [Feature]_[Scenario]_[ExpectedResult] format for consistent identification, for example Login_ValidCredentials_RedirectsToDashboard or Checkout_ExpiredCard_ShowsError.
Test Structure Design
- Implement logical step grouping for maintainability
- Optimize step count for execution efficiency
- Design clear progression through application workflows
- Create modular components for reusability across tests (see the sketch after this list)
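The following sketch shows the modularity idea in TypeScript-style pseudocode: a common flow (here, a hypothetical login sequence) is defined once and reused by other tests. The step shape, selectors, and helper name are illustrative assumptions, not Supatest constructs.

```typescript
// Sketch of a modular, reusable step group. Defining a shared flow once
// lets many tests reuse it instead of duplicating the same steps.
type Step = { action: string; target?: string; value?: string };

function loginSteps(user: string, password: string): Step[] {
  return [
    { action: "navigate", target: "{{BASE_URL}}/login" },
    { action: "input", target: "#email", value: user },
    { action: "input", target: "#password", value: password },
    { action: "click", target: "button[type=submit]" },
  ];
}

// Any test can prepend the shared flow to its own steps:
const checkoutTest: Step[] = [
  ...loginSteps("{{TEST_USER}}", "{{TEST_PASSWORD}}"),
  { action: "click", target: '[data-testid="begin-checkout"]' },
  { action: "assert", target: ".order-summary" },
];
```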
State Management: Ensure tests are self-contained, with proper initialization and cleanup (see the sketch after this list):
- Implement automated state reset mechanisms
- Prevent test interdependencies
- Maintain execution reliability regardless of order
- Design proper resource isolation
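A minimal sketch of a self-contained test with its own setup and cleanup follows. The seedUser and removeUser helpers are hypothetical stand-ins for whatever state-reset mechanism your application exposes (an internal API, a database script, and so on).

```typescript
// Hypothetical helpers: replace with your application's state-reset mechanism.
async function seedUser(name: string): Promise<void> {
  // e.g. call an internal API to create a fresh, known-good account
}

async function removeUser(name: string): Promise<void> {
  // e.g. delete the account and any data the test created
}

async function runProfileUpdateTest(): Promise<void> {
  const user = `qa-${Date.now()}`;   // unique data prevents test interdependencies
  await seedUser(user);              // initialization: never rely on leftover state
  try {
    // ... execute steps: log in as `user`, edit the profile, assert the change ...
  } finally {
    await removeUser(user);          // cleanup runs even when a step fails,
  }                                  // so later tests never inherit this test's state
}
```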
Data and Environment Management
Environment Variables: Use parameterization for dynamic configuration (see the sketch after this list):
- Handle sensitive data through encrypted variables
- Configure environment-specific settings
- Enable execution across multiple testing environments
- Implement secure credential management
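As a sketch of environment-driven configuration, the example below reads settings from environment variables at run time. The variable names (BASE_URL, TEST_USER, TEST_PASSWORD) are assumptions; use whatever your project defines, and keep secrets in encrypted variables rather than in test code.

```typescript
// Fail fast if a required environment variable is missing.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing environment variable: ${name}`);
  return value;
}

const config = {
  baseUrl: requireEnv("BASE_URL"),        // differs per environment (dev/staging/prod)
  username: requireEnv("TEST_USER"),      // injected at run time, never hard-coded
  password: requireEnv("TEST_PASSWORD"),  // stored as an encrypted variable in the platform/CI
};
```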
Test Independence: Design for autonomous test execution:
- Create self-contained test cases
- Implement automated cleanup procedures
- Avoid dependencies on external test state
- Ensure consistent execution across environments
Execution Strategy
Element Selection: Leverage Supatest's AI-powered capabilities (see the selector sketch after this list):
- Implement stable selector strategies
- Utilize auto-healing for long-term maintenance
- Design fallback selector mechanisms
- Adapt to application changes automatically
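The sketch below illustrates a stable-selector strategy with explicit fallbacks. Dedicated test attributes (such as data-testid) tend to survive refactors better than position- or style-based selectors; the attribute and class names here are illustrative, not taken from any particular application.

```typescript
// Ordered from most to least change-resistant.
const submitButtonSelectors = [
  '[data-testid="checkout-submit"]',   // preferred: dedicated test hook
  'button[name="submit-order"]',       // fallback: semantic attribute
  "form.checkout button.primary",      // last resort: structural, most likely to break
];

// A runner (or an auto-healing engine) can try each selector in order
// until one matches, instead of failing on the first miss.
```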
Performance Optimization: Configure execution parameters for optimal results (see the wait-strategy sketch after this list):
- Set appropriate timeout settings
- Implement efficient wait strategies
- Balance test reliability with execution speed
- Avoid unnecessary delays in test flow
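To contrast an efficient wait strategy with a fixed delay, here is a generic polling helper; pollUntil and the elementIsVisible call in the usage comment are illustrative, not Supatest APIs.

```typescript
// Fixed delay: always waits the full duration, fast or slow.
const fixedDelay = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Condition-based wait: proceeds as soon as the application is ready,
// but still bounded by a timeout for reliability.
async function pollUntil(
  condition: () => Promise<boolean>,
  timeoutMs = 10_000,   // generous upper bound for slow environments
  intervalMs = 250,     // frequent polling keeps fast runs fast
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await condition()) return;
    await fixedDelay(intervalMs);
  }
  throw new Error("Timed out waiting for condition");
}

// Prefer: await pollUntil(() => elementIsVisible(".order-confirmation"));
// Avoid:  await fixedDelay(5000);  // wastes time on fast runs, flaky on slow ones
```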
Related Documentation
- Test Editor Guide - Comprehensive guide for creating and editing test steps
- Test Recorder Guide - Browser interaction recording and test generation
- Test Execution Guide - Test execution management and analysis