Overview

Test Cases are automated validation scripts that execute specific user workflows and verify expected application behavior. Each test case consists of a sequence of programmatic actions (clicks, inputs, navigations) combined with assertions that validate application responses and state changes.

Test cases operate within the Supatest execution engine, utilizing AI-powered element detection and auto-healing capabilities to maintain test stability across application changes. They support parameterization through environment variables, enabling execution across multiple environments and datasets.

Creating Test Cases

Test cases can be created through multiple approaches:

  • Manual Creation: Build tests using the visual test editor interface
  • Browser Recording: Capture interactions using the Supatest recorder extension
  • AI Generation: Generate tests from natural language goals

To create a new test case:

  1. Navigate to your test suite
  2. Select Add New Test Case
  3. Configure the test parameters:
    • Title: Use the [Feature]_[Scenario]_[ExpectedResult] naming convention
    • Tags: Labels for filtering and grouping test cases
    • Timeout: Set the maximum duration allowed for each step
    • Auto-healing: Enable AI-powered selector maintenance
    • Starting Point: Specify the initial URL or a prerequisite snippet

Test Case Interface

The test case interface provides comprehensive management through specialized tabs for different aspects of test development and analysis.

Steps Tab

Primary workspace for test construction, displaying sequential actions with parameter configuration and execution controls.

Screenshots Tab

Visual documentation grid showing execution screenshots with step correlation and PDF export capabilities.

Video Playback

Complete session replay of test execution for visual verification and failure analysis.

Run History

Execution analytics tracking success rates, performance trends, and historical comparison data.

Logs

Multi-layered execution information including technical logs and AI-enhanced readable formats for debugging.

Best Practices

Test Architecture

Naming Conventions

Follow the structured format for consistent identification:

[Feature]_[Scenario]_[ExpectedResult]
Example: Authentication_InvalidCredentials_DisplaysErrorMessage

Test Structure Design

  1. Group related steps logically for easier maintenance
  2. Keep step counts lean to reduce execution time
  3. Design a clear progression through the application workflow
  4. Build modular components (snippets) that can be reused across tests, as in the sketch below
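
Supatest snippets are assembled in the visual editor rather than written as code, but the underlying idea is the same as extracting a shared flow into a reusable helper. A minimal Playwright-style sketch of the principle (the loginAs helper, routes, and selectors are illustrative assumptions, not Supatest APIs):

import { Page, test, expect } from '@playwright/test';

// Hypothetical reusable flow: perform login once, share it across tests.
async function loginAs(page: Page, email: string, password: string) {
  await page.goto('/login');                                   // assumes a configured baseURL
  await page.getByLabel('Email').fill(email);
  await page.getByLabel('Password').fill(password);
  await page.getByRole('button', { name: 'Sign in' }).click();
  await expect(page.getByTestId('dashboard')).toBeVisible();
}

test('Billing_OpenInvoices_ShowsInvoiceList', async ({ page }) => {
  await loginAs(page, 'qa@example.com', 'secret');             // shared prerequisite flow
  await page.getByRole('link', { name: 'Invoices' }).click();
  await expect(page.getByRole('table')).toBeVisible();
});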

State Management

Ensure tests are self-contained, with proper initialization and cleanup (see the sketch after this list):

  1. Implement automated state reset mechanisms
  2. Prevent test interdependencies
  3. Maintain execution reliability regardless of order
  4. Design proper resource isolation
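
Supatest configures initialization and cleanup through the test editor; expressed as code, the same idea is per-test setup and teardown hooks. A rough Playwright-style sketch, assuming a hypothetical /api/test-data reset endpoint:

import { test, request } from '@playwright/test';

// Reset application state before every test so no run depends on leftovers.
test.beforeEach(async () => {
  const api = await request.newContext({ baseURL: 'https://staging.example.com' }); // illustrative URL
  await api.post('/api/test-data/reset');           // hypothetical reset endpoint
  await api.dispose();
});

// Remove anything the test created so the next run starts clean.
test.afterEach(async () => {
  const api = await request.newContext({ baseURL: 'https://staging.example.com' });
  await api.delete('/api/test-data/session');       // hypothetical cleanup endpoint
  await api.dispose();
});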

Data and Environment Management

Environment Variables

Use parameterization for dynamic configuration (see the sketch after this list):

  1. Handle sensitive data through encrypted variables
  2. Configure environment-specific settings
  3. Enable execution across multiple testing environments
  4. Implement secure credential management
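
Environment variables are defined in the Supatest UI; the code equivalent is reading configuration from the process environment instead of hard-coding it in test steps. A minimal sketch (the variable names BASE_URL, TEST_USER_EMAIL, and TEST_USER_PASSWORD are illustrative):

import { test, expect } from '@playwright/test';

// Environment-specific values come from the environment, never from the test body.
const BASE_URL = process.env.BASE_URL ?? 'https://staging.example.com';
const USER_EMAIL = process.env.TEST_USER_EMAIL ?? 'qa@example.com';
const USER_PASSWORD = process.env.TEST_USER_PASSWORD ?? ''; // injected securely at runtime

test('Authentication_ValidCredentials_LoadsDashboard', async ({ page }) => {
  await page.goto(`${BASE_URL}/login`);
  await page.getByLabel('Email').fill(USER_EMAIL);
  await page.getByLabel('Password').fill(USER_PASSWORD);
  await page.getByRole('button', { name: 'Sign in' }).click();
  await expect(page).toHaveURL(/dashboard/);
});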

Test Independence

Design tests for autonomous execution (see the sketch after this list):

  1. Create self-contained test cases
  2. Implement automated cleanup procedures
  3. Avoid dependencies on external test state
  4. Ensure consistent execution across environments
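
One common way to keep tests independent is for each test to create the data it needs under a unique identifier, rather than reusing records left behind by earlier tests. A hedged Playwright-style sketch; the /api/users endpoint and field names are hypothetical:

import { test, request } from '@playwright/test';

test('Profile_NewUser_CanLogIn', async ({ page }) => {
  // Create a dedicated user for this run so no other test can interfere.
  const api = await request.newContext({ baseURL: process.env.BASE_URL });
  const email = `qa+${Date.now()}@example.com`;                // unique per execution
  await api.post('/api/users', { data: { email, password: 'Temp#1234' } }); // hypothetical endpoint

  await page.goto('/login');
  await page.getByLabel('Email').fill(email);
  // ...remaining steps operate only on data this test created...

  await api.delete(`/api/users/${email}`);                     // clean up after ourselves
  await api.dispose();
});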

Execution Strategy

Element Selection

Leverage Supatest’s AI-powered capabilities (see the sketch after this list):

  1. Implement stable selector strategies
  2. Utilize auto-healing for long-term maintenance
  3. Design fallback selector mechanisms
  4. Adapt to application changes automatically
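
Auto-healing manages selectors for you, but the same principle applies when choosing locators manually: prefer attributes that survive UI refactors, such as roles, labels, and dedicated test IDs, over positional CSS or XPath. An illustrative Playwright-style sketch (routes and attributes are assumptions):

import { test, expect } from '@playwright/test';

test('Checkout_EmptyCart_ShowsPlaceholder', async ({ page }) => {
  await page.goto('/cart');

  // Brittle: breaks whenever layout or styling changes.
  // await page.locator('div.main > div:nth-child(3) > button').click();

  // Stable: tied to semantics and dedicated test hooks rather than structure.
  await page.getByRole('button', { name: 'Checkout' }).click();
  await expect(page.getByTestId('empty-cart-message')).toBeVisible();
});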

Performance Optimization

Configure execution parameters for optimal results (see the sketch after this list):

  1. Set appropriate timeout values
  2. Implement efficient wait strategies
  3. Balance test reliability with execution speed
  4. Avoid unnecessary delays in test flow
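
Translated into code, this guidance means condition-based waits with bounded timeouts instead of fixed sleeps. A small Playwright-style sketch (routes and text are illustrative):

import { test, expect } from '@playwright/test';

test('Reports_GenerateExport_ShowsReadyState', async ({ page }) => {
  test.setTimeout(60_000);                         // cap the whole test, not each pause

  await page.goto('/reports');
  await page.getByRole('button', { name: 'Generate export' }).click();

  // Avoid: await page.waitForTimeout(10_000);     // fixed delay, slow and still flaky
  // Prefer: wait for the observable condition, bounded by a timeout.
  await expect(page.getByText('Export ready')).toBeVisible({ timeout: 30_000 });
});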