
Overview

Test cases are the core unit of testing in Supatest. Each test case contains a sequence of steps that simulate user interactions and verify expected outcomes.

Creating Test Cases

Quick Create

  1. Click Create → New Test Case
  2. Enter a descriptive title
  3. Select destination folder
  4. Add tags (optional)
  5. Click Create

From Context Menu

  1. Right-click on a folder
  2. Select New Test Case
  3. Test is created in that folder

By Duplication

  1. Right-click existing test
  2. Select Duplicate
  3. Copy is created in same folder
  4. Rename and modify as needed

Test Case Configuration

Basic Properties

| Property | Description | Example |
|----------|-------------|---------|
| Title | Descriptive name | “User can add item to cart” |
| Folder | Location in Test Explorer | /E-commerce/Cart |
| Tags | Categories for filtering | smoke, cart, p0 |

Test Settings

Access via the Settings tab or gear icon:

| Setting | Options | Purpose |
|---------|---------|---------|
| Browser | Chromium, Firefox, WebKit | Which browser to use |
| Viewport | Desktop, Tablet, Mobile, Custom | Screen size |
| Timeout | 5s - 5min | Maximum step duration |
| Auto-Healing | On/Off | AI-powered locator repair |
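
Supatest manages these settings in its UI, but since it drives the same Chromium, Firefox, and WebKit engines that Playwright uses, a rough analogy can help. The sketch below is an illustration only, assuming a Playwright-style configuration; it is not Supatest’s actual implementation, and Auto-Healing has no direct Playwright counterpart.

```typescript
// Illustrative sketch only: how the settings above map, by analogy, onto a
// Playwright-style configuration. This is an assumption for explanation,
// not Supatest's generated config.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  timeout: 30_000, // "Timeout": maximum duration before a step/test fails
  use: {
    browserName: 'chromium',                // "Browser": Chromium, Firefox, or WebKit
    viewport: { width: 1280, height: 720 }, // "Viewport": a Desktop-sized screen
    // "Auto-Healing" is a Supatest feature (AI-powered locator repair)
    // with no Playwright equivalent.
  },
});
```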

Test Case Structure

Steps

Tests are built from steps. Each step represents an action:
Test: User can checkout
├── Step 1: Navigate to /products
├── Step 2: Click "Add to Cart" button
├── Step 3: Click cart icon
├── Step 4: Click "Checkout"
├── Step 5: Fill shipping form
├── Step 6: Click "Place Order"
└── Step 7: Verify "Order Confirmed" text
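
The Code tab (described under Primary Tabs below) exposes generated code for steps like these. As an illustration only, and assuming a Playwright-style output, the checkout test above might translate into something like the following; the URL, selectors, and shipping field are hypothetical.

```typescript
// Illustrative sketch, not Supatest's actual generated code.
// URL, selectors, and the shipping field are hypothetical.
import { test, expect } from '@playwright/test';

test('User can checkout', async ({ page }) => {
  await page.goto('https://shop.example.com/products');              // Step 1: Navigate to /products
  await page.getByRole('button', { name: 'Add to Cart' }).click();   // Step 2: Click "Add to Cart"
  await page.getByLabel('Cart').click();                             // Step 3: Click cart icon
  await page.getByRole('button', { name: 'Checkout' }).click();      // Step 4: Click "Checkout"
  await page.getByLabel('Shipping address').fill('123 Example St');  // Step 5: Fill shipping form
  await page.getByRole('button', { name: 'Place Order' }).click();   // Step 6: Click "Place Order"
  await expect(page.getByText('Order Confirmed')).toBeVisible();     // Step 7: Verify confirmation text
});
```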

Step Types

| Category | Examples |
|----------|----------|
| Interactions | Click, Fill, Select, Hover |
| Navigation | Navigate, Go Back, Reload |
| Verification | Verify Text, Verify Visibility, Visual Assertion |
| Utilities | Wait, Extract Value, Upload File |
| Technical | API Request, Run Python, Check Email |

See the Steps documentation for a complete reference.

Managing Test Cases

Edit

  1. Click on test in Test Explorer
  2. Opens in Editor
  3. Add, modify, or remove steps
  4. Changes auto-save

Rename

  1. Click test title in Editor
  2. Edit the name
  3. Press Enter to save
Or:
  1. Right-click in Test Explorer
  2. Select Rename

Move

Drag and Drop:
  1. Click and hold the test
  2. Drag to destination folder
  3. Release to drop

Delete

  1. Right-click the test
  2. Select Delete
  3. Confirm deletion
Deleted tests cannot be recovered. Export important tests before deleting.

Convert to Snippet

  1. Open test in Editor
  2. Select steps to convert
  3. Click Save as Snippet
  4. Name the snippet
  5. Steps become reusable

Test Case Details

Primary Tabs

When you open a test case, you’ll see several primary tabs:

| Tab | Content |
|-----|---------|
| Steps | Build and edit the test in the visual editor |
| Activity | Recent changes, runs, and healing events |
| Last Run Report | All assets and data from the most recent test run |
| Runs | Execution history with analytics and detailed run information |
| Code | Generated test code for the current steps |

Last Run Report

The Last Run Report tab consolidates all execution assets from the most recent test run. This tab contains sub-tabs for different types of run data:

| Sub-tab | Content |
|---------|---------|
| Video | Full video recording of the test execution |
| Screenshots | Step-by-step screenshots captured during execution |
| Trace | Detailed execution timeline with actions, waits, and network requests |
| Logs | Test execution logs (both AI-generated summaries and raw technical logs) |
| AI Execution | AI execution logs showing agent decisions, AI assertions, and auto-healing actions |
| Inbox | Emails and TOTP codes captured during the test run |

The Last Run Report provides a comprehensive view of a single test execution, making it easy to debug failures or verify successful runs without navigating to individual run details.

Note: The Last Run Report shows data from the most recent run. To view assets from specific historical runs, use the Runs tab and select a particular run.

Runs Tab

The Runs tab provides comprehensive analytics and execution history for the test case:
  • Run History: Complete list of all test executions with status, duration, and timestamps
  • Analytics: Visual charts and metrics showing:
    • Pass/fail trends over time
    • Average execution duration
    • Flakiness indicators
    • Success rate percentages
    • Recent run patterns
  • Run Details: Click any run to view detailed information including all execution assets (video, screenshots, trace, logs, AI execution logs, and inbox items)
Use the Runs tab to:
  • Track test reliability over time
  • Identify flaky tests
  • Analyze performance trends
  • Access detailed information for any historical run

Metadata

Each test tracks:
  • Created date
  • Last modified
  • Last run status
  • Run count
  • Average duration

Naming Best Practices

Good Names

Action-oriented, specific, outcome-focused:
  • “User can reset password via email”
  • “Search filters products by category”
  • “Guest checkout completes successfully”

Poor Names

Vague, technical, or ambiguous:
  • “Test 1”
  • “Login test”
  • “TC_AUTH_001”

Naming Pattern

[Actor] can [action] [context/condition]
Examples:
  • “User can login with valid credentials”
  • “Admin can delete user account”
  • “Guest can browse products without login”

Tags for Test Cases

Adding Tags

  1. Open test settings
  2. Click Tags field
  3. Type tag name
  4. Press Enter to add

Tag Strategies

By Priority:
  • p0 - Critical, run on every deploy
  • p1 - High, run daily
  • p2 - Medium, run weekly
By Type:
  • smoke - Quick health checks
  • regression - Full coverage
  • e2e - End-to-end flows
By Feature:
  • auth - Authentication
  • payments - Payment processing
  • search - Search functionality

Importing Tests

From File

  1. Click Create → Import Test Case
  2. Select file (JSON format)
  3. Choose destination folder
  4. Review and confirm

From Clipboard

  1. Copy test JSON
  2. Use import function
  3. Paste content
  4. Import

Exporting Tests

  1. Right-click test
  2. Select Export
  3. Download JSON file
Export includes:
  • All steps and configurations
  • Tags and metadata
  • Environment variable references (not values)
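
Supatest defines the actual export schema, so treat the shape below as a hypothetical sketch only: a TypeScript type describing what the items listed above could look like on disk. Every field name here is an assumption for illustration.

```typescript
// Hypothetical sketch of an exported test case, derived only from the
// bullet list above. Field names are assumptions, not Supatest's schema.
interface ExportedTestCase {
  title: string;                                  // e.g. "User can checkout"
  tags: string[];                                 // e.g. ["smoke", "cart", "p0"]
  settings: {
    browser: 'chromium' | 'firefox' | 'webkit';
    viewport: string;                             // e.g. "Desktop"
    timeoutMs: number;
    autoHealing: boolean;
  };
  steps: Array<{ type: string; target?: string; value?: string }>;
  envVariableRefs: string[];                      // references only; values are never exported
  metadata: { createdAt: string; lastModifiedAt: string };
}
```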

Test Case Lifecycle

Create → Build Steps → Run → Debug → Refine → Maintain
           ↓                    ↓
        [Editor]           [Debugging]
           ↓                    ↓
        [Recorder]        [Run History]
           ↓
        [AI Generation]

Troubleshooting

Test Won’t Run

  • Check browser selection
  • Verify environment is configured
  • Ensure steps are valid

Steps Failing

  • Review screenshots and logs
  • Check locators are correct
  • Verify timing with waits
  • Check environment variables

Test Is Flaky

  • Add explicit waits
  • Use more stable locators
  • Check for race conditions
  • Review parallel execution impact
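
The first two points translate into waiting on an observable outcome and preferring resilient locators. The sketch below is an illustration in Playwright-style code, not Supatest’s generated output; the page and selectors are hypothetical.

```typescript
// Illustration of "add explicit waits" and "use more stable locators".
// Page URL and selectors are hypothetical; Supatest expresses the same
// ideas through Wait and locator steps in the visual editor.
import { test, expect } from '@playwright/test';

test('cart badge updates after adding an item', async ({ page }) => {
  await page.goto('https://shop.example.com/products');
  await page.getByRole('button', { name: 'Add to Cart' }).click();

  // Explicit wait: assert on the outcome instead of racing the UI.
  await expect(page.getByTestId('cart-count')).toHaveText('1');

  // Stable locator: roles and accessible names outlive brittle CSS chains
  // such as 'div > span.badge:nth-child(2)'.
  await page.getByRole('link', { name: 'Cart' }).click();
});
```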