Overview
A test case is the core unit of testing in Supatest. Each test case contains a sequence of steps that simulate user interactions and verify the expected outcomes.
Creating Test Cases
Quick Create
- Click Create → New Test Case
- Enter a descriptive title
- Select destination folder
- Add tags (optional)
- Click Create
From a Folder
- Right-click on a folder
- Select New Test Case
- Test is created in that folder
By Duplication
- Right-click existing test
- Select Duplicate
- Copy is created in same folder
- Rename and modify as needed
Test Case Configuration
Basic Properties
| Property | Description | Example |
|---|---|---|
| Title | Descriptive name | “User can add item to cart” |
| Folder | Location in Test Explorer | /E-commerce/Cart |
| Tags | Categories for filtering | smoke, cart, p0 |
Test Settings
Access via the Settings tab or gear icon:
| Setting | Options | Purpose |
|---|---|---|
| Browser | Chromium, Firefox, WebKit | Which browser to use |
| Viewport | Desktop, Tablet, Mobile, Custom | Screen size |
| Timeout | 5s - 5min | Maximum step duration |
| Auto-Healing | On/Off | AI-powered locator repair |
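These settings map onto familiar browser-automation concepts. As a point of reference only, here is roughly how the same knobs look in a plain Playwright config; this is a generic sketch, not Supatest’s internal configuration format, and the project names are illustrative.

```typescript
// playwright.config.ts — generic Playwright sketch of comparable settings.
// Illustrative only; Supatest manages these through its Settings tab.
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  // Maximum duration, analogous to the Timeout setting
  // (Playwright's is per test rather than per step).
  timeout: 60_000,
  projects: [
    // Browser + viewport combinations, analogous to the Browser and Viewport settings.
    { name: 'chromium-desktop', use: { ...devices['Desktop Chrome'] } },
    { name: 'webkit-mobile', use: { ...devices['iPhone 13'] } },
    { name: 'firefox-custom', use: { browserName: 'firefox', viewport: { width: 1440, height: 900 } } },
  ],
});
```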
Test Case Structure
Steps
Tests are built from steps. Each step represents a single action or verification:
Test: User can checkout
├── Step 1: Navigate to /products
├── Step 2: Click "Add to Cart" button
├── Step 3: Click cart icon
├── Step 4: Click "Checkout"
├── Step 5: Fill shipping form
├── Step 6: Click "Place Order"
└── Step 7: Verify "Order Confirmed" text
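The Code tab generates runnable code from steps like these. As a rough sketch only, the checkout example above might correspond to Playwright-style TypeScript along the following lines; the cart-icon and shipping-form locators are assumptions for illustration, and the actual generated code will differ.

```typescript
import { test, expect } from '@playwright/test';

// Rough Playwright-style sketch of the "User can checkout" steps above.
// Locators for the cart icon and shipping form are assumed for illustration.
test('User can checkout', async ({ page }) => {
  await page.goto('/products');                                     // Step 1: Navigate to /products
  await page.getByRole('button', { name: 'Add to Cart' }).click();  // Step 2: Click "Add to Cart"
  await page.getByLabel('Cart').click();                            // Step 3: Click cart icon
  await page.getByRole('button', { name: 'Checkout' }).click();     // Step 4: Click "Checkout"
  await page.getByLabel('Shipping address').fill('221B Baker St');  // Step 5: Fill shipping form
  await page.getByRole('button', { name: 'Place Order' }).click();  // Step 6: Click "Place Order"
  await expect(page.getByText('Order Confirmed')).toBeVisible();    // Step 7: Verify "Order Confirmed"
});
```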
Step Types
| Category | Examples |
|---|---|
| Interactions | Click, Fill, Select, Hover |
| Navigation | Navigate, Go Back, Reload |
| Verification | Verify Text, Verify Visibility, Visual Assertion |
| Utilities | Wait, Extract Value, Upload File |
| Technical | API Request, Run Python, Check Email |
See the Steps documentation for a complete reference.
Managing Test Cases
Edit
- Click on test in Test Explorer
- Opens in Editor
- Add, modify, or remove steps
- Changes auto-save
Rename
- Click test title in Editor
- Edit the name
- Press Enter to save
Or:
- Right-click in Test Explorer
- Select Rename
Move
Drag and Drop:
- Click and hold the test
- Drag to destination folder
- Release to drop
Delete
- Right-click the test
- Select Delete
- Confirm deletion
Deleted tests cannot be recovered. Export important tests before deleting.
Convert to Snippet
- Open test in Editor
- Select steps to convert
- Click Save as Snippet
- Name the snippet
- Steps become reusable
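In code terms, a snippet plays roughly the same role as a helper function extracted from a test script: a named, reusable group of steps. A generic Playwright-style sketch of that analogy (illustrative only, not Supatest output; the locators are examples):

```typescript
import type { Page } from '@playwright/test';

// Code analogy for a snippet: a reusable group of steps extracted into a
// helper that several tests can call. Locators here are illustrative.
export async function addFirstProductToCart(page: Page) {
  await page.goto('/products');
  await page.getByRole('button', { name: 'Add to Cart' }).first().click();
  await page.getByLabel('Cart').click();
}
```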
Test Case Details
Primary Tabs
When you open a test case, you’ll see several primary tabs:
| Tab | Content |
|---|---|
| Steps | Build and edit the test in the visual editor |
| Activity | Recent changes, runs, and healing events |
| Last Run Report | All assets and data from the most recent test run |
| Runs | Execution history with analytics and detailed run information |
| Code | Generated test code for the current steps |
Last Run Report
The Last Run Report tab consolidates all execution assets from the most recent test run. This tab contains sub-tabs for different types of run data:
| Sub-tab | Content |
|---|---|
| Video | Full video recording of the test execution |
| Screenshots | Step-by-step screenshots captured during execution |
| Trace | Detailed execution timeline with actions, waits, and network requests |
| Logs | Test execution logs (both AI-generated summaries and raw technical logs) |
| AI Execution | AI execution logs showing agent decisions, AI assertions, and auto-healing actions |
| Inbox | Emails and TOTP codes captured during the test run |
The Last Run Report provides a comprehensive view of a single test execution, making it easy to debug failures or verify successful runs without navigating to individual run details.
Note: The Last Run Report shows data from the most recent run. To view assets from specific historical runs, use the Runs tab and select a particular run.
Runs Tab
The Runs tab provides comprehensive analytics and execution history for the test case:
- Run History: Complete list of all test executions with status, duration, and timestamps
- Analytics: Visual charts and metrics showing:
  - Pass/fail trends over time
  - Average execution duration
  - Flakiness indicators
  - Success rate percentages
  - Recent run patterns
- Run Details: Click any run to view detailed information including all execution assets (video, screenshots, trace, logs, AI execution logs, and inbox items)
Use the Runs tab to:
- Track test reliability over time
- Identify flaky tests
- Analyze performance trends
- Access detailed information for any historical run
Metadata
Each test tracks:
- Created date
- Last modified
- Last run status
- Run count
- Average duration
Naming Best Practices
Good Names
Action-oriented, specific, outcome-focused:
- “User can reset password via email”
- “Search filters products by category”
- “Guest checkout completes successfully”
Poor Names
Vague, technical, or ambiguous:
- “Test 1”
- “Login test”
- “TC_AUTH_001”
Naming Pattern
[Actor] can [action] [context/condition]
Examples:
- “User can login with valid credentials”
- “Admin can delete user account”
- “Guest can browse products without login”
Adding Tags
- Open test settings
- Click Tags field
- Type tag name
- Press Enter to add
Tag Strategies
By Priority:
p0 - Critical, run on every deploy
p1 - High, run daily
p2 - Medium, run weekly
By Type:
smoke - Quick health checks
regression - Full coverage
e2e - End-to-end flows
By Feature:
auth - Authentication
payments - Payment processing
search - Search functionality
Importing Tests
From File
- Click Create → Import Test Case
- Select file (JSON format)
- Choose destination folder
- Review and confirm
From Clipboard
- Copy the test JSON
- Use the import function
- Paste the content
- Confirm the import
Exporting Tests
- Right-click test
- Select Export
- Download JSON file
Export includes:
- All steps and configurations
- Tags and metadata
- Environment variable references (not values)
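As a rough illustration of what an export like this might carry, here is a hypothetical TypeScript shape; every field name below is invented for this sketch and does not reflect Supatest’s actual JSON schema.

```typescript
// Hypothetical shape of an exported test case. All field names are invented
// for illustration and do not match Supatest's actual export schema.
interface ExportedTestCase {
  title: string;                  // descriptive name, e.g. "User can checkout"
  tags: string[];                 // e.g. ["smoke", "cart", "p0"]
  steps: Array<{
    type: string;                 // e.g. "click", "fill", "verifyText"
    target?: string;              // locator or URL
    value?: string;               // input value or expected text
  }>;
  settings: {
    browser: string;              // Chromium, Firefox, or WebKit
    viewport: string;             // Desktop, Tablet, Mobile, or Custom
    timeoutMs: number;            // maximum step duration
  };
  envVarRefs: string[];           // variable names only — values are not exported
}
```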
Test Case Lifecycle
Create → Build Steps → Run → Debug → Refine → Maintain
Steps are built in the Editor, with the Recorder, or through AI Generation; debugging draws on the debugging tools and the Run History.
Troubleshooting
Test Won’t Run
- Check browser selection
- Verify environment is configured
- Ensure steps are valid
Steps Failing
- Review screenshots and logs
- Check locators are correct
- Verify timing with waits
- Check environment variables
Test Is Flaky
- Add explicit waits
- Use more stable locators
- Check for race conditions
- Review parallel execution impact
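In code terms, the first three points usually come down to waiting for an observable condition rather than a fixed delay, and preferring user-facing locators over brittle selectors. A generic Playwright-style sketch of both ideas (not Supatest-specific; the URL and labels are illustrative):

```typescript
import { test, expect } from '@playwright/test';

// Generic illustration of flakiness fixes: stable, user-facing locators and
// condition-based waits instead of fixed sleeps. URL and labels are examples.
test('stable waits and locators', async ({ page }) => {
  await page.goto('/dashboard');

  // Prefer role/text locators over brittle CSS or XPath chains.
  const saveButton = page.getByRole('button', { name: 'Save' });

  // Wait for an observable condition, not a hard-coded timeout.
  await expect(saveButton).toBeEnabled();
  await saveButton.click();

  // Web-first assertions retry until the condition holds or the timeout expires.
  await expect(page.getByText('Changes saved')).toBeVisible();
});
```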