Overview

Manual test creation gives you complete control over your test cases. The workflow starts with creating a test case and adding scenario notes to outline what you want to test, then moves into the Editor, where the Visual Step Editor and Recorder help you build your steps.

The Manual Creation Workflow

1. Create a new test case

2. Add scenario notes (outline what to test)

3. Open the Editor

4. Build steps using:
   - Visual Step Editor (add steps manually)
   - Recorder (capture interactions automatically)

5. Run and verify

Step 1: Create a Test Case

  1. Navigate to Tests in the sidebar
  2. Click Create → New Test Case
  3. Enter a descriptive title (e.g., “User can complete checkout”)
  4. Select a folder and add tags
  5. Click Create

Step 2: Add Scenario Notes

Before building steps, outline your test flow using Scenario Notes. These are plain-text descriptions of what the test should do.

Why Use Scenario Notes?

  • Mental model: Organize your thoughts before automating
  • Documentation: Keep a human-readable description of the test
  • AI-ready: Scenario notes can be used by the AI Action Agent to automate steps

Example Scenario Notes

1. Navigate to the login page
2. Enter valid email and password
3. Click the login button
4. Verify the dashboard loads
5. Check that the welcome message appears

Step 3: Build Steps in the Editor

With your scenario notes as a guide, use the Editor to create the actual test steps.

Visual Step Editor

The Visual Step Editor lets you add steps one at a time with precise control.

How to use:
  1. Click + to add a new step
  2. Select the step type (Click, Fill, Navigate, etc.)
  3. Configure the step:
    • Use Element Picker to select elements from the live browser
    • Enter values, locators, and assertions
  4. Run the step to verify it works
Best for:
  • Precise locator control
  • Complex assertions and conditions
  • Adding loops, conditionals, and snippets
  • Fine-tuning recorded steps

Recorder

The Recorder captures your browser interactions and converts them to steps automatically.

How to use:
  1. Click Start Recording in the Editor
  2. Interact with your application in the live browser
  3. Watch steps appear as you click, type, and navigate
  4. Click Stop Recording when done
  5. Review and refine the recorded steps
Best for:
  • Long user flows (20+ steps)
  • Exploratory testing
  • Quick prototyping
  • Complex navigation sequences

Combining Both Tools

The most effective approach uses both tools together:
  1. Start with the Recorder for the main flow
    • Capture navigation, clicks, and form fills quickly
  2. Refine with the Visual Step Editor
    • Fix locators that need more stability
    • Add assertions the recorder missed
    • Configure waits for dynamic content
    • Replace hardcoded values with expressions
  3. Add what the Recorder can’t capture
    • Verification steps (assertions)
    • Conditional logic
    • Loops for repeated actions
    • Snippets for reusable flows

Example: Building a Login Test

Scenario Notes:
- Go to login page
- Enter credentials
- Submit form
- Verify dashboard access

Step 1: Record the basic flow
→ Navigate to /login
→ Fill email field
→ Fill password field
→ Click submit

Step 2: Enhance with the Visual Step Editor
→ Add wait for dashboard element
→ Add Verify Text for welcome message
→ Replace hardcoded credentials with {{ env.TEST_EMAIL }}
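
If you prefer to see the result as code, the finished flow maps roughly onto a Playwright-style test. This is an illustrative sketch, not the tool’s output format; the selectors and the TEST_EMAIL / TEST_PASSWORD environment variables are assumptions.

```typescript
// Hypothetical Playwright translation of the recorded-and-refined login test.
// Selector names and environment variables are assumptions for illustration.
import { test, expect } from '@playwright/test';

test('user can log in and reach the dashboard', async ({ page }) => {
  // Recorded portion: navigate and fill the form
  await page.goto('/login');
  await page.getByTestId('email').fill(process.env.TEST_EMAIL!);
  await page.getByTestId('password').fill(process.env.TEST_PASSWORD!);
  await page.getByTestId('submit-btn').click();

  // Added in the Visual Step Editor: explicit wait and verification
  await expect(page.getByTestId('dashboard')).toBeVisible();
  await expect(page.getByText('Welcome')).toBeVisible();
});
```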

Element Picker

The Element Picker is essential for both workflows.

How It Works

  1. Click the crosshair icon to activate
  2. Hover over elements in the live browser
  3. Click to capture the locator
  4. Review and adjust if needed

Locator Strategies

Strategy      | Example                     | Stability
Test ID       | [data-testid="submit-btn"]  | Most stable
Role          | button[name="Submit"]       | Very stable
CSS Selector  | .submit-button              | Moderate
Text content  | text="Submit"               | Can break if text changes

Tip: Add data-testid attributes to your application’s key elements for the most reliable tests.
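
If you control the application under test, adding the attribute is usually a one-line change in the component markup. A minimal React/TypeScript sketch, where the component and handler names are hypothetical:

```tsx
// Illustrative component exposing a stable data-testid for the Element Picker
// to capture. The names here are placeholders, not part of any real app.
export function SubmitButton({ onSubmit }: { onSubmit: () => void }) {
  return (
    <button data-testid="submit-btn" onClick={onSubmit}>
      Submit
    </button>
  );
}
```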

Best Practices

Start with Scenario Notes

  • Write out what you’re testing before automating
  • Keep notes simple and action-focused
  • Update notes if the test changes

Keep Tests Focused

  • One test = one user flow or feature
  • 5-15 steps is typical
  • Split long flows into multiple tests

Add Meaningful Assertions

  • Every test should verify something
  • Don’t just check that steps don’t crash
  • Verify the expected outcome occurred

Use Expressions for Flexibility

  • {{ env.BASE_URL }} for environment URLs
  • {{ env.TEST_EMAIL }} for credentials
  • {{ random.email() }} for unique data
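
At run time, expressions like these resolve to environment values or freshly generated data, which is what keeps tests portable across environments and safe to re-run. The TypeScript sketch below shows one plausible way such a resolver could behave; it is an assumption for illustration, not the tool’s actual expression engine.

```typescript
// Hypothetical resolver for {{ env.X }} and {{ random.email() }} expressions.
// An illustrative sketch only; the real engine is not documented here.
const random = {
  // Unique address per run, so repeated tests never collide on the same user
  email: (): string => `user+${Date.now()}@example.com`,
};

function resolveExpressions(template: string): string {
  return template
    .replace(/\{\{\s*env\.(\w+)\s*\}\}/g, (_, name) => process.env[name] ?? '')
    .replace(/\{\{\s*random\.email\(\)\s*\}\}/g, () => random.email());
}

// e.g. "https://staging.example.com/login" when BASE_URL points at staging
console.log(resolveExpressions('{{ env.BASE_URL }}/login'));
```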

Troubleshooting

Recorder Missing Steps

  • Some interactions may not be captured
  • Add missing steps manually with the Visual Step Editor
  • Check if the page uses non-standard controls

Locators Breaking

  • Elements may change between sessions
  • Use Element Picker to update locators
  • Prefer data-testid over CSS classes

Steps Timing Out

  • Add explicit wait steps for dynamic content
  • Increase step timeout in settings
  • Check if element is in an iframe
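
In code terms, an explicit wait targets the element itself rather than sleeping for a fixed period. A Playwright-style sketch of the difference, where the dashboard selector and the timeout value are assumptions:

```typescript
import { expect, type Page } from '@playwright/test';

// Fragile: a fixed sleep is too short on slow runs and wastes time on fast ones.
//   await page.waitForTimeout(5000);

// Robust: wait until the dynamic element is actually visible, with a timeout
// sized to how slowly the content tends to load.
async function waitForDashboard(page: Page) {
  await expect(page.getByTestId('dashboard')).toBeVisible({ timeout: 15_000 });
}
```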

Next Steps