
Overview

Supatest provides multiple ways to run your tests depending on your needs. Whether you’re debugging in the editor, running scheduled test plans, or triggering from CI/CD, each run mode serves a specific purpose.

Run Modes

Manual Runs

Start tests directly from the dashboard or test list.
How to use:
  1. Navigate to your test case
  2. Click the Run button
  3. Select environment if prompted
  4. Test executes immediately
Best for:
  • Quick validation after changes
  • One-off test execution
  • Verifying fixes before committing

Editor Runs

Execute tests directly from the test editor, with granular control over which steps run.
Run options:
Option          | Description                  | Use Case
Run All         | Execute all steps from start | Full test validation
Run Single Step | Execute only one step        | Testing a specific step
Run Until Here  | Run from current step to end | Continue from a point
Run From Start  | Fresh browser, all steps     | Clean state validation
How to use:
  1. Open a test in the editor
  2. Use the run controls in the toolbar:
    • Run All: Click the play button
    • Run Single Step: Hover over step, click play icon
    • Run Until Here: Hover over step, click fast-forward icon
Best for:
  • Debugging failing steps
  • Iterative test development
  • Testing step modifications

Scheduled Runs

Run tests automatically on a configured schedule via Test Plans.
Schedule types:
  • Hourly: Run every N hours
  • Daily: Run at specific time each day
  • Weekly: Run on specific days of the week
  • Monthly: Run on specific days of the month
  • Custom: Complex schedules using cron expressions
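For example, a standard five-field cron expression (minute, hour, day of month, month, day of week) can describe schedules the presets cannot; the exact cron dialect supported isn’t specified here:
0 2 * * 1-5     # every weekday at 02:00
30 */4 * * *    # every 4 hours, at minute 30
0 9 1 * *       # at 09:00 on the 1st of each month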
How to set up:
  1. Create or open a Test Plan
  2. Add tests to the plan
  3. Configure schedule settings
  4. Enable the schedule
  5. Tests run automatically
Best for:
  • Regression testing
  • Continuous monitoring
  • Overnight test suites
Learn more about Test Plans →

API-Triggered Runs

Start tests programmatically via the Supatest API.
Common triggers:
  • CI/CD pipelines (GitHub Actions, GitLab CI, Jenkins)
  • Deployment hooks
  • Custom automation scripts
How to use:
  1. Generate an API key in Settings
  2. Use the test plan trigger endpoint
  3. Pass environment and configuration
Example:
curl -X POST https://api.supatest.ai/v1/test-plan-runs/{planId}/trigger \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"environment": "staging"}'
Best for:
  • CI/CD integration
  • Post-deployment verification
  • Automated workflows
Learn more about CI Integration →

Choosing the Right Run Mode

Scenario              | Recommended Mode
Developing new test   | Editor runs
Debugging failure     | Editor (Run Single Step)
Quick validation      | Manual run
Nightly regression    | Scheduled run
Post-deploy check     | API-triggered
Continuous monitoring | Scheduled run

Run Configuration

Environment Selection

Choose which environment to use for the run:
  • Default: Uses test’s default environment
  • Select: Choose from available environments
  • Override: Pass environment via API

Browser Settings

Configure browser for the run:
Setting  | Options
Browser  | Chromium, Firefox, WebKit
Headless | Yes / No
Viewport | Desktop, Tablet, Mobile, Custom

Timeout Settings

Adjust timeouts per run:
  • Step timeout: Max time per step (default: 10s)
  • Test timeout: Max total duration (default: 5min)
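The settings above can also matter for API-triggered runs, but this page doesn’t document how (or whether) they are passed in the trigger request, so the example below is purely illustrative: apart from environment, every field name is a hypothetical placeholder, not a documented parameter.
# Hypothetical request only: field names other than "environment" are invented
curl -X POST "https://api.supatest.ai/v1/test-plan-runs/$PLAN_ID/trigger" \
  -H "Authorization: Bearer $SUPATEST_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "environment": "staging",
        "browser": "chromium",
        "headless": true,
        "viewport": "desktop",
        "stepTimeoutSeconds": 10,
        "testTimeoutSeconds": 300
      }'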

Run States

Each run progresses through states:
Queued → Running → Passed/Failed/Cancelled/Timed Out
State     | Description
Queued    | Waiting for available runner
Running   | Currently executing
Passed    | All steps completed successfully
Failed    | One or more steps failed
Cancelled | Manually stopped
Timed Out | Exceeded maximum duration
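If a wrapper script needs to turn a finished run’s state into a pass/fail signal (for example in CI), a sketch like the following works; how the state string is obtained is not covered on this page, so $RUN_STATE is assumed to be set already:
# $RUN_STATE holds one of the states listed above
case "$RUN_STATE" in
  Passed)
    exit 0 ;;                        # success
  Failed|Cancelled|'Timed Out')
    echo "Run ended in state: $RUN_STATE" >&2
    exit 1 ;;                        # treat as failure
  Queued|Running)
    echo "Run is still in progress" >&2
    exit 2 ;;                        # not finished yet
esac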

Cancelling Runs

Stop a running test:
From Dashboard:
  1. Find the running test
  2. Click Cancel or stop icon
  3. Run stops at current step
From Editor:
  1. Click the stop button in toolbar
  2. Execution halts immediately
Note: Cancelled runs don’t generate complete artifacts.

Run Artifacts

Each run generates artifacts for debugging:
Artifact    | Description              | Available
Video       | Full execution recording | After completion
Screenshots | Per-step images          | During/after run
Trace       | Detailed timeline        | After completion
Raw Logs    | Technical output         | Streaming
AI Logs     | Summarized insights      | After generation

Best Practices

During Development

  1. Use Run Single Step to test changes quickly
  2. Use Run Until Here to validate flow continuation
  3. Use Run All for final validation

For Debugging

  1. Start with Run Single Step on failing step
  2. Check if issue is isolated or depends on prior state
  3. Use Run From Start if state-dependent

For CI/CD

  1. Use API triggers for automation (a sketch follows this list)
  2. Set appropriate timeouts for CI environment
  3. Use headless mode for faster execution
  4. Configure failure notifications
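A minimal, CI-agnostic sketch combining these points; the plan ID, API key, and notification step are placeholders to be wired into your pipeline’s secret store and alerting tool:
#!/bin/sh
set -eu
# Trigger the test plan against the freshly deployed environment.
# --retry guards against transient network errors on the CI runner;
# --fail makes curl exit non-zero on an HTTP error response.
if ! curl --fail --silent --show-error --retry 3 \
  -X POST "https://api.supatest.ai/v1/test-plan-runs/$PLAN_ID/trigger" \
  -H "Authorization: Bearer $SUPATEST_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"environment": "staging"}'
then
  # Placeholder: replace with your team's notification mechanism
  echo "Supatest trigger failed for plan $PLAN_ID" >&2
  exit 1
fi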

For Monitoring

  1. Set up scheduled runs at appropriate intervals
  2. Use Test Plans to group related tests
  3. Configure alerts for failures
  4. Review trends in run history

Troubleshooting

Run Stuck in Queued

  • Check runner availability
  • Verify parallel execution limits
  • Contact support if the issue persists

Run Taking Too Long

  • Check for unnecessary waits
  • Review network calls in trace
  • Optimize step timeouts
  • Consider parallel execution

Cannot Start Run

  • Verify test is not already running
  • Check browser session status
  • Ensure valid environment selected
  • Verify account has available runs