
Overview

Parallel execution allows you to run multiple tests at the same time, significantly reducing total test suite duration. Instead of running tests sequentially, Supatest distributes them across multiple browser sessions.

How It Works

Sequential vs Parallel

Sequential execution:
Test 1 (2min) → Test 2 (3min) → Test 3 (2min) → Test 4 (3min)
Total time: 10 minutes
Parallel execution (2 workers):
Worker 1: Test 1 (2min) → Test 3 (2min)
Worker 2: Test 2 (3min) → Test 4 (3min)
Total time: 6 minutes
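The arithmetic above can be sketched as a small simulation, assuming each test is handed to whichever worker frees up first (a simplification; Supatest's actual scheduler may differ):

```python
# Sketch (not Supatest's actual scheduler): each test goes to whichever
# worker frees up first; returns total wall-clock minutes.
import heapq

def parallel_duration(durations, workers):
    finish = [0] * workers  # min-heap of worker finish times
    heapq.heapify(finish)
    for d in durations:
        free_at = heapq.heappop(finish)   # next worker to become free
        heapq.heappush(finish, free_at + d)
    return max(finish)

tests = [2, 3, 2, 3]  # minutes, as in the example above
print(parallel_duration(tests, workers=1))  # -> 10 (sequential)
print(parallel_duration(tests, workers=2))  # -> 6
```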

Parallel Workers

Each worker runs in an isolated browser session:
  • Independent browser instances
  • Separate cookies and storage
  • No state sharing between workers
  • Tests can run in any order

Configuring Parallel Execution

Organization Settings

Set your parallel execution limit:
  1. Go to Settings > Organization
  2. Find Parallel Execution section
  3. Set maximum parallel workers
  4. Save changes

Plan-Level Configuration

Configure parallelism per test plan:
  1. Open your Test Plan
  2. Go to Settings
  3. Set Parallel Workers count
  4. Save the plan

Available Limits

Parallel limits depend on your plan:
Plan         Max Parallel Workers
Free         1 (sequential only)
Pro          3
Team         5
Enterprise   Custom

Best Practices

Test Independence

Ensure tests don’t depend on each other.

Good (independent):
Test A: Login → View profile
Test B: Login → Update settings
Test C: Login → Check notifications
Each test handles its own setup.

Bad (dependent):
Test A: Create user
Test B: Login as created user  ← Depends on Test A
Test C: Delete user            ← Depends on Test A
Tests rely on shared state.
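A minimal sketch of the independent pattern, using stand-in helpers (create_user and login are illustrative, not a Supatest API):

```python
# Sketch of the independent pattern: each test creates its own state
# (create_user/login are stand-in helpers, not a Supatest API).
def create_user(name):
    """Each test creates its own user rather than reusing another test's."""
    return {"name": name, "logged_in": False}

def login(user):
    user["logged_in"] = True
    return user

def test_view_profile():
    user = login(create_user("profile-user"))   # own setup
    assert user["logged_in"]

def test_update_settings():
    user = login(create_user("settings-user"))  # no dependency on other tests
    assert user["logged_in"]

# Order does not matter: each test can run on any worker, in any order.
test_update_settings()
test_view_profile()
```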

Data Isolation

Prevent tests from conflicting:
  • Use unique test data per test
  • Generate random identifiers
  • Clean up test data after runs
  • Use separate test accounts
Example - unique data (illustrative addresses):
Test 1: Creates user-a1b2@example.com
Test 2: Creates user-c3d4@example.com
Test 3: Creates user-e5f6@example.com
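One way to generate the unique identifiers mentioned above, sketched in Python (the prefix and domain are illustrative; example.com is a reserved documentation domain):

```python
# Sketch: unique, collision-free test data per run.
import uuid

def unique_email(prefix="test"):
    """A distinct address per call, so parallel tests never share an account."""
    return f"{prefix}-{uuid.uuid4().hex[:8]}@example.com"

print(unique_email())  # e.g. test-3f9a1c2b@example.com
```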

Resource Considerations

Account for parallel load:
  • Database connections multiply
  • API rate limits may apply
  • Server resources shared
  • Test environments must handle load

Optimizing Parallel Runs

Balance Test Duration

Distribute tests evenly.

Unbalanced:
Worker 1: Test A (5min), Test B (5min)
Worker 2: Test C (1min), Test D (1min)
Total: 10 minutes (Worker 1 is the bottleneck)

Balanced:
Worker 1: Test A (5min), Test C (1min)
Worker 2: Test B (5min), Test D (1min)
Total: 6 minutes (both workers finish together)
Organize tests strategically:
  • Group by feature area
  • Consider shared setup needs
  • Balance workload across groups
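The balancing idea can be sketched as greedy longest-first scheduling: sort tests by duration, then assign each to the least-loaded worker (durations are illustrative; Supatest may distribute tests differently):

```python
# Sketch of greedy longest-first balancing (durations illustrative;
# Supatest may distribute tests differently).
def balance(tests, workers):
    """tests: {name: minutes}. Returns one {'total', 'tests'} dict per worker."""
    groups = [{"total": 0, "tests": []} for _ in range(workers)]
    for name, mins in sorted(tests.items(), key=lambda kv: -kv[1]):
        g = min(groups, key=lambda grp: grp["total"])  # least-loaded worker
        g["tests"].append(name)
        g["total"] += mins
    return groups

for i, g in enumerate(balance({"A": 5, "B": 5, "C": 1, "D": 1}, workers=2), 1):
    print(f"Worker {i}: {g['tests']} ({g['total']}min)")
```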

Optimize Individual Tests

Faster tests improve parallel efficiency:
  • Remove unnecessary waits
  • Use efficient locators
  • Minimize navigation steps
  • Reuse browser state where possible

Monitoring Parallel Runs

Viewing Parallel Progress

Monitor running tests:
  1. Open the Test Plan run
  2. See all tests running in parallel
  3. Track individual test progress
  4. View overall completion status

Understanding Results

Results show parallel context:
Field           Description
Worker ID       Which parallel worker ran the test
Start Time      When this test started
Duration        Individual test duration
Total Duration  Full parallel run duration

Analyzing Performance

Track parallel efficiency:
  • Compare sequential vs parallel times
  • Identify bottleneck tests
  • Monitor resource utilization
  • Optimize test distribution
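Comparing sequential and parallel times reduces to a simple speedup and efficiency calculation (the timings below are illustrative):

```python
# Sketch: speedup and efficiency from run timings (numbers illustrative;
# Supatest reports per-test Duration and overall Total Duration).
def parallel_efficiency(sequential_min, parallel_min, workers):
    """Speedup vs a sequential run, and per-worker efficiency (1.0 = ideal)."""
    speedup = sequential_min / parallel_min
    return speedup, speedup / workers

speedup, eff = parallel_efficiency(sequential_min=10, parallel_min=6, workers=2)
print(f"Speedup: {speedup:.2f}x, efficiency: {eff:.0%}")  # Speedup: 1.67x, efficiency: 83%
```

Efficiency well below 100% usually points at a bottleneck test or saturated shared resources.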

Common Patterns

Smoke Tests (High Parallelism)

Quick, independent tests benefit from maximum parallelism:
Test Plan: Smoke Tests
Parallel Workers: 5
Tests:
  - Homepage loads (30s)
  - Login works (45s)
  - Search functions (40s)
  - Cart accessible (35s)
  - Checkout visible (30s)
Total: ~45 seconds with 5 workers

Regression Suite (Moderate Parallelism)

Longer tests with some shared resources:
Test Plan: Regression
Parallel Workers: 3
Tests:
  - Full checkout flow (5min)
  - User registration (3min)
  - Product search (2min)
  - Account settings (2min)
  - Order history (3min)
Total: ~8 minutes with 3 workers

Integration Tests (Limited Parallelism)

Tests with external dependencies:
Test Plan: Integrations
Parallel Workers: 2
Tests:
  - Payment processing
  - Email verification
  - SMS confirmation
Reason: external API rate limits restrict how many requests can run at once

Troubleshooting

Tests Failing in Parallel

If tests pass sequentially but fail in parallel:
  1. Check for shared state: Tests may be modifying same data
  2. Review test data: Ensure unique identifiers
  3. Check race conditions: Tests competing for resources
  4. Verify cleanup: Previous test data affecting current test

Inconsistent Results

If results vary between runs:
  1. Identify flaky tests: Mark and investigate
  2. Check timing dependencies: Add proper waits
  3. Review external services: May have rate limits
  4. Isolate test data: Prevent conflicts

Performance Not Improving

If parallel runs aren’t faster:
  1. Check worker limit: May be hitting plan limit
  2. Find bottleneck tests: Long tests slow overall time
  3. Review resource limits: Infrastructure may be saturated
  4. Consider test count: Few tests may not benefit from parallelism

Resource Exhaustion

If infrastructure struggles:
  1. Reduce parallel workers: Lower concurrent load
  2. Add rate limiting: Slow down test interactions
  3. Scale infrastructure: Increase capacity
  4. Stagger test starts: Don’t start all at once
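Staggered starts can be sketched with per-task launch delays, assuming you control when each task begins (the delay value is illustrative):

```python
# Sketch: stagger task start times so parallel workers don't all hit
# shared infrastructure at once (delay value is illustrative).
import threading

def start_staggered(tasks, delay_s=2.0):
    """Launch each task in its own thread, delay_s seconds apart."""
    timers = [threading.Timer(i * delay_s, task) for i, task in enumerate(tasks)]
    for t in timers:
        t.start()
    for t in timers:
        t.join()  # wait for all staggered tasks to finish
```

Setting delay_s to zero recovers the all-at-once behavior; a short gap may be enough to smooth login or data-seeding spikes.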

Best Practices Summary

Practice            Description
Independent tests   No test relies on another
Unique test data    Each test uses its own data
Balanced duration   Distribute work evenly
Clean up after      Remove test artifacts
Monitor resources   Watch infrastructure load
Start conservative  Increase parallelism gradually