
Overview

The Test Planner Agent helps you generate test scenarios and executable test steps through natural conversation. Reference existing test cases and documents to create variations, edge cases, and comprehensive test coverage.

Primary use case: Generate test variations

The Test Planner Agent excels at generating variations of existing test cases. Tag a passing test case with @, describe the variation you need, and the agent will create new scenarios based on the original context. Example workflow:
  1. Tag an existing test case (e.g., @Login with valid credentials)
  2. Ask: “Create variations for invalid email formats and missing password”
  3. Review generated test scenarios in Plan mode before selecting which ones to keep
  4. Select and add the tests you need
This approach ensures consistency with your existing tests while expanding coverage for edge cases, negative scenarios, and boundary conditions.
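To make step 2 concrete, the sketch below shows the kinds of negative inputs such a variation request might cover. The specific values and the case structure are illustrative assumptions, not the agent's actual output format.

```python
# Illustrative inputs for the "invalid email formats and missing password"
# variation described above. Values are examples only, not agent output.
invalid_emails = [
    "plainaddress",         # no @ separator
    "@missing-local.com",   # no local part
    "user@",                # no domain
    "user@@double-at.com",  # repeated @
    "user name@space.com",  # whitespace in local part
]

missing_password_case = {"email": "user@example.com", "password": ""}

# Each negative case pairs an input with the behavior the scenario expects.
negative_cases = [
    {"input": {"email": e, "password": "secret123"},
     "expect": "validation error on email field"}
    for e in invalid_emails
] + [
    {"input": missing_password_case,
     "expect": "validation error on password field"},
]
```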

Plan mode (Scenario notes)

Generates high-level test scenarios with descriptions and expected outcomes. Use this when you want to review testing ideas before committing to implementation. Best for:
  • Brainstorming test coverage
  • Reviewing test strategy with your team
  • Creating test case outlines to implement later
Note: The agent focuses on Plan mode; it uses AI to describe test scenarios that you can manually turn into executable tests using your application’s actual selectors and structure.

Conversations and context

Starting a new conversation

Click the conversation selector in the header and choose New Conversation. Each conversation maintains its own context, allowing you to work on different test areas separately. The agent automatically saves your conversation history, making it easy to continue where you left off.

Tagging for context

Provide context by tagging documents and test cases using the @ symbol. Tag test cases:
  • Type @ and search for existing test cases
  • Tag multiple tests to reference different flows
  • Agent uses test structure and steps as context
Tag documents:
  • Upload files (PDF, TXT, MD, DOCX, images) using the paperclip icon
  • Paste text directly in your prompt
  • Tag uploaded documents with @ to reference them
Example prompt with tags:
@User Registration Test @API Documentation
Create negative test variations for missing required fields
and invalid data formats
The agent uses tagged items to understand your application’s behavior, terminology, and existing patterns.

Voice mode

Click the Speak button (or use the microphone icon) to describe your test requirements using voice. The agent transcribes your speech in real-time and adds it to the prompt. Voice tips:
  • Speak clearly and pause between thoughts
  • Voice mode automatically stops after a timeout
  • Edit the transcribed text before submitting if needed
  • Use voice to quickly capture test ideas during reviews
Click Stop to end voice recording manually.

Working with generated tests

Reviewing scenarios

Generated scenarios include:
  • Title: Clear description of the test case
  • Scenario notes: Expected behavior and validation points
  • Selection checkbox: Choose which tests to add

Adding tests to your project

  1. Review the generated test cases
  2. Select the tests you want to keep (checkbox)
  3. Choose the target folder from the dropdown
  4. Click Add Selected Tests
The agent prevents duplicate additions: if you try to add a test that already exists, you’ll see a warning.

Refining generated scenarios

For tests created from Plan mode scenario notes:
  1. Add the test to your project so you can edit it in the test editor
  2. Translate the scenario notes into concrete steps that use your application’s selectors
  3. Update locators, assertions, and timing to match the real UI
  4. Run and verify the test before treating it as passing
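As a sketch of step 2 above, translating scenario notes into concrete steps might look like this. The step schema, route, and selectors are placeholder assumptions for illustration, not the tool's real format:

```python
# Hypothetical sketch: expanding one Plan-mode scenario into ordered,
# executable steps. The step schema, route, and selectors below are
# placeholders and must be replaced with your application's real ones.
scenario = {
    "title": "Login with invalid email format",
    "notes": "Submitting a malformed email should show a validation error",
}

def to_steps(scenario):
    """Return ordered steps implementing the scenario's notes."""
    return [
        {"action": "navigate", "target": "/login"},                        # real route may differ
        {"action": "fill", "selector": "#email", "value": "not-an-email"},
        {"action": "fill", "selector": "#password", "value": "secret123"},
        {"action": "click", "selector": "button[type=submit]"},
        {"action": "assert_visible", "selector": ".error-message"},        # per the notes
    ]

steps = to_steps(scenario)
```

The point of the translation is that every selector and route comes from your application, not from the scenario notes; the notes only tell you what to do and what to check.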

Practical examples

Generate variations of a positive test

Prompt:
@Successful Checkout Flow
Create test variations for:
- Empty shopping cart
- Payment method failures
- Expired promo codes

Explore edge cases

Prompt:
@User Profile Update
What edge cases should I test for profile updates?
Generate scenarios for boundary values and invalid inputs.
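For a prompt like this, boundary-value scenarios typically cluster around field limits. A minimal sketch of the idea, assuming a hypothetical 50-character limit on a profile field:

```python
# Classic boundary-value candidates around a length limit. The 50-character
# limit is an assumption for illustration; use your field's real constraint.
MAX_LEN = 50

def boundary_values(max_len):
    """Inputs at, just inside, and just outside the boundary."""
    return {
        "empty": "",
        "single_char": "a",
        "just_under_limit": "a" * (max_len - 1),
        "at_limit": "a" * max_len,
        "over_limit": "a" * (max_len + 1),
    }

cases = boundary_values(MAX_LEN)
```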

Create tests for a new feature

Prompt:
I need to test a new dark mode toggle in user settings.
The toggle should persist across sessions.
Generate test scenarios.
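“Persist across sessions” is the key testable property in this prompt. A minimal sketch of that property, using an in-memory stand-in for the real settings store (the class and key names are hypothetical):

```python
# Hypothetical in-memory stand-in for a persisted settings store, used to
# show what "the toggle persists across sessions" means as a test.
class SettingsStore:
    def __init__(self, backing):
        self._backing = backing  # shared "disk" that survives the session

    def set(self, key, value):
        self._backing[key] = value

    def get(self, key, default=None):
        return self._backing.get(key, default)

# Session 1: the user enables dark mode.
disk = {}
session_1 = SettingsStore(disk)
session_1.set("dark_mode", True)

# Session 2: a fresh store over the same persisted data still sees it.
session_2 = SettingsStore(disk)
dark_mode_persisted = session_2.get("dark_mode", False)
```

In a real UI test, the two “sessions” would be two separate browser contexts or app launches, with the assertion made after the relaunch.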

Best practices

  • Tag relevant context: Always tag related test cases or documents to improve accuracy
  • Be specific: Clear, detailed prompts produce better results
  • Start with Plan mode: Review scenarios before committing to implementation
  • Iterate on results: Ask follow-up questions to refine generated tests
  • Review before adding: Check generated tests align with your requirements
  • Update locators: Always verify and update selectors before running generated tests

Troubleshooting

  • Generic results: Add more context by tagging existing tests or uploading documentation
  • Missing test details: Provide more specific requirements in your prompt
  • Can’t find test to tag: Use the search in the @ mention dropdown to filter tests
  • Voice not working: Check browser microphone permissions
  • Tests already exist: The agent prevents duplicates; rename or modify the existing test