Testing prompts are specialized instructions designed to guide AI systems in generating high-quality test code that verifies functionality, catches edge cases, and ensures code reliability. This collection provides templates and patterns for creating comprehensive test suites across different testing paradigms and technologies.
Core Testing Principles
When crafting testing-focused prompts, ensure they embody these fundamental principles:
Comprehensive Coverage: Request tests that cover all functionality, edge cases, and error scenarios
Isolation: Emphasize tests that are independent and don't rely on external state
Readability: Prioritize clear, self-documenting test code that explains what's being tested and why
Maintainability: Generate tests that are resilient to implementation changes while still verifying behavior
Automation-Ready: Create tests designed to run in automated pipelines without manual intervention
Prompt Templates
General Testing Template
SITUATION: Developing tests for [describe component/function] implemented in [language/framework]
CHALLENGE: Create a comprehensive test suite that verifies functionality, handles edge cases, and confirms error scenarios
AUDIENCE: Developers who need to maintain and extend this code
FORMAT:
- Follow [testing framework] conventions and best practices
- Use descriptive test names that explain the behavior being tested
- Organize tests logically by functionality
- Include setup and teardown where appropriate
- Implement proper test isolation
FOUNDATIONS:
- Test happy path scenarios thoroughly
- Include edge case testing
- Verify error handling behavior
- Implement appropriate mocking/stubbing
- Add performance considerations where relevant
- Consider security implications in tests
SPECIFIC REQUIREMENTS:
- Achieve at least 90% code coverage
- Include documentation about test approach
- Implement parameterized tests for related scenarios
- Verify all public interfaces and behaviors
Unit Testing Template
SITUATION: Writing unit tests for [describe function/class/module] in [language]
CHALLENGE: Create isolated unit tests that verify individual components work correctly
AUDIENCE: Development team using [testing framework]
FORMAT:
- Follow AAA pattern (Arrange, Act, Assert)
- Use descriptive test method names that explain the scenario and expected outcome
- Group related tests in appropriate test classes/suites
- Include clear assertions with meaningful messages
- Use proper test isolation techniques
FOUNDATIONS:
- Test each method/function separately
- Mock external dependencies
- Verify both success paths and failure modes
- Test boundary conditions and edge cases
- Ensure tests are deterministic and repeatable
- Cover all branches and conditions
SPECIFIC REQUIREMENTS:
- Create at least one test per method/function
- Test all error handling paths
- Include parameterized tests for boundary conditions
- Verify expected exceptions are thrown
- Add comments explaining complex test scenarios
Integration Testing Template
SITUATION: Creating integration tests for [describe components] that interact with each other
CHALLENGE: Develop integration tests that verify components work correctly together
AUDIENCE: Development team using [testing framework/tools]
FORMAT:
- Structure tests to verify end-to-end workflows
- Use appropriate integration test patterns
- Include setup and teardown for test environment
- Implement clear test data management
- Document integration points being tested
FOUNDATIONS:
- Test interaction between real components (minimal mocking)
- Verify data flows correctly between components
- Test boundary behaviors at integration points
- Include error path testing across components
- Verify performance at integration level where relevant
SPECIFIC REQUIREMENTS:
- Set up appropriate test fixtures and helpers
- Implement clean test data management
- Handle asynchronous operations properly
- Include integration-specific edge cases
- Document integration test architecture
API Testing Template
SITUATION: Testing [REST/GraphQL/other] API for [describe service]
CHALLENGE: Create comprehensive API tests that verify functionality, security, and error handling
AUDIENCE: Development team working on API implementation and consumers
FORMAT:
- Implement tests for each API endpoint
- Structure tests by resource/entity
- Use appropriate API testing patterns for [REST/GraphQL/other]
- Include request/response validation
- Document API test coverage
FOUNDATIONS:
- Test all CRUD operations where applicable
- Verify proper status codes and response formats
- Include authentication/authorization tests
- Test pagination, filtering, and sorting
- Verify rate limiting and quotas if applicable
- Include performance tests for critical endpoints
SPECIFIC REQUIREMENTS:
- Test both positive scenarios and error cases
- Verify API schema/contract compliance
- Include security-focused tests (authentication, authorization)
- Test with various valid and invalid inputs
- Verify proper error responses for invalid requests
Frontend Testing Template
SITUATION: Creating tests for [describe UI component/page] built with [framework]
CHALLENGE: Develop frontend tests that verify rendering, user interactions, and state management
AUDIENCE: Frontend development team
FORMAT:
- Structure tests by component/feature
- Implement appropriate render and interaction tests
- Use proper selector strategies (test IDs, accessibility roles)
- Include visual regression tests if applicable
- Document test coverage approach
FOUNDATIONS:
- Test component rendering
- Verify user interactions (clicks, inputs, etc.)
- Test state changes and updates
- Include accessibility testing
- Verify responsive behavior if applicable
- Test error states and loading indicators
SPECIFIC REQUIREMENTS:
- Test both initial rendering and updates
- Verify proper event handling
- Include form validation testing
- Test the component's integration with related components
- Verify proper error display and handling
Testing Paradigm-Specific Prompts
Test-Driven Development (TDD) Prompts
SITUATION: Using TDD to develop [describe feature/component]
CHALLENGE: Create tests first that define the expected behavior before implementation
AUDIENCE: Developers following TDD methodology
FORMAT:
- Start with minimal failing tests
- Use descriptive test names that define specifications
- Structure tests to guide implementation
- Include clear assertions that define expected behavior
- Build up test complexity incrementally
FOUNDATIONS:
- Begin with basic functionality tests
- Progress to edge cases
- Include error scenario tests
- Build a complete specification through tests
- Focus on behavior rather than implementation details
SPECIFIC REQUIREMENTS:
- Tests should initially fail (red phase)
- Tests should be minimal and focused
- Each test should define a single behavior
- Tests should serve as living documentation
- Include pending/skipped tests for future features
Behavior-Driven Development (BDD) Prompts
SITUATION: Implementing BDD tests for [describe feature] using [Cucumber/SpecFlow/other]
CHALLENGE: Create behavior specifications and step definitions that verify business requirements
AUDIENCE: Cross-functional team including developers, QA, and business stakeholders
FORMAT:
- Write scenarios in Given-When-Then format
- Use domain language from business requirements
- Implement corresponding step definitions
- Structure features by business capability
- Include scenario outlines for related test cases
FOUNDATIONS:
- Focus on business value and user behavior
- Keep scenarios concise and focused
- Use declarative rather than imperative style
- Include scenarios for both happy path and exceptional flows
- Maintain language consistency with domain experts
SPECIFIC REQUIREMENTS:
- Create feature files with clear descriptions
- Implement reusable step definitions
- Include scenario backgrounds for common prerequisites
- Add proper tagging for test organization
- Document any implementation notes or assumptions
Property-Based Testing Prompts
SITUATION: Implementing property-based tests for [describe function/component]
CHALLENGE: Create tests that verify properties and invariants rather than specific examples
AUDIENCE: Development team using [testing framework with property testing support]
FORMAT:
- Define properties and invariants that should always hold
- Implement property-based test generators
- Include appropriate constraints for generated inputs
- Document properties being tested
- Structure tests by logical properties
FOUNDATIONS:
- Focus on invariants and properties rather than examples
- Use appropriate generators for input data
- Include shrinking strategies for failure cases
- Test with a large number of random inputs
- Verify mathematical or logical properties
SPECIFIC REQUIREMENTS:
- Define clear property descriptions
- Implement custom generators for domain-specific types
- Include seed handling for reproducibility
- Add classification for test case distribution
- Document the properties and why they matter
Test Type-Specific Prompts
Performance Testing
SITUATION: Creating performance tests for [describe system/component]
CHALLENGE: Develop tests that verify performance characteristics and detect regressions
AUDIENCE: Development team concerned with system performance
FORMAT:
- Structure as automated performance benchmarks
- Include clear performance criteria and thresholds
- Implement proper test isolation for consistent results
- Add reporting of performance metrics
- Document expected performance characteristics
FOUNDATIONS:
- Test response time/latency
- Verify throughput capabilities
- Include load testing scenarios
- Test resource utilization (CPU, memory, etc.)
- Implement scalability testing if applicable
SPECIFIC REQUIREMENTS:
- Define clear, measurable performance goals
- Include baseline performance measurements
- Implement warmup phases where appropriate
- Add statistical analysis of results
- Document environment requirements for tests
Security Testing
SITUATION: Implementing security tests for [describe component/system]
CHALLENGE: Create tests that verify security controls and detect vulnerabilities
AUDIENCE: Development team with security responsibilities
FORMAT:
- Structure by security control or vulnerability category
- Implement automated security verification
- Include clear security assertions
- Document security requirements being tested
- Add appropriate isolation for security tests
FOUNDATIONS:
- Test authentication mechanisms
- Verify authorization controls
- Include input validation/sanitization tests
- Test protection against common vulnerabilities (OWASP Top 10)
- Verify secure configuration
SPECIFIC REQUIREMENTS:
- Include positive and negative security tests
- Implement fuzz testing where appropriate
- Test security boundaries and trust zones
- Verify proper error handling for security events
- Document security assumptions and limitations
Accessibility Testing
SITUATION: Creating accessibility tests for [describe UI component/page]
CHALLENGE: Develop tests that verify compliance with accessibility standards
AUDIENCE: Development team responsible for accessible user interfaces
FORMAT:
- Structure tests by accessibility guideline categories
- Implement automated accessibility checking
- Include manual test instructions where automation is insufficient
- Document accessibility requirements being tested
- Add clear reporting of accessibility issues
FOUNDATIONS:
- Test semantic HTML structure
- Verify keyboard navigation
- Include screen reader compatibility tests
- Test color contrast and visual presentation
- Verify form accessibility
SPECIFIC REQUIREMENTS:
- Test against WCAG 2.1 AA standards
- Include tests for various assistive technologies
- Verify dynamic content accessibility
- Test focus management
- Document known accessibility limitations
Test Mocking and Isolation Prompts
Mock/Stub Implementation
SITUATION: Creating mocks/stubs for [describe dependencies] in tests for [component]
CHALLENGE: Implement effective test doubles that isolate the system under test
AUDIENCE: Developers writing unit and integration tests
FORMAT:
- Structure mocks by dependency type
- Implement appropriate mocking patterns
- Include clear documentation of mock behavior
- Add verification of mock interactions where needed
- Document mocking strategy
FOUNDATIONS:
- Implement minimal viable mocks
- Define clear mock behavior and responses
- Include verification of mock interactions when needed
- Properly isolate system under test
- Document any limitations of the mocking approach
SPECIFIC REQUIREMENTS:
- Implement different mock behaviors for various test scenarios
- Include error simulation in mocks
- Add appropriate verification of mock interactions
- Document mock setup and usage
- Consider performance implications of mocking
Test Data Generation
SITUATION: Creating test data for [describe tests/scenarios]
CHALLENGE: Generate comprehensive, realistic test data that covers various scenarios
AUDIENCE: Testing team working on [describe system]
FORMAT:
- Structure test data by entity/domain object
- Implement factory methods or builders for test data
- Include variations for different test scenarios
- Document test data patterns and usage
- Add helper methods for common test data needs
FOUNDATIONS:
- Create realistic, valid test data
- Include edge cases and boundary values
- Generate related entities where needed
- Implement invalid data for negative tests
- Consider performance of test data generation
SPECIFIC REQUIREMENTS:
- Create factory methods with sensible defaults
- Implement builders for complex test data
- Include random variations where appropriate
- Document the intent behind test data patterns
- Consider database or persistence implications
Best Practices for Testing Prompts
Request Test Documentation
Always explicitly request documentation in test prompts:
Please include:
- High-level description of testing strategy
- Explanation of test organization and structure
- Documentation of test fixtures and utilities
- Notes on test limitations or assumptions
- Guidance for extending tests for new features
Specify Coverage Requirements
Be explicit about expected test coverage:
Please ensure tests cover:
- All public methods and interfaces
- Each branch in conditional logic
- Error handling and exceptional paths
- Edge cases and boundary conditions
- Performance characteristics where relevant
Request Maintainability Considerations
Encourage creation of maintainable tests:
Please design tests with these maintainability factors:
- Tests should verify behavior, not implementation details
- Avoid brittle assertions that break with minor changes
- Create helper methods for common testing operations
- Implement proper test isolation
- Structure tests to make failures easy to diagnose
Evaluating Test Results
When evaluating AI-generated tests, consider these questions:
Completeness: Do the tests cover all functionality, edge cases, and error scenarios?
Isolation: Are tests independent and free from interference between test cases?
Readability: Are tests clearly written and self-documenting?
Maintainability: Will tests remain valid through reasonable implementation changes?
Performance: Do tests run efficiently and avoid unnecessary operations?
Value: Do tests provide meaningful verification rather than just improving coverage metrics?
Example: A Well-Structured Test Suite
The following Jest suite illustrates the qualities above in practice.
/**
 * User Registration Test Suite
 *
 * Tests the user registration functionality including:
 * - Successful registration
 * - Validation errors
 * - Duplicate username handling
 * - Password security requirements
 * - Database interaction
 *
 * Assumes test helpers: resetUserDatabase, mockEmailService, mockDb,
 * mockLogger, addUserToDb, getUserFromDb, verifyAllMocks, and the
 * registerUser function under test.
 */
describe('User Registration', () => {
  // Shared valid input reused across scenarios
  const validUserData = {
    username: 'newuser123',
    password: 'SecureP@ss123!',
    email: 'test@example.com',
    fullName: 'Test User'
  };

  // Setup and teardown
  beforeEach(() => {
    // Reset database state
    resetUserDatabase();
    // Mock external services
    mockEmailService();
  });

  afterEach(() => {
    // Verify all mocks were called as expected
    verifyAllMocks();
  });

  // Successful registration tests
  describe('successful registration', () => {
    test('should register valid user with complete information', async () => {
      // Arrange
      const userData = { ...validUserData };

      // Act
      const result = await registerUser(userData);

      // Assert
      expect(result.success).toBe(true);
      expect(result.user.username).toBe(userData.username);
      expect(result.user.email).toBe(userData.email);

      // Verify user was saved to database
      const savedUser = await getUserFromDb(result.user.id);
      expect(savedUser).not.toBeNull();

      // Verify password was hashed, not stored as plaintext
      expect(savedUser.passwordHash).not.toBe(userData.password);

      // Verify welcome email was sent
      expect(mockEmailService.sendWelcomeEmail).toHaveBeenCalledWith(
        userData.email,
        expect.any(String)
      );
    });

    test('should trim whitespace from username and email', async () => {
      // Arrange
      const userData = {
        username: ' username123 ',
        password: 'SecureP@ss123!',
        email: ' test@example.com '
      };

      // Act
      const result = await registerUser(userData);

      // Assert
      expect(result.success).toBe(true);
      expect(result.user.username).toBe('username123');
      expect(result.user.email).toBe('test@example.com');
    });
  });

  // Validation error tests
  describe('validation errors', () => {
    test.each([
      ['empty username', { ...validUserData, username: '' }, 'Username is required'],
      ['short username', { ...validUserData, username: 'abc' }, 'Username must be at least 4 characters'],
      ['invalid email', { ...validUserData, email: 'notemail' }, 'Valid email is required'],
      ['empty password', { ...validUserData, password: '' }, 'Password is required'],
      ['weak password', { ...validUserData, password: '12345' }, 'Password does not meet security requirements']
    ])('should reject registration with %s', async (scenario, userData, expectedError) => {
      // Act
      const result = await registerUser(userData);

      // Assert
      expect(result.success).toBe(false);
      expect(result.errors).toContainEqual(expect.objectContaining({
        message: expectedError
      }));

      // Verify no user was created in database
      expect(mockDb.users.create).not.toHaveBeenCalled();

      // Verify no welcome email was sent
      expect(mockEmailService.sendWelcomeEmail).not.toHaveBeenCalled();
    });
  });

  // Duplicate username tests
  describe('duplicate username handling', () => {
    beforeEach(() => {
      // Add existing user to database
      addUserToDb({
        username: 'existinguser',
        email: 'existing@example.com',
        passwordHash: 'hashedpassword'
      });
    });

    test('should reject registration with duplicate username', async () => {
      // Arrange
      const userData = {
        ...validUserData,
        username: 'existinguser',
        email: 'new@example.com'
      };

      // Act
      const result = await registerUser(userData);

      // Assert
      expect(result.success).toBe(false);
      expect(result.errors).toContainEqual(expect.objectContaining({
        field: 'username',
        message: expect.stringContaining('already taken')
      }));
    });

    test('should reject duplicate username that differs only in case', async () => {
      // Arrange
      const userData = {
        ...validUserData,
        username: 'ExistingUser',
        email: 'new@example.com'
      };

      // Act
      const result = await registerUser(userData);

      // Assert
      expect(result.success).toBe(false);
      expect(result.errors).toContainEqual(expect.objectContaining({
        field: 'username',
        message: expect.stringContaining('already taken')
      }));
    });
  });

  // Error handling tests
  describe('error handling', () => {
    test('should handle database connection errors gracefully', async () => {
      // Arrange
      mockDb.users.create.mockRejectedValueOnce(new Error('Database connection failed'));

      // Act
      const result = await registerUser(validUserData);

      // Assert
      expect(result.success).toBe(false);
      expect(result.errors).toContainEqual(expect.objectContaining({
        message: expect.stringContaining('system error')
      }));

      // Verify error was logged
      expect(mockLogger.error).toHaveBeenCalled();
    });
  });

  // Performance tests
  describe('performance', () => {
    test('should register user within acceptable time', async () => {
      // Act
      const startTime = performance.now();
      await registerUser(validUserData);
      const endTime = performance.now();

      // Assert
      expect(endTime - startTime).toBeLessThan(100); // registration should take under 100ms
    });
  });
});
Testing Anti-Patterns to Avoid
When reviewing AI-generated tests, watch for these common issues:
Brittle Tests: Tests that break with minor implementation changes
Non-Isolated Tests: Tests that depend on or affect other tests
Testing Implementation Details: Focusing on how something works rather than what it does
Incomplete Assertions: Not verifying all relevant outcomes
Excessive Mocking: Mocking too much, reducing test value
Unclear Test Purpose: Tests without clear documentation of what they're verifying
Duplicate Test Logic: Repeated test setup and assertions without proper abstraction
Include specific requirements to avoid these in your prompts.
Conclusion
Testing-focused prompts are essential for generating comprehensive test suites that ensure code quality and reliability. By explicitly guiding AI tools to implement proper testing patterns, you can build a robust verification system for your applications.
Remember that effective tests serve multiple purposes: they verify current functionality, document expected behavior, and protect against future regressions. By investing in testing from the start through well-crafted prompts, you build more reliable and maintainable software.