Testing Prompts
Introduction
Testing prompts are specialized instructions designed to guide AI systems in generating high-quality test code that verifies functionality, catches edge cases, and ensures code reliability. This collection provides templates and patterns for creating comprehensive test suites across different testing paradigms and technologies.
Core Testing Principles
When crafting testing-focused prompts, ensure they embody these fundamental principles:
Comprehensive Coverage: Request tests that cover all functionality, edge cases, and error scenarios
Isolation: Emphasize tests that are independent and don't rely on external state
Readability: Prioritize clear, self-documenting test code that explains what's being tested and why
Maintainability: Generate tests that are resilient to implementation changes while still verifying behavior
Automation-Ready: Create tests designed to run in automated pipelines without manual intervention
Prompt Templates
General Testing Template
SITUATION: Developing tests for [describe component/function] implemented in [language/framework]
CHALLENGE: Create a comprehensive test suite that verifies functionality, handles edge cases, and confirms error scenarios
AUDIENCE: Developers who need to maintain and extend this code
FORMAT:
- Follow [testing framework] conventions and best practices
- Use descriptive test names that explain behavior being tested
- Organize tests logically by functionality
- Include setup and teardown where appropriate
- Implement proper test isolation
FOUNDATIONS:
- Test happy path scenarios thoroughly
- Include edge case testing
- Verify error handling behavior
- Implement appropriate mocking/stubbing
- Add performance considerations where relevant
- Consider security implications in tests
SPECIFIC REQUIREMENTS:
- Achieve at least 90% code coverage
- Include documentation about test approach
- Implement parameterized tests for related scenarios
- Verify all public interfaces and behaviors
Unit Testing Template
Integration Testing Template
API Testing Template
Frontend Testing Template
Testing Paradigm-Specific Prompts
Test-Driven Development (TDD) Prompts
Behavior-Driven Development (BDD) Prompts
Property-Based Testing Prompts
Test Type-Specific Prompts
Performance Testing
Security Testing
Accessibility Testing
Test Mocking and Isolation Prompts
Mock/Stub Implementation
Test Data Generation
Best Practices for Testing Prompts
Request Test Documentation
Always explicitly request documentation in test prompts:
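For example, a documentation requirement might be phrased like this (an illustrative snippet, not one of the templates above):

```
For each test, include a docstring or comment that states:
- the behavior being verified
- the inputs chosen and why (typical case, boundary, invalid)
- the expected outcome
```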
Specify Coverage Requirements
Be explicit about expected test coverage:
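A coverage requirement could be worded along these lines (an illustrative snippet; adjust the thresholds to your project):

```
- Achieve at least 90% line coverage and 85% branch coverage
- Cover every public function, including failure paths
- Flag any code paths that cannot be reached by tests
```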
Request Maintainability Considerations
Encourage creation of maintainable tests:
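Maintainability requirements might look like this (an illustrative snippet, not taken from the templates above):

```
- Assert on observable behavior, not private fields or internal call order
- Extract shared setup into fixtures or helper functions
- Use constants or builders for test data so changes stay localized
```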
Evaluating Test Results
When evaluating AI-generated tests, consider these questions:
Completeness: Do the tests cover all functionality, edge cases, and error scenarios?
Isolation: Are tests independent and free from interference between test cases?
Readability: Are tests clearly written and self-documenting?
Maintainability: Will tests remain valid through reasonable implementation changes?
Performance: Do tests run efficiently and avoid unnecessary operations?
Value: Do tests provide meaningful verification rather than just improving coverage metrics?
Example: Before and After
Before: Basic Test
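As an illustration (the function and test below are invented for this example), a "before" test typically checks a single happy path with no edge cases, no error scenarios, and no statement of intent:

```python
# Hypothetical function under test (for illustration only).
def calculate_discount(price, percent):
    return price * (1 - percent / 100)

# A single happy-path check: no edge cases, no error handling,
# and no documentation of what behavior is being verified.
def test_calculate_discount():
    assert calculate_discount(100, 10) == 90
```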
After: Comprehensive Test Suite
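An "after" suite for the same hypothetical `calculate_discount` function, sketched here with the standard-library `unittest` module, adds input validation, parameterized happy-path cases, and explicit error-scenario tests:

```python
import unittest


def calculate_discount(price, percent):
    """Return price reduced by percent; reject invalid inputs."""
    if price < 0:
        raise ValueError("price must be non-negative")
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)


class TestCalculateDiscount(unittest.TestCase):
    def test_valid_discounts(self):
        """Happy-path and boundary cases, parameterized via subTest."""
        cases = [
            (100, 10, 90.0),   # typical discount
            (100, 0, 100.0),   # edge: no discount
            (100, 100, 0.0),   # edge: full discount
            (0, 50, 0.0),      # edge: free item
        ]
        for price, percent, expected in cases:
            with self.subTest(price=price, percent=percent):
                self.assertAlmostEqual(
                    calculate_discount(price, percent), expected
                )

    def test_negative_price_rejected(self):
        """Error scenario: invalid price must raise, not return garbage."""
        with self.assertRaises(ValueError):
            calculate_discount(-1, 10)

    def test_percent_out_of_range_rejected(self):
        """Error scenario: percent outside 0-100 is rejected."""
        with self.assertRaises(ValueError):
            calculate_discount(100, 150)
```

Note how each test name and docstring explains the behavior being verified, and each case stays isolated from the others.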
Testing Anti-Patterns to Avoid
When reviewing AI-generated tests, watch for these common issues:
Brittle Tests: Tests that break with minor implementation changes
Non-Isolated Tests: Tests that depend on or affect other tests
Testing Implementation Details: Focusing on how something works rather than what it does
Incomplete Assertions: Not verifying all relevant outcomes
Excessive Mocking: Mocking too much, reducing test value
Unclear Test Purpose: Tests without clear documentation of what they're verifying
Duplicate Test Logic: Repeated test setup and assertions without proper abstraction
When writing prompts, include explicit requirements that steer the model away from these anti-patterns.
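To make the "testing implementation details" anti-pattern concrete, here is a small sketch (the `Cart` class and both tests are invented for this example):

```python
class Cart:
    """Toy shopping cart used to illustrate the anti-pattern."""
    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)


# Anti-pattern: asserting on the private internal list couples the
# test to the implementation; renaming _items breaks it even though
# the cart still behaves correctly.
def test_add_brittle():
    cart = Cart()
    cart.add("book", 10)
    assert cart._items == [("book", 10)]


# Better: verify observable behavior through the public interface,
# so the test survives internal refactoring.
def test_add_behavioral():
    cart = Cart()
    cart.add("book", 10)
    cart.add("pen", 2)
    assert cart.total() == 12
```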
Conclusion
Testing-focused prompts are essential for generating comprehensive test suites that ensure code quality and reliability. By explicitly guiding AI tools to implement proper testing patterns, you can build a robust verification system for your applications.
Remember that effective tests serve multiple purposes: they verify current functionality, document expected behavior, and protect against future regressions. By investing in testing from the start through well-crafted prompts, you build more reliable and maintainable software.