Code Review Guidelines

Specialised Review Practices for AI-Generated Code

Code review is a critical quality control mechanism in software development, but AI-generated code presents unique review challenges that require specialised approaches. These guidelines provide structured review practices specifically designed for AI-generated components, ensuring thorough evaluation while maintaining development velocity.

Understanding the AI Code Review Challenge

Reviewing AI-generated code differs from traditional code review in several important ways:

  1. Comprehension Gap: Reviewers didn't participate in the prompt engineering process and may lack context

  2. Pattern Recognition: AI-generated code may follow unfamiliar patterns or approaches

  3. Bulk Generation: Larger volumes of code may be generated at once, creating review fatigue

  4. False Confidence: Well-formatted, professional-looking code can create a false sense of security

  5. Integrated Components: Generated code may interact with existing systems in non-obvious ways

The C.L.E.A.R. Review Framework addresses these challenges through a structured approach.

The C.L.E.A.R. Review Framework

Our specialised code review approach for AI-generated code follows the C.L.E.A.R. framework:

1. Context Establishment

Before reviewing the code itself, establish proper context:

  • Prompt Examination: Review the original prompt used to generate the code

  • Requirement Alignment: Confirm understanding of the requirements the code addresses

  • Generation History: Understand any iterations or refinements that occurred

  • System Integration: Identify how the code integrates with existing systems

Context Checklist:

## Context Checklist
- [ ] I have reviewed the original prompt
- [ ] I understand the specific requirements being addressed
- [ ] I am aware of any prompt iterations or refinements
- [ ] I understand where this code fits in the broader system
- [ ] I recognise the specific security and performance requirements

2. Layered Examination

Review the code in progressive layers rather than line-by-line:

  • Level 1: Structure and Architecture

    • Overall organisation and component structure

    • Architectural patterns and approach

    • Component interfaces and interactions

    • Error handling strategy

  • Level 2: Core Logic and Algorithms

    • Main business logic implementation

    • Algorithm correctness

    • Data transformation and processing

    • State management approach

  • Level 3: Security and Edge Cases

    • Input validation and sanitisation

    • Authentication and authorisation

    • Error handling implementation

    • Edge case management

  • Level 4: Performance and Efficiency

    • Resource usage and optimisation

    • Query efficiency

    • Caching strategies

    • Memory management

  • Level 5: Style and Maintainability

    • Coding standards compliance

    • Naming conventions

    • Documentation quality

    • Overall readability

Layered Review Template:

# Layered Code Review

## Level 1: Structure and Architecture
- Overall approach: [Notes]
- Component organisation: [Notes]
- Interface design: [Notes]
- Areas of concern: [Notes]

## Level 2: Core Logic and Algorithms
- Main algorithms: [Notes]
- Business logic implementation: [Notes]
- Data flow: [Notes]
- Areas of concern: [Notes]

## Level 3: Security and Edge Cases
- Security approach: [Notes]
- Input validation: [Notes]
- Error handling: [Notes]
- Areas of concern: [Notes]

## Level 4: Performance and Efficiency
- Performance considerations: [Notes]
- Resource usage: [Notes]
- Optimisation opportunities: [Notes]
- Areas of concern: [Notes]

## Level 5: Style and Maintainability
- Code style and readability: [Notes]
- Documentation quality: [Notes]
- Maintenance concerns: [Notes]
- Areas of concern: [Notes]

3. Explicit Verification

Actively verify understanding of complex or critical sections:

  • Verbalisation: Explain the code's operation in your own words

  • Mental Execution: Step through the logic with sample data (a worked tracing sketch follows the verification documentation below)

  • Boundary Testing: Consider behaviour at edge cases

  • Failure Scenario Analysis: Examine how the code handles failures

Verification Documentation:

## Critical Component Verification

### Authentication Flow
I've traced through the authentication flow and can verify that:
- Credentials are properly validated against the database
- Passwords are never stored or transmitted in plaintext
- JWT tokens are generated with appropriate expiration
- Refresh tokens are securely stored
- Failed login attempts are rate-limited appropriately

### Data Processing Pipeline
I've mentally executed the data processing pipeline and can verify that:
- Input data is properly validated
- Transformations maintain data integrity
- Error handling preserves partial results when appropriate
- The process can resume after failure
- Resource cleanup occurs in all execution paths

### [Other Critical Components...]
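
As a concrete illustration of mental execution and boundary testing, the hedged sketch below shows the kind of token-expiry logic a reviewer might trace with sample data. The `issueToken` and `isTokenValid` helpers, and the fifteen-minute lifetime, are hypothetical stand-ins rather than part of any specific codebase.

```typescript
// Hypothetical token helpers used to illustrate reviewer tracing, not a real implementation.
interface SessionToken {
  subject: string;
  issuedAt: number;   // epoch milliseconds
  expiresAt: number;  // epoch milliseconds
}

const TOKEN_LIFETIME_MS = 15 * 60 * 1000; // assumed 15-minute lifetime

function issueToken(subject: string, now: number = Date.now()): SessionToken {
  return { subject, issuedAt: now, expiresAt: now + TOKEN_LIFETIME_MS };
}

function isTokenValid(token: SessionToken, now: number = Date.now()): boolean {
  // Reviewer trace with sample data (mental execution):
  //   issueToken("alice", 1_000_000) -> expiresAt = 1_900_000
  //   isTokenValid(token, 1_500_000) -> 1_500_000 < 1_900_000 -> true
  // Boundary test: a check at exactly expiresAt should be treated as expired,
  // so the comparison must be strict "<", not "<=".
  return now < token.expiresAt;
}
```

Tracing the boundary case (a validity check at exactly `expiresAt`) is the kind of explicit verification that catches off-by-one expiry bugs a surface-level read would miss.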

4. Alternative Consideration

Evaluate the chosen approach against alternatives:

  • Pattern Evaluation: Consider whether the chosen patterns are appropriate

  • Alternative Approaches: Identify other valid implementation approaches

  • Trade-off Assessment: Analyse the trade-offs of the chosen approach

  • Implementation Efficiency: Consider whether the solution is overly complex or oversimplified

Alternative Analysis Template:

## Alternative Approach Analysis

### Current Approach
The code implements authentication using JWT tokens with database-backed validation.

### Alternative Approaches Considered
1. **Session-based authentication**
   - Pros: Simpler revocation, potentially more familiar
   - Cons: Scaling challenges, more server-side state
   - Assessment: JWT approach is more appropriate for our distributed architecture

2. **OAuth delegation**
   - Pros: Standardised, delegates security to identity provider
   - Cons: More complex implementation, external dependency
   - Assessment: Would be overkill for our current requirements

### Conclusion
The current JWT implementation is appropriate because:
- It aligns with our stateless architecture
- It provides necessary security features
- It balances complexity and functionality
- It integrates well with our existing components

5. Refactoring Recommendations

Provide specific, actionable feedback for improvement:

  • Security Enhancements: Recommend specific security improvements

  • Readability Improvements: Suggest clarifications and simplifications

  • Performance Optimisations: Identify potential performance issues

  • Maintainability Enhancements: Recommend structure or documentation improvements

Refactoring Recommendation Template (followed by a code sketch illustrating the high-priority fixes):

## Refactoring Recommendations

### High Priority
1. **Add rate limiting to authentication endpoints**
   - Current implementation has no protection against brute force
   - Recommended approach: Implement token bucket algorithm
   - Specific locations: `AuthController.login()` method

2. **Fix SQL injection vulnerability**
   - Current implementation uses string concatenation for queries
   - Recommended approach: Use parameterised queries
   - Specific locations: `UserRepository.findByUsername()` method

### Medium Priority
1. **Improve error messages**
   - Current generic errors reduce debugging ability
   - Recommended approach: Add specific error codes and messages
   - Specific locations: Throughout error handling

2. **Refactor nested conditionals**
   - Complex nested logic in validation function
   - Recommended approach: Extract validations to separate functions
   - Specific locations: `InputValidator.validateRequest()` method

### Low Priority
1. **Enhance documentation**
   - Core algorithm lacks explanation
   - Recommended approach: Add detailed comments explaining the approach
   - Specific locations: `DataProcessor.transform()` method
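
To make the high-priority recommendations above concrete, the hedged sketch below contrasts the vulnerable string-concatenation query with a parameterised version, assuming a node-postgres-style `pool.query` call. The repository function names, the `users` table, and the `Pool` wiring are illustrative assumptions, not the framework's prescribed implementation.

```typescript
import { Pool } from "pg"; // assumes node-postgres; any client with bound parameters works similarly

const pool = new Pool(); // connection details are taken from environment variables

// Vulnerable pattern the review should flag: user input is concatenated into SQL text.
async function findByUsernameUnsafe(username: string) {
  return pool.query(`SELECT id, username, email FROM users WHERE username = '${username}'`);
}

// Recommended fix: a parameterised query keeps data separate from the SQL statement.
async function findByUsername(username: string) {
  return pool.query(
    "SELECT id, username, email FROM users WHERE username = $1",
    [username],
  );
}
```

A token-bucket rate limiter for `AuthController.login()` would be reviewed in the same spirit: confirm the bucket is keyed per account or client, and that exhaustion returns a safe, non-revealing error.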

Code Review by Component Type

Different types of AI-generated components require specialised review focus:

Authentication & Authorisation Components

  • Primary Focus: Security, compliance with standards

  • Key Questions:

    • Is authentication implemented according to current best practices?

    • Are authorisation checks comprehensive and correctly placed?

    • Is token handling secure and properly implemented?

    • Are there appropriate protections against common attacks?

    • Are all security failure paths handled properly?

Authentication Review Checklist (followed by a short authorisation sketch):

## Authentication Component Review Checklist

### Authentication Implementation
- [ ] Password handling follows current best practices
- [ ] Credentials are never logged or exposed
- [ ] Authentication failures provide safe error messages
- [ ] Rate limiting or account lockout is implemented
- [ ] Session or token management is secure

### Authorisation Implementation
- [ ] Permission checks occur before protected operations
- [ ] Authorisation is enforced at all entry points
- [ ] Role-based access control is properly implemented
- [ ] Principle of least privilege is applied
- [ ] Authorisation bypass attempts are prevented

### Token Security
- [ ] Tokens have appropriate expiration
- [ ] Token validation is comprehensive
- [ ] Refresh processes are secure
- [ ] Token storage follows best practices
- [ ] Token revocation is possible
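
The checklist items on permission checks and least privilege can be illustrated with a hedged Express-style middleware sketch. The `requireRole` helper, the `user` property on the request, and the role names are assumptions for illustration, not a mandated design.

```typescript
import type { Request, Response, NextFunction } from "express";

// Hypothetical authenticated request shape; how `user` is attached depends on your auth layer.
interface AuthenticatedRequest extends Request {
  user?: { id: string; roles: string[] };
}

// Authorisation check that must run *before* the protected handler.
function requireRole(role: string) {
  return (req: AuthenticatedRequest, res: Response, next: NextFunction) => {
    if (!req.user) {
      return res.status(401).json({ error: "Authentication required" }); // safe, generic message
    }
    if (!req.user.roles.includes(role)) {
      return res.status(403).json({ error: "Insufficient permissions" }); // no detail leakage
    }
    return next();
  };
}

// Usage: router.delete("/accounts/:id", requireRole("admin"), deleteAccountHandler);
```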

Data Access Components

  • Primary Focus: Security, query efficiency, error handling

  • Key Questions:

    • Are all database queries protected against injection?

    • Is connection management implemented correctly?

    • Are queries optimised for performance?

    • Is error handling comprehensive and secure?

    • Are transactions used appropriately?

Data Access Review Checklist (followed by a transaction-handling sketch):

## Data Access Component Review Checklist

### Query Security
- [ ] All queries use parameterised statements
- [ ] No dynamic SQL through string concatenation
- [ ] Input validation occurs before query execution
- [ ] Query results are sanitised before use
- [ ] Access control is enforced at the data layer

### Performance Optimisation
- [ ] Queries select only necessary fields
- [ ] Appropriate indexes are utilised
- [ ] N+1 query problems are avoided
- [ ] Connection pooling is configured properly
- [ ] Large result sets are paginated

### Transaction Management
- [ ] Transactions are used for related operations
- [ ] Transaction boundaries are appropriately set
- [ ] Deadlock scenarios are considered
- [ ] Rollback behaviour is properly implemented
- [ ] Long-running transactions are avoided
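
As a hedged illustration of the transaction items, the sketch below shows the begin/commit/rollback shape a reviewer should expect around related writes, again assuming a node-postgres-style client. The table and column names are placeholders.

```typescript
import { Pool } from "pg";

const pool = new Pool();

// Two related writes wrapped in one transaction, with rollback on any failure
// and the connection always released (cleanup on every execution path).
async function transferFunds(fromId: string, toId: string, amount: number): Promise<void> {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    await client.query(
      "UPDATE accounts SET balance = balance - $1 WHERE id = $2",
      [amount, fromId],
    );
    await client.query(
      "UPDATE accounts SET balance = balance + $1 WHERE id = $2",
      [amount, toId],
    );
    await client.query("COMMIT");
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release();
  }
}
```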

API Endpoints

  • Primary Focus: Input validation, error handling, security

  • Key Questions:

    • Is input validation comprehensive and secure?

    • Are all endpoints properly authenticated and authorised?

    • Is error handling consistent and secure?

    • Are responses properly formatted and sanitised?

    • Is the API design RESTful and consistent?

API Endpoint Review Checklist (followed by an input-validation sketch):

## API Endpoint Review Checklist

### Input Validation
- [ ] All inputs are validated for type, format, and range
- [ ] Validation occurs before processing
- [ ] Validation errors return appropriate status codes
- [ ] Complex validations are comprehensive
- [ ] Input size limits are enforced

### Authentication & Authorisation
- [ ] Authentication is required where appropriate
- [ ] Authorisation checks are in place
- [ ] API keys or tokens are properly validated
- [ ] Rate limiting is implemented
- [ ] Sensitive operations have additional protection

### Response Handling
- [ ] Responses follow consistent format
- [ ] Error responses don't leak sensitive information
- [ ] Status codes are used appropriately
- [ ] Response data is sanitised
- [ ] Pagination is implemented for large responses
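
To ground the input-validation items, the hedged sketch below validates type, format, and range before any processing and reports errors suitable for a 400 response with a safe message. The request shape, field names, and limits are illustrative assumptions.

```typescript
// Hypothetical payload for a "create order" endpoint; fields and limits are illustrative.
interface CreateOrderRequest {
  productId: string;
  quantity: number;
}

interface ValidationResult {
  valid: boolean;
  errors: string[];
}

function validateCreateOrder(body: unknown): ValidationResult {
  const errors: string[] = [];
  const input = (body ?? {}) as Partial<CreateOrderRequest>;

  // Type and format checks run before any business logic.
  if (typeof input.productId !== "string" || !/^[A-Za-z0-9-]{1,64}$/.test(input.productId)) {
    errors.push("productId must be a short alphanumeric identifier");
  }

  // Range check with an explicit upper bound (input size limits enforced).
  if (
    typeof input.quantity !== "number" ||
    !Number.isInteger(input.quantity) ||
    input.quantity < 1 ||
    input.quantity > 1000
  ) {
    errors.push("quantity must be an integer between 1 and 1000");
  }

  return { valid: errors.length === 0, errors };
}

// In the handler: a failed validation returns 400 with the error list and never
// echoes raw input or internal details back to the caller.
```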

UI Components

  • Primary Focus: Accessibility, user experience, security

  • Key Questions:

    • Are accessibility standards followed?

    • Is user input properly validated and sanitised?

    • Are UI state transitions handled properly?

    • Is error presentation helpful and secure?

    • Does the component follow design system guidelines?

UI Component Review Checklist (followed by an output-escaping sketch):

## UI Component Review Checklist

### Accessibility
- [ ] Semantic HTML is used appropriately
- [ ] ARIA attributes are correctly implemented
- [ ] Colour contrast meets WCAG standards
- [ ] Keyboard navigation is supported
- [ ] Screen reader compatibility is maintained

### Input Handling
- [ ] Client-side validation is implemented
- [ ] Input sanitisation prevents XSS
- [ ] Form submission handles errors gracefully
- [ ] Input feedback is clear and immediate
- [ ] Default values and placeholders are appropriate

### State Management
- [ ] Component state is managed efficiently
- [ ] Loading and error states are handled
- [ ] UI updates correctly reflect data changes
- [ ] Edge cases (empty states, long content) are handled
- [ ] Performance remains acceptable with realistic data volumes
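
The XSS item in the checklist can be illustrated with a hedged escaping sketch. Most component frameworks escape interpolated text by default, so a helper like this only matters where markup is assembled by hand; the function name is an assumption.

```typescript
// Minimal HTML escaping for the rare case where markup is built from strings.
// Framework-rendered text (e.g. JSX `{value}`) is already escaped and does not need this.
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Review red flag: element.innerHTML = userInput (or React's dangerouslySetInnerHTML)
// with unescaped, unsanitised content.
const comment = '<img src=x onerror="alert(1)">';
console.log(escapeHtml(comment)); // -> &lt;img src=x onerror=&quot;alert(1)&quot;&gt;
```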

Code Review Process Integration

Integrate AI-generated code review into your development workflow:

Pre-Review Preparation

Actions before the formal review begins:

  1. Prompt Sharing: Share the original prompt with reviewers

  2. Context Documentation: Provide requirements and integration context

  3. Review Focus Guidance: Highlight areas needing special attention

  4. Tool Configuration: Set up appropriate code review tools

Review Workflow

Step-by-step process for conducting the review:

  1. Context Review: Reviewers examine prompt and requirements

  2. Layered Examination: Reviewers apply the layered approach

  3. Documentation Review: Assess accompanying documentation

  4. Issue Documentation: Document findings with clear recommendations

  5. Discussion: Collaborative discussion of complex issues

  6. Resolution Planning: Prioritise and plan issue resolution

Post-Review Actions

Steps after the review is complete:

  1. Resolution Implementation: Address identified issues

  2. Knowledge Capture: Document learnings in knowledge base

  3. Prompt Refinement: Update prompts based on review findings

  4. Process Improvement: Identify review process enhancements

  5. Verification: Confirm issues have been properly addressed

Integration with Existing Tools

Leverage your current tools for AI-generated code review:

  • GitHub/GitLab Pull Requests: Use specialised templates for AI code

  • Code Review Tools: Configure for AI-specific concerns

  • Automated Scanning: Add AI-specific checks to automated tools

  • Documentation Systems: Connect review findings to knowledge base

GitHub Pull Request Template Example:

## AI-Generated Code Review Request

### Generation Context
- Original Prompt: [Link or text]
- Requirements Addressed: [Description]
- Generation Iterations: [Number and description]
- Framework Components Used: [Prompt Engineering/Verification/etc.]

### Review Focus Areas
- [Specific area requiring attention]
- [Security considerations]
- [Performance concerns]
- [Integration points]

### Self-Verification
- [ ] I can explain how this code works line by line
- [ ] I've verified security considerations are addressed
- [ ] I've tested edge cases and failure scenarios
- [ ] I've documented design decisions and rationale

### Reviewer Guidance
Please apply the C.L.E.A.R. review framework with emphasis on [specific areas].

Special Review Considerations

Additional guidance for specific review scenarios:

High-Risk Component Review

For security-critical or high-impact components:

  • Pair Review: Two reviewers independently examine the code

  • Security Specialist Involvement: Include security team in review

  • Comprehensive Testing: Verify through extensive testing

  • External Validation: Consider external security review

  • Threat Modelling: Conduct a focused threat modelling session

Large Volume Review

When reviewing substantial amounts of AI-generated code:

  • Chunking: Break review into manageable segments

  • Priority Focus: Start with highest-risk components

  • Multiple Reviewers: Distribute review responsibilities

  • Automated Assistance: Leverage automated tools extensively

  • Extended Timeline: Allow adequate time for thorough review

Cross-Team Review

When reviewers come from a different team than the one that generated the code:

  • Enhanced Context: Provide more detailed background

  • Domain Knowledge Transfer: Ensure reviewers understand domain

  • Communication Channels: Establish clear communication paths

  • Terminology Alignment: Clarify team-specific terminology

  • Collaborative Sessions: Consider synchronous review sessions

Common Review Pitfalls

Be aware of these common pitfalls when reviewing AI-generated code:

1. Surface-Level Review

Pitfall: Reviewing only for syntax and style without deeper examination.

Prevention:

  • Apply the layered examination approach

  • Explicitly verify understanding of complex sections

  • Use checklists for thorough coverage

  • Allocate adequate time for in-depth review

2. Assumed Understanding

Pitfall: Assuming code is correct because it looks professional or comes from an AI.

Prevention:

  • Verbalise how the code works in your own words

  • Trace execution with test data

  • Question underlying assumptions

  • Verify security and edge case handling explicitly

3. Context Blindness

Pitfall: Reviewing code without understanding the requirements or system context.

Prevention:

  • Review the original prompt first

  • Understand the broader system integration

  • Clarify requirements before detailed review

  • Evaluate code in its operational context

4. Incomplete Security Review

Pitfall: Focusing on functionality while overlooking security implications.

Prevention:

  • Use security-specific checklists

  • Consider attack vectors systematically

  • Involve security specialists for critical components

  • Verify all input validation and authentication logic

5. Reviewer Fatigue

Pitfall: Reduced attention and thoroughness due to review volume.

Prevention:

  • Break reviews into manageable sessions

  • Alternate between different types of review activities

  • Use the layered approach to maintain focus

  • Leverage automated tools to reduce manual burden

Measuring Review Effectiveness

Track these metrics to gauge the effectiveness of your review process (a worked example of the first metric follows the list):

  1. Defect Detection Rate: Percentage of issues found during review vs. post-review

  2. Security Vulnerability Detection: Security issues identified in review vs. production

  3. Review Efficiency: Time invested in review relative to issues found

  4. Knowledge Improvement: Measure of understanding gained through review

  5. Prompt Improvement Rate: Enhancements to prompts resulting from review findings
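
As a small worked example of the first metric, the sketch below computes a defect detection rate from review findings and post-review escapes; the function and field names are illustrative.

```typescript
// Defect detection rate = issues caught in review / (issues caught in review + issues that escaped).
function defectDetectionRate(foundInReview: number, foundAfterRelease: number): number {
  const total = foundInReview + foundAfterRelease;
  return total === 0 ? 0 : foundInReview / total;
}

// Example: 37 issues caught in review, 5 found after release -> ~0.88 (88% detection rate).
console.log(defectDetectionRate(37, 5).toFixed(2)); // "0.88"
```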

Case Study: Review Impact

A financial technology team implementing the C.L.E.A.R. review framework for AI-generated code found:

  • Security vulnerabilities detected during review increased by 74%

  • Post-release defects decreased by 62% in reviewed components

  • Review process led to 43% improvement in prompt effectiveness

  • Knowledge preservation increased significantly through documented reviews

  • Review time decreased by 28% while maintaining quality

The team's systematic approach to context establishment and layered examination was a key factor in their success.

Getting Started with Enhanced Reviews

Take these immediate actions to improve your AI-generated code reviews:

  1. Adopt the C.L.E.A.R. framework for your next AI code review

  2. Create component-specific review checklists for your technology stack

  3. Implement the layered examination approach

  4. Document and share effective review patterns

  5. Train your team on specialised AI code review techniques

Review Framework Customisation

Adapt the framework to your specific context:

For Security-Critical Systems

Focus on comprehensive security verification:

  • Add specialised security review stages

  • Include threat modelling in the review process

  • Implement multi-reviewer approach for all critical components

  • Create detailed security checklists by component type

  • Document explicit security verification

For Rapid Development Environments

Balance thoroughness with development velocity:

  • Focus review efforts on highest-risk components

  • Automate routine aspects of review

  • Create risk-based review depth guidelines

  • Implement lightweight review for low-risk components

  • Develop efficient review templates

For Compliance-Governed Organisations

Address regulatory and compliance requirements:

  • Map review process to compliance requirements

  • Create auditable review documentation

  • Include compliance verification in review checklist

  • Establish evidence collection during review

  • Implement formal sign-off procedures

Next Steps

As you implement these review guidelines:

  • Explore Verification Protocols for comprehensive verification approaches

  • Learn about Security Checks for enhanced security verification

  • Discover Documentation Standards for preserving review knowledge

  • Review Team Collaboration for collaborative review approaches

Remember: Effective review of AI-generated code requires both technical rigour and contextual understanding. By implementing these specialised approaches, you'll significantly improve quality while maintaining development velocity.
