Glossary of terms

Vibe Coding Framework Glossary

A

AI-Assisted Development: The practice of using artificial intelligence tools to aid in software development tasks, including code generation, refactoring, testing, and documentation.

AI Champion: A designated team member responsible for maintaining prompt libraries, promoting best practices, and supporting other team members in effective AI tool usage.

AI Governance Council: A cross-functional group that establishes policies, standards, and guidelines for AI-assisted development across an organization.

AI Interaction Log: Documentation that captures the history of interactions with AI tools, including prompts, responses, and refinements.
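
As an illustration only, a single log entry might be captured along these lines; the field names and Python structure below are assumptions for the sketch, not a schema the framework prescribes:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class AIInteractionEntry:
        """One prompt/response exchange, recorded for later review and knowledge sharing."""
        prompt: str            # the prompt exactly as sent to the AI tool
        response_summary: str  # what came back, summarised or linked
        refinements: list[str] = field(default_factory=list)  # follow-up prompt adjustments
        accepted: bool = False  # whether the generated code was adopted
        logged_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    entry = AIInteractionEntry(
        prompt="Generate a Python function to validate UK postcodes.",
        response_summary="Regex-based validator; BFPO codes initially unhandled.",
        refinements=["Added constraint: handle BFPO postcodes explicitly."],
        accepted=True,
    )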

Augmentation, Not Replacement: A core principle of the Vibe Coding Framework emphasizing that AI tools should enhance human capabilities rather than replace human judgment or understanding.

B

Balanced Pragmatism: A philosophical principle of the framework that advocates for pragmatic implementation based on specific contexts rather than rigid adherence to dogmatic approaches.

C

C.L.E.A.R. Review Framework: A structured approach to reviewing AI-generated code consisting of Context Establishment, Layered Examination, Explicit Verification, Alternative Consideration, and Refactoring Recommendations.

C.O.D.E.S. Collaboration Model: A structured approach to team collaboration in AI-assisted development following five components: Collective Prompt Engineering, Open Verification Process, Distributed Knowledge Preservation, Established Governance, and Skill Development Balance.

Center of Excellence: A specialized team providing guidance, standards, and support for AI-assisted development across an organization.

Collective Prompt Engineering: The practice of transforming prompting from an individual activity to a team discipline through shared repositories, collaborative refinement, and knowledge sharing.

Component Documentation: Comprehensive documentation that captures the design decisions, implementation details, and context of AI-generated components.

Comprehension Gap: The potential disconnect between generating code and fully understanding how it works, a key risk addressed by the framework's verification protocols.

Constraint-Based Prompting: An advanced prompting technique that explicitly defines limitations and requirements that the AI-generated code must adhere to.
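
A minimal sketch of the technique, assuming a Python codebase; the prompt text and the commented-out generate_code() helper are hypothetical:

    # The constraints are stated explicitly so the AI cannot satisfy the request
    # by taking shortcuts the team would later have to undo.
    CONSTRAINED_PROMPT = """
    Write a Python function that parses a CSV export of invoices.

    Constraints the generated code must satisfy:
    - Standard library only (csv, decimal); no third-party dependencies.
    - Monetary values handled with decimal.Decimal, never float.
    - Rows with missing invoice numbers are rejected, not guessed at.
    - Maximum 40 lines, fully type-annotated, no global state.
    """

    # response = generate_code(CONSTRAINED_PROMPT)  # placeholder for whichever AI tool is in use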

Continuous Learning: A principle of the framework emphasizing ongoing education, adaptation, and refinement of AI-assisted development practices.

D

D.O.C.S. Methodology: A structured approach to documentation for AI-generated code that covers Design Decisions, Operational Context, Code Understanding, and Support Information.

Dependence Risk: The potential for developers to become overly reliant on AI tools, potentially leading to skill atrophy in areas handled by AI.

Design Decision Record: Documentation that captures the reasoning behind architectural and implementation choices in AI-generated code.

Distributed Knowledge Preservation: Ensuring that knowledge about AI-generated code is shared across the team rather than siloed with individual developers.

Documentation Generator: A component of the framework that automates and standardizes documentation for AI-generated code.

E

Edge Case Testing: Verification of AI-generated code's behavior with boundary values and unexpected inputs to ensure robustness.
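
For example, edge-case checks for a hypothetical AI-generated clamp() function might look like this sketch; plain asserts are used here, where a real suite would use the team's test framework:

    def clamp(value: float, low: float, high: float) -> float:
        """Hypothetical AI-generated function under verification."""
        return max(low, min(value, high))

    # Exercise boundary values and out-of-range inputs, not just the typical case.
    assert clamp(5, 0, 10) == 5      # typical input
    assert clamp(0, 0, 10) == 0      # exactly on the lower boundary
    assert clamp(10, 0, 10) == 10    # exactly on the upper boundary
    assert clamp(-1, 0, 10) == 0     # below the valid range
    assert clamp(11, 0, 10) == 10    # above the valid range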

Example-Driven Prompting: A technique that provides specific patterns and examples to guide AI in generating code that follows established conventions.
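
A sketch of what such a prompt might contain; the CustomerRepository snippet embedded in it is an invented stand-in for an existing in-house pattern:

    EXAMPLE_DRIVEN_PROMPT = """
    Generate a repository class for Invoice records.

    Follow the same pattern as this existing class in our codebase:

        class CustomerRepository:
            def __init__(self, session):
                self._session = session

            def get_by_id(self, customer_id: int):
                return self._session.get(Customer, customer_id)

    Match the naming, constructor injection, and return conventions exactly.
    """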

Expert Review: Specialized review of security-critical or complex AI-generated components by domain experts.

F

Format (in S.C.A.F.F.): The section of a prompt that defines the expected structure and style of the code to be generated.

Foundations (in S.C.A.F.F.): The section of a prompt that specifies security and quality requirements for the code to be generated.

Framework Components: The core elements of the Vibe Coding Framework, including the Prompt Engineering System, Verification Protocols, Security Toolkit, Documentation Generator, and Refactoring Tools.

G

Governance Structure: The organizational hierarchy and decision-making framework for AI-assisted development, typically including executive sponsorship, committee oversight, and implementation teams.

K

Knowledge Base: A centralized repository of lessons learned, effective practices, and insights related to AI-assisted development.

Knowledge Preservation: A core principle of the framework emphasizing the importance of capturing and sharing understanding of AI-generated code.

Knowledge Sharing Session: Regular team meetings dedicated to exchanging insights, effective practices, and lessons learned in AI-assisted development.

L

Layered Examination: A code review approach that evaluates code in progressive levels, from high-level structure to detailed implementation.

Local LLM Solutions: Tools and platforms that enable running AI models locally for enhanced privacy, security, and offline capability.

M

Maintainability First: A core principle of the framework prioritizing long-term code maintainability even when using tools that excel at rapid initial development.

P

Pair Verification: A collaborative practice where two developers review AI-generated code together to ensure quality and shared understanding.

Pattern Recognition: The identification of design patterns, architectural approaches, and implementation strategies in AI-generated code.

Prompt Engineering System: A core component of the framework providing methodologies, templates, and best practices for crafting effective prompts.

Prompt Library: A structured repository of effective prompts categorized by purpose, technology, or component type.
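
A minimal sketch of how such a repository could be organised in code; the categories, keys, and find_prompts() helper are illustrative assumptions, as the framework does not mandate a storage format:

    # Prompts grouped by purpose so team members can find and reuse proven ones.
    PROMPT_LIBRARY = {
        "security": {
            "input-validation": "Generate input validation for ... that rejects ...",
            "sql-parameterisation": "Rewrite this query using parameterised statements ...",
        },
        "testing": {
            "unit-test-scaffold": "Produce unit tests covering the happy path and edge cases for ...",
        },
        "refactoring": {
            "extract-component": "Split this function into smaller units while preserving behaviour ...",
        },
    }

    def find_prompts(category: str) -> dict:
        """Return all stored prompts for a category, or an empty dict if none exist."""
        return PROMPT_LIBRARY.get(category, {})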

Prompt Management System: Tools and platforms for creating, storing, versioning, and sharing prompts within a team or organization.

Prompt Refinement Process: The iterative approach to improving prompts based on the quality of generated code and specific requirements.

R

R.E.F.A.C.T. Methodology: A structured approach to refactoring AI-generated code consisting of Recognise Patterns, Extract Components, Format for Readability, Address Edge Cases, Confirm Functionality, and Tune Performance.

Refactoring Tools: A component of the framework providing methodologies, patterns, and techniques for transforming AI-generated code into maintainable, readable, and efficient solutions.

S

S.C.A.F.F. Prompt Structure: A structured format for creating effective prompts consisting of Situation, Challenge, Audience, Format, and Foundations.
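
As a sketch, a S.C.A.F.F.-structured prompt might read as follows; the project details are invented for illustration:

    SCAFF_PROMPT = """
    Situation: Python 3.12 FastAPI service, PostgreSQL via SQLAlchemy, deployed on Kubernetes.
    Challenge: Add an endpoint returning a customer's last 10 orders, with pagination.
    Audience: Mid-level developers who will maintain the code without the original author.
    Format: A single router module with type hints, docstrings, and no global state.
    Foundations: Parameterised queries only, validation on all path and query parameters,
    structured error responses, and unit tests covering the pagination boundaries.
    """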

S.E.C.U.R.E. Verification Framework: A comprehensive approach to security verification for AI-generated code consisting of Surface Vulnerability Scanning, Evaluation Against Attack Scenarios, Control Verification, Unexpected Scenario Testing, Remediation Validation, and Expert Review.

S.H.I.E.L.D. Security Methodology: A security approach for AI-generated code consisting of Secure by Design Prompting, Hardening Review Process, Injection Prevention Patterns, Encryption and Data Protection, Least Privilege Enforcement, and Defence-in-Depth Strategy.

Security by Design: A core principle of the framework that integrates security considerations throughout the entire development process, from initial prompt construction through final verification.

Security Toolkit: A component of the framework providing specialized tools, techniques, and patterns to address the unique security challenges of AI-generated code.

Situation (in S.C.A.F.F.): The section of a prompt that establishes the development context, including project background, architecture, and technology stack.

Skill Development Balance: Ensuring equitable skill growth across a team by balancing AI-assisted development with human expertise development.

Surface Vulnerability Scanning: Automated security scanning to identify common vulnerabilities in AI-generated code.

T

Technical Debt Accumulation: The risk of building up design flaws, implementation shortcuts, and maintenance challenges through rapid AI-assisted development without appropriate structure.

Test-Driven Prompting: A technique that specifies expected behavior through test cases to guide AI in generating code that meets specific requirements.
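
A hedged illustration: the tests are written first and embedded in the prompt; slugify() and the cases shown are invented for the example:

    TEST_DRIVEN_PROMPT = """
    Implement slugify(title: str) -> str so that all of these tests pass:

        assert slugify("Hello World") == "hello-world"
        assert slugify("  spaced   out  ") == "spaced-out"
        assert slugify("Already-a-slug") == "already-a-slug"
        assert slugify("") == ""

    Do not change the tests; the implementation must satisfy them as written.
    """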

Trust-But-Verify Principle: A foundational approach of the framework emphasizing thorough verification of AI-generated code before integration.

V

V.E.R.I.F.Y. Protocol Framework: A structured approach to verification of AI-generated code consisting of Verbalise, Examine Dependencies, Review Security Implications, Inspect Edge Cases, Functional Validation, and Yield Improvements.

Verification Level: A classification of verification depth (Basic, Standard, Enhanced) based on component criticality and risk.

Verification Protocols: A core component of the framework establishing systematic validation processes to ensure developers fully understand the code they're implementing.

Verification Session: A structured meeting where team members collaboratively review and verify AI-generated components.

W

Workflow Integration: The process of embedding framework practices into existing development workflows, including planning, implementation, review, and deployment processes.

Other Terms

90-Day Implementation Roadmap: A structured approach to implementing the framework over 90 days, consisting of Foundation, Proficiency, and Mastery phases.

12-Month Implementation Roadmap: A phased approach to implementing the framework across an enterprise, consisting of Foundation, Expansion, Standardisation, and Optimisation phases.
