Prompt Engineering Patterns

Master the art of creating effective prompts that leverage Orchestre's dynamic intelligence. These patterns help you communicate intent clearly and get optimal results.

Core Principles

  1. Clarity Over Brevity - Be specific about what you want
  2. Context Over Assumption - Provide relevant context
  3. Intent Over Implementation - Describe the goal, not the how
  4. Iterative Over Perfect - Refine through interaction

Effective Prompt Patterns

Pattern: Intent-Driven Prompts

Principle: Focus on what you want to achieve, not how to achieve it

Poor: Implementation-focused

bash
/execute-task "Create a UserController with GET, POST, PUT, DELETE methods"

Better: Intent-focused

bash
/execute-task "Add user management with full CRUD operations"

Why it works:

  • Allows Orchestre to choose best implementation
  • Adapts to your project's patterns
  • Results in more idiomatic code

Pattern: Context-Rich Prompts

Principle: Provide relevant context for better results

Poor: Missing context

bash
/execute-task "Add authentication"

Better: Context-rich

bash
/execute-task "Add JWT authentication for our REST API, 
integrating with existing user model and middleware pattern"

Why it works:

  • Orchestre understands integration points
  • Maintains consistency with existing code
  • Reduces back-and-forth clarification

Pattern: Constraint-Based Prompts

Principle: Specify important constraints and requirements

Poor: No constraints

bash
/execute-task "Create a file upload feature"

Better: Clear constraints

bash
/execute-task "Create file upload feature with:
- 10MB size limit
- Image files only (jpg, png, webp)
- Virus scanning before storage
- CDN integration for serving"

Why it works:

  • Prevents assumptions
  • Ensures requirements are met
  • Guides implementation decisions
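
Constraints like those in the example above translate directly into validation checks. As a rough illustration (not Orchestre output), here is a minimal Python sketch of the first two constraints; the limit values and function name are assumptions, and virus scanning and CDN integration would be separate steps:

```python
from pathlib import Path

MAX_BYTES = 10 * 1024 * 1024                     # 10MB size limit (assumed)
ALLOWED_EXTENSIONS = {".jpg", ".png", ".webp"}   # image files only

def validate_upload(filename: str, size_bytes: int) -> list[str]:
    """Return a list of constraint violations; an empty list means the upload passes."""
    errors = []
    if size_bytes > MAX_BYTES:
        errors.append(f"file exceeds {MAX_BYTES} byte limit")
    if Path(filename).suffix.lower() not in ALLOWED_EXTENSIONS:
        errors.append(f"extension {Path(filename).suffix!r} not allowed")
    # Virus scanning and CDN upload would run only after these checks pass.
    return errors
```

Spelling the constraints out in the prompt is what makes checks like these appear in the generated code instead of being silently skipped.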

Discovery Prompts

Pattern: Exploratory Analysis

Use when: Starting a new feature or investigating options

bash
/orchestrate "Explore options for implementing [feature] considering:
- Current architecture constraints
- Performance requirements
- Security implications
- Maintenance burden"

Example:

bash
/orchestrate "Explore options for implementing real-time chat considering:
- Our current REST API architecture
- Need for <100ms message delivery
- End-to-end encryption requirements
- Long-term maintenance by small team"

Pattern: Comparative Analysis

Use when: Choosing between approaches

bash
/research "Compare [option1] vs [option2] for [use-case] considering:
- Implementation complexity
- Performance characteristics
- Maintenance requirements
- Team expertise"

Example:

bash
/research "Compare WebSockets vs Server-Sent Events for notifications considering:
- Implementation complexity in our Node.js stack
- Performance for 10k concurrent users
- Maintenance requirements
- Our team's real-time experience"

Planning Prompts

Pattern: Hierarchical Planning

Use when: Breaking down complex features

bash
/orchestrate "Plan [feature] with:
Top-level: [main components]
For each component:
  - Core functionality
  - Integration points
  - Testing strategy
  - Deployment considerations"

Example:

bash
/orchestrate "Plan e-commerce checkout with:
Top-level: Cart, Payment, Order, Notification
For each component:
  - Core functionality
  - Integration points
  - Testing strategy
  - Deployment considerations"

Pattern: Risk-Aware Planning

Use when: Working with uncertain requirements

bash
/orchestrate "Plan [feature] identifying:
- Technical risks and mitigation strategies
- Unknown requirements needing research
- Dependencies on external systems
- Fallback approaches for each risk"

Implementation Prompts

Pattern: Progressive Enhancement

Use when: Building features incrementally

bash
/execute-task "Implement basic [feature] with minimal functionality"
/execute-task "Enhance [feature] with [additional capability]"
/execute-task "Optimize [feature] for [specific requirement]"

Example:

bash
/execute-task "Implement basic search with exact string matching"
/execute-task "Enhance search with fuzzy matching and relevance scoring"
/execute-task "Optimize search for sub-100ms response time"
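
The first step of that sequence (exact string matching) might look like the minimal Python sketch below; the function name and documents are illustrative. The point of progressive enhancement is that later prompts can layer fuzzy matching and relevance scoring behind the same signature without changing call sites:

```python
def search(documents: list[str], query: str) -> list[str]:
    """Step 1: basic case-insensitive substring matching.
    Later iterations would add fuzzy matching and relevance
    scoring behind this same signature."""
    q = query.lower()
    return [doc for doc in documents if q in doc.lower()]

docs = ["Getting Started", "Prompt Patterns", "Advanced prompting"]
results = search(docs, "prompt")  # matches the last two documents
```
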

Pattern: Example-Driven Implementation

Use when: You have specific behavior in mind

bash
/execute-task "Implement [feature] that works like this:
Input: [example input]
Process: [what should happen]
Output: [expected output]
Edge cases: [special scenarios]"

Example:

bash
/execute-task "Implement discount calculation that works like this:
Input: Cart total $100, Coupon 'SAVE20'
Process: Apply 20% discount, max $50
Output: {original: 100, discount: 20, final: 80}
Edge cases: Expired coupons, minimum purchase requirements"

Review Prompts

Pattern: Focused Review

Use when: Concerned about specific aspects

bash
/review --focus "[specific concern]"

Examples:

bash
/review --focus "security vulnerabilities in authentication"
/review --focus "performance bottlenecks in database queries"
/review --focus "error handling completeness"

Pattern: Comparative Review

Use when: Ensuring consistency

bash
/review --compare "[reference implementation]"

Example:

bash
/review --compare "user service implementation for consistency"

Optimization Prompts

Pattern: Metric-Driven Optimization

Use when: You have specific performance targets

bash
/orchestrate "Optimize [feature] to achieve:
- Metric 1: [target]
- Metric 2: [target]
Current baseline: [measurements]"

Example:

bash
/orchestrate "Optimize API response time to achieve:
- P95 latency: <200ms
- Throughput: >1000 req/s
Current baseline: P95=500ms, Throughput=400 req/s"
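
Stating the current baseline requires measuring it first. As a rough sketch (using the nearest-rank method; the sample data and helper name are illustrative), P95 can be computed from raw latency samples like this:

```python
import math

def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile: the smallest sample such that at least
    pct% of all samples are at or below it."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = max(1, math.ceil(len(ordered) * pct / 100))  # 1-based nearest rank
    return ordered[rank - 1]

latencies_ms = [120, 130, 140, 150, 160, 170, 180, 200, 500, 900]
baseline_p95 = percentile(latencies_ms, 95)  # 900 -- far from a <200ms target
```

Measuring before prompting keeps the optimization goal honest: "P95=500ms, target <200ms" is actionable in a way "make it faster" is not.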

Pattern: Constraint-Based Optimization

Use when: Working within limitations

bash
/execute-task "Optimize [feature] within constraints:
- Maximum memory: [limit]
- CPU budget: [limit]
- Cannot modify: [existing APIs]"

Composition Prompts

Pattern: Workflow Composition

Use when: Creating multi-step processes

bash
/compose-prompt "Create workflow for [goal]:
Prerequisites: [what's needed]
Steps:
1. [First major milestone]
2. [Second major milestone]
Success criteria: [how to verify]"

Example:

bash
/compose-prompt "Create workflow for user onboarding:
Prerequisites: User registered, email verified
Steps:
1. Profile setup (name, avatar, preferences)
2. Initial data import
3. Tutorial completion
4. First action prompt
Success criteria: User completes first meaningful action"

Pattern: Conditional Composition

Use when: Workflow depends on conditions

bash
/compose-prompt "Create adaptive workflow for [goal]:
If [condition1]: [path1]
If [condition2]: [path2]
Default: [default path]"
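
This conditional structure maps naturally onto an ordered list of (predicate, path) pairs with a default fallback. A minimal Python sketch, where the predicates and path names are purely illustrative:

```python
from typing import Callable

Branch = tuple[Callable[[dict], bool], str]

def run_adaptive_workflow(context: dict, branches: list[Branch], default: str) -> str:
    """Return the first path whose predicate matches the context, else the default."""
    for predicate, path in branches:
        if predicate(context):
            return path
    return default

# Hypothetical branches: conditions are evaluated in order.
branches = [
    (lambda ctx: ctx.get("user_type") == "enterprise", "enterprise-onboarding"),
    (lambda ctx: bool(ctx.get("trial")), "trial-onboarding"),
]

run_adaptive_workflow({"user_type": "enterprise"}, branches, "standard-onboarding")
```

Listing the conditions in order of priority in your prompt mirrors this first-match semantics and removes ambiguity about overlapping conditions.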

Learning Prompts

Pattern: Pattern Extraction

Use when: Identifying reusable patterns

bash
/learn "From implementing [feature], key patterns are:
- Pattern 1: [description]
- Pattern 2: [description]
Best practices discovered: [insights]
Potential improvements: [ideas]"

Pattern: Retrospective Learning

Use when: Completing major features

bash
/learn "Retrospective on [feature]:
What worked well: [successes]
Challenges faced: [difficulties]
Solutions found: [approaches]
Future recommendations: [advice]"

Advanced Patterns

Pattern: Multi-Perspective Prompts

Use when: You need comprehensive analysis

bash
/orchestrate "Analyze [feature] from perspectives:
- User: [user concerns]
- Developer: [development concerns]
- Operations: [operational concerns]
- Business: [business concerns]"

Pattern: Scenario-Based Prompts

Use when: Planning for different situations

bash
/orchestrate "Design [feature] handling scenarios:
- Happy path: [normal flow]
- Error scenario: [failure handling]
- Edge case: [unusual situation]
- Scale scenario: [high load handling]"

Anti-Patterns to Avoid

Anti-Pattern: Overly Prescriptive

Avoid:

bash
/execute-task "Create file controllers/UserController.js with 
exactly these methods in this order..."

Better:

bash
/execute-task "Add user management endpoints following our patterns"

Anti-Pattern: Ambiguous Intent

Avoid:

bash
/execute-task "Make it better"

Better:

bash
/execute-task "Improve API response time by implementing caching"

Anti-Pattern: Missing Context

Avoid:

bash
/execute-task "Add the feature we discussed"

Better:

bash
/execute-task "Add password reset feature with email verification"

Prompt Debugging

When Prompts Don't Work

  1. Check Context

    bash
    /status
    /discover-context
  2. Add Specificity

    • Include examples
    • Specify constraints
    • Provide context
  3. Break It Down

    bash
    # Instead of one complex prompt
    /compose-prompt "Break down [complex task] into steps"
  4. Use Discovery

    bash
    /research "Best approach for [challenge]"

Quick Reference Card

Basic Formula

/command "action + target + context + constraints"

Examples

bash
# Feature Implementation
/execute-task "Add [feature] that [does X] using [approach] with [constraints]"

# Planning
/orchestrate "Plan [goal] considering [context] with phases for [milestones]"

# Review
/review --focus "[specific concern] in [component]"

# Learning
/learn "Key insight: [pattern] works well for [use case] because [reason]"

Practice Exercises

  1. Convert these poor prompts to effective ones:

    • "Add search"
    • "Make it faster"
    • "Fix the bug"
  2. Create prompts for:

    • Adding a notification system
    • Optimizing database queries
    • Implementing file uploads
  3. Practice composition:

    • Create a workflow for user registration
    • Plan a feature rollout
    • Design a testing strategy

Built with ❤️ for the AI Coding community, by Praney Behl