# AI Integration Examples

Integrating AI capabilities into applications using Orchestre's prompt-driven workflows.

## LLM Integration

### Getting Started with AI Features
```bash
# Create AI-powered app
orchestre create makerkit-nextjs ai-assistant

# Orchestre discovers your project and provides intelligent prompts:
"I'll analyze your project structure and suggest AI integration patterns.
Let me explore what you're building..."

# The prompts adapt based on what they find
"I see you're using Next.js with App Router. I'll suggest streaming
patterns that work well with React Server Components..."
```

### Memory Evolution During Development
```markdown
# Initial state in CLAUDE.md:
Project: AI Assistant
Status: Initialized
Goal: Build conversational AI with streaming responses

# After first prompt:
## AI Integration Decisions
- Using Vercel AI SDK for streaming
- OpenAI GPT-4 for main model
- Implemented backpressure handling
- Token usage tracking added

# After optimization:
## Performance Insights
- Streaming reduces perceived latency by 60%
- Chunking at 20 tokens optimal for UX
- Redis caching saves 40% on repeated queries
```

### Building AI Features
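The memory entries above record that 20-token chunks gave the best UX. The grouping step itself is small enough to sketch independently of any SDK; `chunkTokens` is an illustrative name, not an Orchestre or Vercel AI SDK function:

```typescript
// Group model tokens into fixed-size chunks before flushing to the client.
// The default of 20 mirrors the "chunking at 20 tokens" memory note above.
function chunkTokens(tokens: string[], size = 20): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < tokens.length; i += size) {
    // Join each window of tokens into one chunk to send as a unit
    chunks.push(tokens.slice(i, i + size).join(""));
  }
  return chunks;
}
```

In a real streaming route the same grouping would run over an async iterator of tokens, emitting each joined chunk as soon as it fills.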
```bash
# Prompts discover and adapt:
"Add AI chat assistant with streaming responses"

# Orchestre analyzes your codebase:
"I found your existing API structure. I'll integrate the chat endpoint
following your established patterns in /app/api/..."

# Memory captures the implementation:
/features/chat/CLAUDE.md:
- Streaming implementation details
- Error boundary strategies
- Rate limiting approach
```

## Natural Language Processing

### Context-Aware NLP Features
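To show where sentiment analysis plugs into a feedback pipeline, here is a deliberately naive lexicon-based scorer as a stand-in; a real implementation would call an LLM or a hosted sentiment model, and the word lists are purely illustrative:

```typescript
// Toy stand-in for an AI sentiment call: counts known positive/negative
// words and maps the net score to a label. Illustrative only.
const POSITIVE = new Set(["great", "love", "excellent"]);
const NEGATIVE = new Set(["bad", "hate", "broken"]);

function sentiment(text: string): "positive" | "negative" | "neutral" {
  let score = 0;
  for (const word of text.toLowerCase().split(/\W+/)) {
    if (POSITIVE.has(word)) score += 1;
    if (NEGATIVE.has(word)) score -= 1;
  }
  return score > 0 ? "positive" : score < 0 ? "negative" : "neutral";
}
```

The interesting part for integration is the return type: a small closed label set is easy to store in an existing Postgres schema and to assert on in tests.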
```bash
# Add sentiment analysis
"Implement customer feedback sentiment analysis"

# Orchestre discovers existing data flows:
"I see you have a feedback collection system. I'll add sentiment
analysis that integrates with your existing Postgres schema..."

# Uses resource URIs:
orchestre://project/features/feedback/sentiment
├── Implementation details
├── Model selection rationale
└── Accuracy benchmarks
```

### Multi-Language Support
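The cost lesson captured in the memory below ("cache AI translations") comes down to memoizing by locale and source text. A minimal in-memory sketch, where `translate` stands in for the actual AI call:

```typescript
// Wrap an expensive translation function with a (locale, text) cache so
// repeated strings are translated once. In-memory sketch; a real app
// would persist the cache (e.g. Redis or Postgres).
type Translate = (text: string, locale: string) => string;

function cachedTranslator(translate: Translate) {
  const cache = new Map<string, string>();
  let misses = 0;
  return {
    translate(text: string, locale: string): string {
      const key = `${locale}:${text}`;
      if (!cache.has(key)) {
        misses += 1; // only cache misses hit the paid AI call
        cache.set(key, translate(text, locale));
      }
      return cache.get(key)!;
    },
    get misses() {
      return misses;
    },
  };
}
```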
```bash
# Adaptive translation features
"Add multi-language support with AI translation"

# Prompt explores internationalization needs:
"Checking your current i18n setup... I found next-intl configuration.
I'll extend it with on-demand AI translation for user content..."

# Memory evolution:
/i18n/CLAUDE.md:
Before: Static translations only
After: Hybrid static + AI translation
Learned: Cache AI translations for cost efficiency
```

## Computer Vision

### Image Processing Workflows
```bash
# AI-powered image features
"Implement AI image tagging system"

# Discovery phase:
"Analyzing your media handling... Found sharp for processing.
I'll add AI tagging that works with your existing upload flow..."

# Resource tracking:
orchestre://project/features/media/ai-tagging
├── Model: ResNet for classification
├── Processing: Queue-based with BullMQ
└── Storage: Tags in Postgres jsonb
```

### Privacy-Preserving Features
```bash
"Add privacy-preserving face blur for uploaded images"

# Prompt adapts to compliance needs:
"I notice GDPR compliance markers in your codebase. I'll implement
face detection with immediate blur, no face data stored..."

# Compliance documented:
/compliance/CLAUDE.md:
- Face detection runs client-side when possible
- Server processing uses ephemeral memory only
- Audit log tracks processing without storing biometric data
```

## AI Search

### Semantic Search Implementation
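At the core of semantic search is ranking documents by similarity between embeddings. A self-contained sketch of that ranking step (real deployments would use pgvector or a vector database rather than an in-memory scan):

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the ids of the k documents most similar to the query embedding.
function topK(
  query: number[],
  docs: { id: string; embedding: number[] }[],
  k = 3,
): string[] {
  return [...docs]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k)
    .map((d) => d.id);
}
```

A hybrid setup (as in the v3 memory entry below) would combine this score with a BM25 keyword score before ranking.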
```bash
# Vector search with embeddings
"Implement semantic search for documentation"

# Intelligent exploration:
"Found your docs in /content/. I'll analyze the structure and suggest
an embedding strategy that preserves your information architecture..."

# Evolution tracked:
/search/CLAUDE.md:
v1: Keywords only
v2: Added embeddings with OpenAI
v3: Hybrid search with BM25 + vectors
Performance: 3x better relevance scores
```

### RAG System
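The chunking parameters recorded below (512-token chunks, 50-token overlap) correspond to a simple sliding window over the token sequence. A generic sketch:

```typescript
// Split a token sequence into overlapping chunks so context at chunk
// boundaries is not lost. Defaults mirror the RAG memory entry below.
function overlappingChunks<T>(tokens: T[], size = 512, overlap = 50): T[][] {
  if (size <= overlap) throw new Error("size must exceed overlap");
  const chunks: T[][] = [];
  // Advance by (size - overlap) so consecutive chunks share `overlap` tokens
  for (let start = 0; start < tokens.length; start += size - overlap) {
    chunks.push(tokens.slice(start, start + size));
    if (start + size >= tokens.length) break; // last window reached the end
  }
  return chunks;
}
```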
```bash
"Build retrieval-augmented generation for support docs"

# Context-aware implementation:
"I see your support ticket system. I'll create a RAG pipeline that
suggests relevant docs before agents respond..."

# Memory captures insights:
orchestre://project/features/support/rag
├── Chunking: 512 tokens with 50 token overlap
├── Reranking: Cross-encoder for top 10
└── Prompt: Few-shot with role examples
```

## Voice Integration

### Speech Processing
```bash
# For React Native apps
"Add voice command integration"

# Platform-aware implementation:
"Detected React Native project. I'll implement voice commands using
platform-specific APIs with a unified interface..."

# Cross-platform patterns captured:
/mobile/voice/CLAUDE.md:
iOS: SFSpeechRecognizer with privacy prompts
Android: SpeechRecognizer with offline fallback
Shared: Command grammar in TypeScript
```

## AI Workflows

### Intelligent Automation
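The "confidence thresholds per type" and "human-in-the-loop triggers" recorded below can be sketched as a small routing function: auto-accept a classification above its type's threshold, otherwise queue it for review. Threshold values are illustrative:

```typescript
type Decision = { label: string; action: "auto" | "review" };

// Per-document-type confidence thresholds (illustrative values).
const THRESHOLDS: Record<string, number> = {
  invoice: 0.9,
  contract: 0.95,
  report: 0.85,
};

// Route a classifier output: confident results pass through automatically,
// everything else triggers human review.
function route(label: string, confidence: number): Decision {
  const threshold = THRESHOLDS[label] ?? 1; // unknown types always reviewed
  return { label, action: confidence >= threshold ? "auto" : "review" };
}
```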
```bash
"Create automated document classification system"

# Discovers document types:
"Analyzing your document patterns in /uploads/... Found invoices,
contracts, and reports. I'll create classifiers for each..."

# Workflow memory:
orchestre://project/workflows/document-classification
├── Pipeline stages documented
├── Confidence thresholds per type
├── Human-in-the-loop triggers
└── Performance metrics tracked
```

## Best Practices Discovered

### Performance Patterns
The memory system captures what works:
```markdown
/ai/performance/CLAUDE.md:
- Stream tokens for chat (20 token chunks optimal)
- Batch embeddings (100 docs at once)
- Cache at embedding level, not response level
- Use edge functions for <100ms latency
```

### Cost Management Evolution
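The biggest lever in the cost drop recorded below is routing: send cheap classification-style queries to a smaller model and reserve the expensive model for complex ones. A minimal sketch; the heuristic and model names are illustrative, not Orchestre's actual routing logic:

```typescript
// Pick the cheaper model unless the query looks complex enough to need
// the larger one. Heuristic is deliberately simple and illustrative.
function pickModel(query: string): "gpt-3.5-turbo" | "gpt-4" {
  const complex =
    query.length > 280 || /\b(why|compare|analyze|explain)\b/i.test(query);
  return complex ? "gpt-4" : "gpt-3.5-turbo";
}
```

In production the "complexity" signal is usually itself a cheap classifier call rather than a regex, but the routing shape is the same.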
```markdown
/ai/costs/CLAUDE.md:
Initial: $500/day with GPT-4
Optimized: $50/day with:
- GPT-3.5 for classification
- GPT-4 only for complex queries
- Embedding cache (90% hit rate)
- Request batching
```

### Safety Measures Learned
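"Rate limit by user AND IP" (noted below) means a request must pass both windows, so a burst from one account or one address is throttled even if the other dimension is quiet. An in-memory sliding-window sketch; production code would typically back this with Redis:

```typescript
// Sliding-window rate limiter keyed by both user id and client IP.
// A request is allowed only if BOTH windows are under the limit.
class RateLimiter {
  private hits = new Map<string, number[]>();

  constructor(private limit: number, private windowMs: number) {}

  // Drop timestamps that have aged out of the window, keep the rest.
  private recent(key: string, now: number): number[] {
    const list = (this.hits.get(key) ?? []).filter(
      (t) => now - t < this.windowMs,
    );
    this.hits.set(key, list);
    return list;
  }

  allow(userId: string, ip: string, now = Date.now()): boolean {
    const windows = [`u:${userId}`, `ip:${ip}`].map((k) => this.recent(k, now));
    if (windows.some((w) => w.length >= this.limit)) return false;
    for (const w of windows) w.push(now); // arrays are shared with the map
    return true;
  }
}
```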
```markdown
/ai/safety/CLAUDE.md:
Discovered through incidents:
- Content filter before AND after generation
- Rate limit by user AND IP
- Prompt injection detection patterns
- Output schema validation
```

## Integration Examples

### With Existing Systems
```bash
"Integrate AI features with our CRM"

# Discovers integration points:
"Found Salesforce API client in /lib/. I'll add AI-powered lead
scoring that syncs with your existing pipeline..."

# Integration patterns:
orchestre://project/integrations/ai-crm
├── Webhook listeners for CRM events
├── AI scoring on lead updates
└── Sync results back to Salesforce
```

### API Patterns
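The "standard error format" mentioned below is the kind of convention worth pinning down in one helper so every AI endpoint returns the same envelope. The shape here is a hypothetical example, not a documented Orchestre format:

```typescript
// One shared error envelope for all /api/v1/ai/* endpoints, so clients can
// handle rate limits and validation failures uniformly. Shape is illustrative.
type ApiError = { error: { code: string; message: string; status: number } };

function apiError(status: number, code: string, message: string): ApiError {
  return { error: { code, message, status } };
}
```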
```bash
"Create AI API endpoints following our standards"

# Learns from codebase:
"I see you use REST with /api/v1/ pattern and Zod validation.
I'll create AI endpoints that match your conventions..."

# Standards captured:
/api/standards/CLAUDE.md:
- POST /api/v1/ai/complete - Text completion
- POST /api/v1/ai/embed - Generate embeddings
- GET /api/v1/ai/models - List available models
All with standard error format and rate limiting
```

## Testing AI Features

### Intelligent Test Generation
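"Mock at SDK level" and "test streaming with async iterators" (from the insights below) combine naturally: a mock that yields scripted tokens through an async iterator exercises the real streaming code path without any HTTP. A sketch with illustrative names:

```typescript
// Deterministic stand-in for a streaming model client: yields the scripted
// tokens one by one, like an SDK stream would.
async function* mockStream(tokens: string[]): AsyncGenerator<string> {
  for (const t of tokens) yield t;
}

// Consume any async token stream into a full string — the same consumer
// the production streaming code would use.
async function collect(stream: AsyncIterable<string>): Promise<string> {
  let out = "";
  for await (const chunk of stream) out += chunk;
  return out;
}
```

In Vitest this mock would replace the SDK client via `vi.mock`, leaving everything downstream of the stream untouched.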
```bash
"Write tests for AI features"

# Discovers testing patterns:
"Found Vitest setup with MSW for mocking. I'll create tests that
mock AI responses while preserving streaming behavior..."

# Test insights:
/ai/testing/CLAUDE.md:
- Mock at SDK level, not HTTP
- Test streaming with async iterators
- Deterministic seeds for embeddings
- Snapshot testing for prompts
```

## See Also
- orchestre://templates/makerkit/ai-patterns - Template-specific AI patterns
- orchestre://knowledge/ai-streaming - Streaming implementation guide
- orchestre://patterns/cost-optimization - AI cost management strategies
