# Performance Best Practices
Optimize your Orchestre applications for speed, efficiency, and scalability. These practices apply to development workflow, generated code, and runtime performance.
## Development Performance

### 1. Efficient Command Usage

#### Batch Operations

Instead of multiple sequential commands:

```bash
# ❌ Slow: Sequential execution
/execute-task "Create user model"
/execute-task "Create user API"
/execute-task "Create user UI"
# ✅ Fast: Parallel execution
/orchestrate "Create complete user management feature with model, API, and UI"Use Specific Commands
bash
# ❌ Generic (requires more AI processing)
/execute-task "Add authentication"
# ✅ Specific (faster, better results)
/setup-oauth "Google and GitHub providers"2. Context Management
Minimize Context Size
bash
# ❌ Large context
/review # Reviews entire codebase
# ✅ Focused context
/review src/api/users.ts --security
```

#### Cache Discoveries

```bash
# First run discovers patterns
/learn
# Subsequent commands use cached patterns
/execute-task "Add new endpoint following patterns"3. Parallel Development
Leverage parallel workflows for complex features:
bash
/setup-parallel "frontend" "backend" "tests"
/distribute-tasks "
frontend: Build UI components
backend: Create API endpoints
tests: Write test scenarios
"Code Generation Performance
1. Template Optimization
Use Template-Specific Commands
bash
# MakerKit SaaS
/add-subscription-plan "Pro tier" # Optimized for Stripe
# Cloudflare Workers
/add-edge-function "Image optimization" # Edge-aware
# React Native
/add-offline-sync "User data" # Mobile-optimized
```

#### Leverage Template Knowledge

Templates include performance optimizations:
- Pre-configured bundling
- Optimized dependencies
- Performance patterns
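
For example, a template's bundler setup typically looks something like the following. This is a minimal Vite sketch; the chunk names and package list are illustrative assumptions, not taken from any specific template.

```typescript
// vite.config.ts: a sketch of template-style bundling defaults (illustrative only)
import { defineConfig } from 'vite'

export default defineConfig({
  build: {
    // Warn when a chunk grows beyond roughly the performance budget (in kB)
    chunkSizeWarningLimit: 300,
    rollupOptions: {
      output: {
        // Keep rarely-changing vendor code in its own long-cacheable chunk
        manualChunks: {
          vendor: ['react', 'react-dom']
        }
      }
    }
  }
})
```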
### 2. Smart Code Organization

#### Code Splitting

```typescript
// Automatic code splitting with dynamic imports
import { lazy } from 'react'

const HeavyComponent = lazy(() =>
  import(/* webpackChunkName: "heavy" */ './HeavyComponent')
)

// Route-based splitting
const routes = [
  {
    path: '/dashboard',
    component: lazy(() => import('./pages/Dashboard'))
  }
]
```

#### Tree Shaking

```typescript
// ❌ Imports entire library
import * as utils from 'lodash'
// ✅ Imports only needed functions
import { debounce, throttle } from 'lodash-es'
```

## Runtime Performance

### 1. Database Optimization

#### Query Efficiency

```typescript
// ❌ N+1 Query Problem
const users = await db.users.findMany()
for (const user of users) {
  user.posts = await db.posts.findMany({ where: { userId: user.id } })
}

// ✅ Eager Loading
const users = await db.users.findMany({
  include: {
    posts: {
      select: { id: true, title: true },
      take: 5
    }
  }
})
```

#### Indexing Strategy

```sql
-- Add indexes for frequently queried fields
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_posts_user_date ON posts(user_id, created_at DESC);
CREATE INDEX idx_orders_status ON orders(status) WHERE status != 'completed';
```

### 2. Caching Strategies

#### Multi-Level Caching

```typescript
class CacheManager {
  // L1: In-memory cache (microseconds)
  private memoryCache = new Map()
  // L2: Redis cache (milliseconds)
  private redisCache: RedisClient
  // L3: CDN cache (network latency)
  private cdnCache: CDNClient

  async get(key: string) {
    // Check memory first
    if (this.memoryCache.has(key)) {
      return this.memoryCache.get(key)
    }

    // Check Redis
    const redisValue = await this.redisCache.get(key)
    if (redisValue) {
      this.memoryCache.set(key, redisValue)
      return redisValue
    }

    // Check CDN
    const cdnValue = await this.cdnCache.get(key)
    if (cdnValue) {
      await this.promoteToRedis(key, cdnValue)
      return cdnValue
    }

    return null
  }
}
```

#### Cache Invalidation

```typescript
// Smart invalidation
class SmartCache {
  async set(key: string, value: any, options: CacheOptions) {
    const tags = options.tags || []
    // Store with tags for grouped invalidation
    await this.redis.set(key, value, {
      ttl: options.ttl,
      tags
    })
  }

  async invalidateByTag(tag: string) {
    const keys = await this.redis.getKeysByTag(tag)
    await this.redis.del(...keys)
  }
}

// Usage
cache.set('user:123', userData, { tags: ['user', 'user:123'] })
cache.set('user:123:posts', posts, { tags: ['user:123', 'posts'] })

// Invalidate all user:123 data
cache.invalidateByTag('user:123')
```

### 3. API Optimization

#### Response Pagination

```typescript
// Always paginate large datasets
app.get('/api/users', async (req, res) => {
  const page = parseInt(String(req.query.page)) || 1
  const limit = Math.min(parseInt(String(req.query.limit)) || 20, 100)
  const offset = (page - 1) * limit

  const [users, total] = await Promise.all([
    db.users.findMany({ skip: offset, take: limit }),
    db.users.count()
  ])

  res.json({
    data: users,
    pagination: {
      page,
      limit,
      total,
      pages: Math.ceil(total / limit)
    }
  })
})
```

#### Field Selection

```typescript
// Allow clients to request only needed fields
app.get('/api/users/:id', async (req, res) => {
  const fields = typeof req.query.fields === 'string' ? req.query.fields.split(',') : []

  const user = await db.users.findUnique({
    where: { id: req.params.id },
    select: fields.length > 0
      ? Object.fromEntries(fields.map(f => [f, true]))
      : undefined // Select all if no fields specified
  })

  res.json(user)
})
```

### 4. Frontend Optimization

#### Image Optimization

```typescript
// Next.js Image component with optimization
import Image from 'next/image'

export function OptimizedImage({ src, alt }) {
  return (
    <Image
      src={src}
      alt={alt}
      width={1200}       // next/image requires explicit dimensions (or the `fill` prop)
      height={800}
      loading="lazy"
      placeholder="blur" // remote images also need a blurDataURL for blur placeholders
      quality={85}
      sizes="(max-width: 768px) 100vw,
             (max-width: 1200px) 50vw,
             33vw"
    />
  )
}
```

#### Virtual Scrolling

```typescript
// For large lists
import { VariableSizeList } from 'react-window'
function VirtualList({ items }) {
  return (
    <VariableSizeList
      height={600}
      itemCount={items.length}
      itemSize={index => items[index].height}
      width="100%"
    >
      {({ index, style }) => (
        <div style={style}>
          <ListItem item={items[index]} />
        </div>
      )}
    </VariableSizeList>
  )
}
```

#### Debouncing & Throttling

```typescript
import { useEffect, useMemo, useState, ChangeEvent } from 'react'
import { debounce, throttle } from 'lodash-es'

// Debounce search input
const SearchInput = () => {
  const [query, setQuery] = useState('')

  const debouncedSearch = useMemo(
    () => debounce((q: string) => {
      searchAPI(q) // searchAPI is assumed to exist elsewhere
    }, 300),
    []
  )

  const handleChange = (e: ChangeEvent<HTMLInputElement>) => {
    setQuery(e.target.value)
    debouncedSearch(e.target.value)
  }

  return <input value={query} onChange={handleChange} />
}

// Throttle scroll events (inside a component with scrollPosition state)
useEffect(() => {
  const throttledScroll = throttle(() => {
    setScrollPosition(window.scrollY)
  }, 100)

  window.addEventListener('scroll', throttledScroll)
  return () => window.removeEventListener('scroll', throttledScroll)
}, [])
```

### 5. Edge Computing

#### Cloudflare Workers Optimization

```typescript
// Cache at the edge
export default {
  async fetch(request: Request, env: Env) {
    const cacheKey = new Request(request.url, request)
    const cache = caches.default

    // Check cache
    let response = await cache.match(cacheKey)
    if (response) return response

    // Generate response
    response = await handleRequest(request, env)

    // Cache successful responses
    if (response.status === 200) {
      response = new Response(response.body, response)
      response.headers.set('Cache-Control', 'public, max-age=3600')
      await cache.put(cacheKey, response.clone())
    }

    return response
  }
}
```

#### Geographic Distribution

```typescript
// Route to the nearest data center
const getDataCenter = (request: Request) => {
  const country = request.headers.get('CF-IPCountry')
  const dataCenters = {
    US: 'https://us-api.example.com',
    EU: 'https://eu-api.example.com',
    ASIA: 'https://asia-api.example.com'
  }
  // getRegion() maps a country code to one of the regions above (not shown)
  return dataCenters[getRegion(country)] || dataCenters.US
}
```
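
Inside a Worker, the selected origin can then be used to proxy the request to the region-local API. The following is only a sketch; the path-preserving forwarding assumes the regional hosts expose the same routes.

```typescript
// A sketch of using getDataCenter() from a Worker fetch handler
export default {
  async fetch(request: Request) {
    const origin = getDataCenter(request)
    const url = new URL(request.url)

    // Forward the request to the region-local API host, keeping the
    // original path, query string, method, headers and body
    const upstream = new Request(`${origin}${url.pathname}${url.search}`, request)
    return fetch(upstream)
  }
}
```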
## Performance Monitoring

### 1. Metrics Collection

```typescript
// Performance monitoring service
// (Metric, average(), percentile() and the flush() method are assumed helpers, not shown)
class PerformanceMonitor {
  private metrics: Map<string, Metric[]> = new Map()

  measure(name: string, fn: () => Promise<any>) {
    const start = performance.now()
    return fn().finally(() => {
      const duration = performance.now() - start
      this.record(name, duration)
    })
  }

  record(name: string, value: number) {
    if (!this.metrics.has(name)) {
      this.metrics.set(name, [])
    }
    this.metrics.get(name)!.push({
      value,
      timestamp: Date.now()
    })

    // Send to monitoring service once enough samples have accumulated
    if (this.metrics.get(name)!.length >= 100) {
      this.flush(name)
    }
  }

  getStats(name: string) {
    const values = this.metrics.get(name) || []
    return {
      avg: average(values),
      p50: percentile(values, 50),
      p95: percentile(values, 95),
      p99: percentile(values, 99)
    }
  }
}
```

### 2. Web Vitals

```typescript
// Monitor Core Web Vitals
import { getCLS, getFID, getLCP, Metric } from 'web-vitals'

function sendToAnalytics(metric: Metric) {
  // Send to your analytics endpoint
  fetch('/api/analytics', {
    method: 'POST',
    body: JSON.stringify({
      name: metric.name,
      value: metric.value,
      rating: metric.rating,
      delta: metric.delta
    })
  })
}

getCLS(sendToAnalytics)
getFID(sendToAnalytics)
getLCP(sendToAnalytics)
```
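
Metrics are often reported while the page is unloading, where a plain fetch can be dropped. A more robust variant of the reporter above uses navigator.sendBeacon with a keepalive fetch as fallback (a sketch; the /api/analytics endpoint follows the example above):

```typescript
import { Metric } from 'web-vitals'

// A variant of sendToAnalytics that survives page unload
function sendToAnalyticsBeacon(metric: Metric) {
  const body = JSON.stringify({ name: metric.name, value: metric.value })

  // sendBeacon queues the request even if the page is being closed;
  // fall back to a keepalive fetch if the beacon could not be queued
  if (!navigator.sendBeacon('/api/analytics', body)) {
    fetch('/api/analytics', { method: 'POST', body, keepalive: true })
  }
}
```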
### 3. Performance Budgets

```json
// performance-budget.json
{
  "bundles": {
    "main.js": { "maxSize": "200KB" },
    "vendor.js": { "maxSize": "300KB" }
  },
  "metrics": {
    "firstContentfulPaint": { "max": 1500 },
    "timeToInteractive": { "max": 3000 },
    "totalBlockingTime": { "max": 300 }
  }
}
```
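
Budgets only help if they are enforced. A minimal CI check might read the budget file and fail the build when a bundle exceeds its limit; the sketch below assumes the bundles land in dist/ and that sizes are expressed in KB as above.

```typescript
// check-budget.ts: fail the build when a bundle exceeds its budget (illustrative sketch)
import { readFileSync, statSync } from 'node:fs'
import { join } from 'node:path'

const budget = JSON.parse(readFileSync('performance-budget.json', 'utf8'))

let failed = false
for (const [file, limit] of Object.entries<{ maxSize: string }>(budget.bundles)) {
  const maxBytes = parseFloat(limit.maxSize) * 1024          // "200KB" -> 204800 bytes
  const actualBytes = statSync(join('dist', file)).size      // assumes bundles are emitted to dist/
  if (actualBytes > maxBytes) {
    console.error(`${file}: ${(actualBytes / 1024).toFixed(1)}KB exceeds budget of ${limit.maxSize}`)
    failed = true
  }
}
process.exit(failed ? 1 : 0)
```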
## Optimization Workflow

### 1. Identify Bottlenecks

```bash
# Use Orchestre's performance check
/performance-check --comprehensive
# Profile specific operations
/performance-check src/api/heavy-endpoint.ts
```

### 2. Implement Optimizations

```bash
# Let Orchestre optimize
/execute-task "Optimize database queries in user service"
/execute-task "Implement caching for product catalog"3. Verify Improvements
bash
# Compare before/after
/performance-check --compare-to main
```

## Common Performance Pitfalls

### 1. Premature Optimization
- Measure first, optimize second
- Focus on actual bottlenecks
- Consider maintainability

### 2. Over-Caching
- Cache invalidation complexity
- Memory usage
- Stale data issues

### 3. Micro-Optimizations
- Focus on algorithmic improvements
- Don't sacrifice readability
- Consider the big picture

## Performance Checklist
- [ ] Database queries use proper indexes
- [ ] API responses are paginated
- [ ] Images are optimized and lazy-loaded
- [ ] JavaScript bundles are code-split
- [ ] Caching strategy implemented
- [ ] CDN configured for static assets
- [ ] Monitoring in place
- [ ] Performance budgets defined
- [ ] Load testing completed
- [ ] Edge computing utilized where appropriate

## Next Steps
- Explore Security Guidelines
- Learn about Team Collaboration
- Read the Performance Optimization Tutorial
