# Tutorial 11: Performance Optimization Mastery
Learn advanced techniques for optimizing application performance using AI-driven analysis and implementation. Master the art of making applications blazing fast while maintaining code quality.
## Learning Objectives
By the end of this tutorial, you'll:
- ✅ Conduct AI-powered performance analysis
- ✅ Implement systematic optimizations
- ✅ Create performance-aware architectures
- ✅ Build automated performance pipelines
- ✅ Master edge computing optimization
## Prerequisites
- Completed previous tutorials
- A working understanding of performance concepts
- About 2 hours
## AI-Driven Performance Optimization
Performance optimization with Orchestre:
- Intelligent analysis of bottlenecks
- Contextual solutions for your stack
- Automated implementation of fixes
- Continuous monitoring and improvement
- Predictive optimization for scale
## Part 1: Performance Analysis

### Comprehensive Performance Audit

```bash
/performance-check --comprehensive
```

This triggers a multi-dimensional analysis:
```
🚀 Comprehensive Performance Analysis

## Frontend Performance
- Bundle Size: 892KB (❌ Above 500KB target)
- First Contentful Paint: 2.3s (⚠️ Target: <1.5s)
- Time to Interactive: 4.1s (❌ Target: <3s)
- Cumulative Layout Shift: 0.08 (✅ Good)

## Backend Performance
- Average Response Time: 187ms (✅ Good)
- P95 Response Time: 1.2s (⚠️ Spikes detected)
- Database Query Time: 43ms avg (✅ Good)
- Slow Queries: 3 found (❌ Need optimization)

## Infrastructure
- CPU Usage: 45% average (✅ Good headroom)
- Memory Usage: 72% (⚠️ Getting high)
- Cache Hit Rate: 34% (❌ Too low)
- CDN Coverage: 67% (⚠️ Can improve)

## Critical Issues Found
1. Large JavaScript bundle blocking initial load
2. Unoptimized images (4.2MB total)
3. N+1 queries in user dashboard
4. Missing database indexes
5. No HTTP/2 push configured
```

### AI-Powered Root Cause Analysis
```bash
/orchestrate "Analyze performance bottlenecks and create optimization plan"
```

The AI will:
- Correlate multiple metrics
- Identify root causes
- Prioritize by impact
- Suggest specific fixes
- Estimate improvements
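Impact prioritization can be approximated with a simple impact-per-effort heuristic. The sketch below is illustrative only — the `Finding` shape and the numbers are hypothetical, not an Orchestre API:

```typescript
// Illustrative impact/effort triage — the `Finding` shape and the numbers
// are hypothetical, not part of any Orchestre API.
interface Finding {
  issue: string
  estimatedSavingsMs: number // latency expected to be saved if fixed
  effortHours: number        // rough implementation effort
}

// Rank findings by expected savings per hour of effort.
function prioritize(findings: Finding[]): Finding[] {
  return [...findings].sort(
    (a, b) =>
      b.estimatedSavingsMs / b.effortHours -
      a.estimatedSavingsMs / a.effortHours
  )
}

const plan = prioritize([
  { issue: 'Large JS bundle', estimatedSavingsMs: 1800, effortHours: 6 },
  { issue: 'Missing DB indexes', estimatedSavingsMs: 900, effortHours: 1 },
  { issue: 'Unoptimized images', estimatedSavingsMs: 1300, effortHours: 2 },
])
console.log(plan.map(f => f.issue))
// → ['Missing DB indexes', 'Unoptimized images', 'Large JS bundle']
```

Real prioritization weighs more signals (user impact, risk, dependencies), but even this crude ratio keeps cheap, high-leverage fixes such as missing indexes at the top of the queue.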
## Part 2: Frontend Optimization

### Bundle Size Optimization
```bash
/execute-task "Optimize JavaScript bundle size using code splitting and tree shaking"
```

Implementation:
```typescript
import { lazy } from 'react'

// Before: everything in one bundle
import { everything } from 'huge-library'

// After: dynamic imports with code splitting
const HeavyComponent = lazy(() =>
  import(/* webpackChunkName: "heavy" */ './HeavyComponent')
)

// Route-based splitting
const routes = [
  {
    path: '/dashboard',
    component: lazy(() => import('./pages/Dashboard'))
  },
  {
    path: '/analytics',
    component: lazy(() => import('./pages/Analytics'))
  }
]

// Feature-based splitting: load the chart library only when needed
const loadChartLibrary = () =>
  import(/* webpackChunkName: "charts" */ 'chart-library')
```

### Image Optimization
```bash
/execute-task "Implement comprehensive image optimization strategy"
```

A multi-pronged approach:
```tsx
// 1. Next.js Image component with built-in optimization.
// Modern formats (WebP/AVIF) are enabled via `images.formats` in
// next.config.js — there is no `formats` prop on the component.
import Image from 'next/image'

export function OptimizedImage({ src, alt }) {
  return (
    <Image
      src={src}
      alt={alt}
      width={800}
      height={600}
      loading="lazy"
      placeholder="blur"
      blurDataURL={`data:image/svg+xml;base64,${btoa(shimmer(800, 600))}`}
      sizes="(max-width: 768px) 100vw,
             (max-width: 1200px) 50vw,
             33vw"
    />
  )
}

// 2. Progressive loading with an SVG shimmer placeholder
const shimmer = (w: number, h: number) => `
<svg width="${w}" height="${h}" xmlns="http://www.w3.org/2000/svg">
  <defs>
    <linearGradient id="g">
      <stop stop-color="#f3f4f6" offset="20%" />
      <stop stop-color="#e5e7eb" offset="50%" />
      <stop stop-color="#f3f4f6" offset="70%" />
    </linearGradient>
  </defs>
  <rect width="${w}" height="${h}" fill="#f3f4f6" />
  <rect width="${w}" height="${h}" fill="url(#g)" />
</svg>`

// 3. Responsive images with art direction
<picture>
  <source
    media="(min-width: 1024px)"
    srcSet="/hero-desktop.webp"
    type="image/webp"
  />
  <source
    media="(min-width: 768px)"
    srcSet="/hero-tablet.webp"
    type="image/webp"
  />
  <img
    src="/hero-mobile.jpg"
    alt="Hero image"
    loading="lazy"
  />
</picture>
```

### Critical Rendering Path
```bash
/execute-task "Optimize critical rendering path for faster initial paint"
```

Advanced optimizations:
```html
<!-- Preload critical resources -->
<link rel="preload" href="/fonts/main.woff2" as="font" type="font/woff2" crossorigin>
<link rel="preload" href="/css/critical.css" as="style">

<!-- Inline critical CSS -->
<style>
  /* Critical above-the-fold styles */
  :root { --primary: #0070f3; }
  body { margin: 0; font-family: system-ui; }
  .hero { height: 100vh; background: var(--primary); }
</style>

<!-- Defer non-critical CSS -->
<link rel="preload" href="/css/main.css" as="style"
      onload="this.onload=null;this.rel='stylesheet'">

<!-- Resource hints -->
<link rel="dns-prefetch" href="https://api.example.com">
<link rel="preconnect" href="https://fonts.googleapis.com">
```

## Part 3: Backend Optimization
### Database Query Optimization

```bash
/execute-task "Optimize N+1 queries and add missing indexes"
```

Intelligent query optimization:
```typescript
// Before: N+1 query problem
const users = await db.users.findMany()
for (const user of users) {
  user.posts = await db.posts.findMany({
    where: { userId: user.id }
  })
}

// After: Eager loading with single query
const users = await db.users.findMany({
  include: {
    posts: {
      select: {
        id: true,
        title: true,
        createdAt: true
      },
      orderBy: { createdAt: 'desc' },
      take: 5
    }
  }
})

// Advanced: Query result caching
const getCachedUsers = cache.wrap(
  'users:list',
  async () => {
    return db.users.findMany({
      include: { posts: true }
    })
  },
  { ttl: 300 } // 5 minutes
)
```

### API Response Optimization
```bash
/execute-task "Implement GraphQL with DataLoader for efficient data fetching"
```

Advanced patterns:
```typescript
// DataLoader for batching and caching
const userLoader = new DataLoader(async (userIds: readonly string[]) => {
  const users = await db.users.findMany({
    where: { id: { in: [...userIds] } }
  })
  // Results must come back in the same order as the requested ids
  return userIds.map(id => users.find(u => u.id === id))
})

// GraphQL resolver with DataLoader
const resolvers = {
  Post: {
    author: (post) => userLoader.load(post.authorId)
  },
  Query: {
    posts: async () => {
      const posts = await db.posts.findMany()
      // DataLoader automatically batches the author lookups
      return posts
    }
  }
}

// Field-level caching (assumes an async cache with get/set)
const resolver = {
  Query: {
    expensiveCalculation: async (_, args, { cache }) => {
      const key = `calc:${JSON.stringify(args)}`
      const cached = await cache.get(key)
      if (cached !== undefined) return cached
      const result = await performExpensiveCalculation(args)
      await cache.set(key, result, { ttl: 3600 })
      return result
    }
  }
}
```

### Edge Computing Optimization
```bash
/execute-task "Move compute-intensive operations to edge workers"
```

Edge optimization patterns:
```typescript
// Cloudflare Worker for on-the-fly image optimization
export default {
  async fetch(request: Request, env: Env) {
    const url = new URL(request.url)
    const imageURL = url.searchParams.get('url')
    const width = parseInt(url.searchParams.get('w') || '800')
    if (!imageURL) return new Response('Missing url parameter', { status: 400 })

    // Check the edge cache first
    const cacheKey = `image:${imageURL}:${width}`
    const cached = await env.CACHE.get(cacheKey, 'stream')
    if (cached) return new Response(cached)

    // Fetch the original and resize at the edge
    // (resizeImage: e.g. a WebAssembly codec; implementation omitted)
    const response = await fetch(imageURL)
    const image = await response.arrayBuffer()
    const resized = await resizeImage(image, width)

    // Cache and return
    await env.CACHE.put(cacheKey, resized)
    return new Response(resized, {
      headers: {
        'Content-Type': 'image/webp',
        'Cache-Control': 'public, max-age=31536000'
      }
    })
  }
}
```

## Part 4: Advanced Optimization Techniques
### Predictive Prefetching

```bash
/execute-task "Implement ML-based predictive prefetching"
```

AI-driven prefetching:
```typescript
// Predictive prefetching based on user behavior
class PredictivePrefetcher {
  private model: TensorFlowModel
  private userPatterns: Map<string, Pattern>

  async predict(userId: string, currentPage: string) {
    const pattern = this.userPatterns.get(userId)
    const predictions = await this.model.predict({
      currentPage,
      timeOfDay: new Date().getHours(),
      deviceType: detectDevice(),
      historicalPattern: pattern
    })
    // Prefetch the top 3 most likely next pages
    predictions
      .slice(0, 3)
      .forEach(page => this.prefetch(page.url))
  }

  private prefetch(url: string) {
    // Defer prefetching until the browser is idle
    if ('requestIdleCallback' in window) {
      requestIdleCallback(() => {
        const link = document.createElement('link')
        link.rel = 'prefetch'
        link.href = url
        document.head.appendChild(link)
      })
    }
  }
}
```

### Memory Optimization
```bash
/execute-task "Implement advanced memory management strategies"
```

Memory-efficient patterns:
```typescript
// 1. Object pooling for frequent allocations
class ObjectPool<T> {
  private pool: T[] = []

  constructor(
    private create: () => T,
    private reset: (obj: T) => void
  ) {}

  acquire(): T {
    return this.pool.pop() || this.create()
  }

  release(obj: T) {
    this.reset(obj)
    this.pool.push(obj)
  }
}

// 2. Weak references for caching: entries are GC'd along with their keys
class WeakCache<K extends object, V> {
  private cache = new WeakMap<K, V>()

  get(key: K, factory: () => V): V {
    if (!this.cache.has(key)) {
      this.cache.set(key, factory())
    }
    return this.cache.get(key)!
  }
}

// 3. Streaming for large data
async function* streamLargeDataset(query: string) {
  let offset = 0
  const batchSize = 1000
  while (true) {
    const batch = await db.query(query, { offset, limit: batchSize })
    if (batch.length === 0) break
    yield* batch
    offset += batchSize
    // Yield to the event loop (and GC) between batches
    await new Promise(resolve => setImmediate(resolve))
  }
}
```

### Real-time Performance Monitoring
```bash
/setup-monitoring --performance-focused
```

Comprehensive monitoring setup:
```typescript
// Performance monitoring with Web Vitals
import { getCLS, getFID, getLCP, getFCP, getTTFB, Metric } from 'web-vitals'

const metricsQueue: Array<Record<string, unknown>> = []

function flushMetrics() {
  // sendBeacon survives page unloads; splice(0) empties the queue
  navigator.sendBeacon('/api/metrics', JSON.stringify(metricsQueue.splice(0)))
}

function sendToAnalytics(metric: Metric) {
  // Batch metrics for efficiency
  metricsQueue.push({
    name: metric.name,
    value: metric.value,
    rating: metric.rating,
    delta: metric.delta,
    id: metric.id,
    navigationType: metric.navigationType,
    url: window.location.href,
    timestamp: Date.now()
  })
  if (metricsQueue.length >= 10) {
    flushMetrics()
  }
}

// Track all vital metrics
getCLS(sendToAnalytics)
getFID(sendToAnalytics)
getLCP(sendToAnalytics)
getFCP(sendToAnalytics)
getTTFB(sendToAnalytics)

// Custom performance marks
class PerformanceTracker {
  mark(name: string) {
    performance.mark(name)
  }

  measure(name: string, startMark: string, endMark: string) {
    performance.measure(name, startMark, endMark)
    const measure = performance.getEntriesByName(name)[0]
    this.report(name, measure.duration)
  }

  async report(name: string, duration: number) {
    await fetch('/api/metrics', {
      method: 'POST',
      body: JSON.stringify({ name, duration })
    })
  }
}
```

## Part 5: Performance Automation
### Continuous Performance Testing

Create `.orchestre/commands/performance-regression-test.md`:
# /performance-regression-test
Automated performance regression testing for every change.
## Implementation
1. **Baseline Establishment**
- Run performance tests on main branch
- Store metrics as baseline
- Define acceptable variance (e.g., ±5%)
2. **Automated Testing**
For each PR:
- Run same performance tests
- Compare against baseline
- Flag regressions
- Block merge if critical
3. **Test Suite**
```typescript
describe('Performance Tests', () => {
it('should load homepage under 2s', async () => {
const metrics = await lighthouse(url, {
onlyCategories: ['performance']
})
expect(metrics.lhr.categories.performance.score)
.toBeGreaterThan(0.9)
})
it('should handle 1000 concurrent users', async () => {
const results = await loadTest({
url,
concurrent: 1000,
duration: '30s'
})
expect(results.medianLatency).toBeLessThan(200)
expect(results.errorRate).toBeLessThan(0.01)
})
})
```
4. **Continuous Monitoring**
- Real user metrics (RUM)
- Synthetic monitoring
- Alert on degradation
- Auto-rollback if needed
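The baseline comparison in steps 1 and 2 boils down to a pure function: given the stored baseline metrics, the candidate's metrics, and the allowed variance, return the metrics that regressed. A minimal sketch (the metric names below are illustrative):

```typescript
// Flags higher-is-worse metrics (latency, bundle size) that exceed the
// baseline by more than the allowed relative variance (0.05 = ±5%).
type Metrics = Record<string, number>

function findRegressions(
  baseline: Metrics,
  candidate: Metrics,
  variance = 0.05
): string[] {
  return Object.keys(baseline).filter(name => {
    const current = candidate[name]
    return current !== undefined && current > baseline[name] * (1 + variance)
  })
}

const regressions = findRegressions(
  { LCP_ms: 2000, bundle_kb: 480 },
  { LCP_ms: 2300, bundle_kb: 490 } // LCP up 15%, bundle up ~2%
)
console.log(regressions) // → ['LCP_ms']
```

A CI step can then fail the build (or just comment on the PR) whenever the returned list is non-empty, matching the "block merge if critical" rule above.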
### Performance Budget Enforcement
```bash
/execute-task "Implement performance budget with automatic enforcement"
```

Budget configuration:
```json
{
  "performanceBudget": {
    "bundles": {
      "main.js": { "maxSize": "200KB" },
      "vendor.js": { "maxSize": "300KB" },
      "*.css": { "maxSize": "50KB" }
    },
    "metrics": {
      "LCP": { "max": 2500 },
      "FID": { "max": 100 },
      "CLS": { "max": 0.1 },
      "TTI": { "max": 3000 }
    },
    "lighthouse": {
      "performance": { "min": 90 },
      "accessibility": { "min": 100 },
      "seo": { "min": 100 }
    }
  }
}
```

## Real-World Case Study
### E-commerce Site Optimization
Initial state:
- Load time: 6.2s
- Conversion rate: 1.2%
- Server costs: $5,000/month
```bash
/orchestrate "Optimize e-commerce site for Black Friday traffic"
```

Orchestrated improvements:
**Frontend Optimizations**
- Implemented lazy loading: -2.1s
- Optimized images: -1.3s
- Code splitting: -0.8s
- Critical CSS: -0.5s

**Backend Optimizations**
- Query optimization: -40% response time
- Redis caching: -60% database load
- CDN implementation: -70% bandwidth

**Infrastructure**
- Auto-scaling configuration
- Edge workers for personalization
- Global distribution
Results:
- Load time: 1.5s (76% improvement)
- Conversion rate: 2.8% (133% increase)
- Server costs: $3,200/month (36% reduction)
- Black Friday: Zero downtime, 10x traffic handled
## Practice Exercises
1. Full Stack Optimization
```bash
/orchestrate "Complete performance overhaul for our application"
/performance-check --before
# Implement optimizations
/performance-check --after --compare
```

2. Mobile Performance
```bash
/execute-task "Optimize for mobile devices with 3G connections"
/performance-check --mobile --network=3g
```

3. Database Performance
```bash
/analyze-database-performance
/execute-task "Implement query optimization recommendations"
/performance-check --database --verify
```

## Common Pitfalls
1. **Premature Optimization**: measure first, optimize second.
2. **Micro-Optimizations**: focus on big wins before chasing small gains.
3. **Not Measuring Impact**: always validate improvements with data.
4. **Ignoring User Experience**: fast but broken is worse than slow but working.
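Pitfalls 1 and 3 share a cure: measure before and after every change. A minimal timing harness using `performance.now()` (global in browsers and modern Node) might look like this; the helper name is ours:

```typescript
// Averages several runs so one noisy sample doesn't mislead you.
async function measure(
  label: string,
  fn: () => unknown,
  runs = 5
): Promise<number> {
  let total = 0
  for (let i = 0; i < runs; i++) {
    const start = performance.now()
    await fn()
    total += performance.now() - start
  }
  const avg = total / runs
  console.log(`${label}: ${avg.toFixed(2)}ms avg over ${runs} runs`)
  return avg
}

// Usage: run both implementations on identical input and compare.
// const before = await measure('naive', () => oldImpl(data))
// const after  = await measure('optimized', () => newImpl(data))
```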
## What You've Learned

- ✅ Conducted AI-powered performance analysis
- ✅ Implemented systematic optimizations
- ✅ Created performance-aware architectures
- ✅ Built automated performance pipelines
- ✅ Mastered edge computing optimization
## Next Steps
You're now a performance optimization expert!
Continue to: Enterprise Patterns →
Challenge yourself:
- Achieve 100/100 Lighthouse score
- Handle 1M concurrent users
- Reduce costs by 50%
Remember: Performance is a feature, not an afterthought!
