# Prompt Handling Flow

## Overview

This document explains the complete flow of how prompts are received, processed, and executed in Orchestre's MCP server architecture. Understanding this flow is essential for contributors and helps debug issues when they arise.

## The Complete Flow

### Step-by-Step Breakdown

#### 1. User Input via MCP Protocol

When a user types a command like `/orchestrate "Build a task manager"`, the AI tool (e.g., Claude Code) sends it to the MCP server via JSON-RPC:

```json
{
  "jsonrpc": "2.0",
  "method": "prompts/get",
  "params": {
    "name": "orchestre-orchestrate",
    "arguments": {
      "goal": "Build a task manager"
    }
  },
  "id": "1234"
}
```

#### 2. MCP Server Reception
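The server's job is to answer that request with the rendered prompt messages. Abbreviated, the reply looks roughly like this (shape per the MCP `prompts/get` result; text truncated here):

```json
{
  "jsonrpc": "2.0",
  "id": "1234",
  "result": {
    "messages": [
      {
        "role": "user",
        "content": {
          "type": "text",
          "text": "# Orchestre | Project Analysis & Planning ..."
        }
      }
    ]
  }
}
```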
The MCP server (`src/server.ts`) receives the request and routes it to the prompt system:

```typescript
// Simplified from src/server.ts
server.setRequestHandler(ListPromptsRequestSchema, async () => {
  return {
    prompts: getAllPrompts() // Returns all registered prompts
  };
});

server.setRequestHandler(GetPromptRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;
  return await handlePrompt(name, args);
});
```

#### 3. Prompt Router
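The `promptHandlers` table used by the router is a plain name-to-handler registry. A minimal sketch of how it might look (handler names and the inline handler body are illustrative, not the real table from `src/prompts/handlers/index.ts`):

```typescript
// Hypothetical registry sketch: maps prompt names to their handler functions.
type PromptArgs = Record<string, unknown> | undefined;
type PromptHandler = (args?: PromptArgs) => Promise<unknown>;

const promptHandlers: Record<string, PromptHandler> = {
  // In the real codebase each entry points at an imported handler,
  // e.g. orchestrate: orchestrateHandler
  orchestrate: async (args) => ({ messages: [] }),
};
```

An unknown name simply misses the lookup, which is what lets the router throw its `Unknown prompt` error.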
The router (`src/prompts/handlers/index.ts`) determines which handler to invoke:

```typescript
export async function handlePrompt(name: string, args?: any) {
  // Remove the MCP prefix if present
  const promptName = name.replace('orchestre-', '');

  // Check if it's the template executor
  if (promptName === 'template' || promptName === 't') {
    return await templateExecutorHandler(args);
  }

  // Find the appropriate handler
  const handler = promptHandlers[promptName];
  if (!handler) {
    throw new Error(`Unknown prompt: ${promptName}`);
  }

  return await handler(args);
}
```

#### 4. Context Gathering
Each prompt handler gathers the necessary context before generating the final prompt:

```typescript
// Example from the orchestrate handler
export async function orchestrateHandler(args: { goal: string }) {
  // 1. Analyze project structure
  const projectInfo = await analyzeProjectStructure();

  // 2. Detect template type
  const template = await detectTemplate();

  // 3. Read existing patterns
  const patterns = await loadPatterns();

  // 4. Check for existing plans
  const existingPlans = await findExistingPlans();

  // Context is now ready for prompt generation
}
```

#### 5. Prompt Template Loading
Prompts are loaded from their template definitions:

```typescript
// From src/prompts/templates/setup/orchestrate.ts
export const orchestratePrompt = {
  name: 'orchestrate',
  description: 'Analyze project requirements and create an adaptive development plan',
  template: `# Orchestre | Project Analysis & Planning

You are tasked with analyzing the following project goal and creating a comprehensive development plan...

## Project Goal
{{goal}}

## Current Context
- Template: {{template}}
- Existing Patterns: {{patterns}}
- Project Structure: {{projectStructure}}
`
};
```

#### 6. Variable Injection
The template variables are replaced with actual context values:

```typescript
function injectVariables(template: string, context: Record<string, any>): string {
  let result = template;
  for (const [key, value] of Object.entries(context)) {
    // Keys are assumed to be regex-safe (alphanumeric) here
    const placeholder = new RegExp(`{{${key}}}`, 'g');
    result = result.replace(placeholder, String(value));
  }
  return result;
}
```

#### 7. AI Model Invocation
The final prompt is packaged as MCP messages and returned to the AI assistant, which performs the actual model invocation:

```typescript
// The server does not call the model itself; the prompt is returned to the
// AI coding assistant through the MCP protocol, which then runs the model
async function callAI(prompt: string) {
  return {
    messages: [
      {
        role: 'user',
        content: {
          type: 'text',
          text: prompt
        }
      }
    ]
  };
}
```

#### 8. Response Validation
For tools that return structured data, responses are validated:

```typescript
// Example validation for project analysis
const responseSchema = z.object({
  analysis: z.object({
    complexity: z.enum(['simple', 'moderate', 'complex']),
    estimatedDays: z.number(),
    risks: z.array(z.string())
  }),
  plan: z.object({
    phases: z.array(z.object({
      name: z.string(),
      tasks: z.array(z.string())
    }))
  })
});

const validatedResponse = responseSchema.parse(aiResponse);
```

#### 9. Output Formatting
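The formatting step below calls a `formatResponse` helper whose source is not shown in this document. A plausible sketch that renders the validated analysis as markdown (field names mirror the zod schema from step 8; the exact real implementation may differ):

```typescript
// Hypothetical formatter: turns the validated analysis/plan into markdown
interface ValidatedResponse {
  analysis: { complexity: string; estimatedDays: number; risks: string[] };
  plan: { phases: { name: string; tasks: string[] }[] };
}

function formatResponse(r: ValidatedResponse): string {
  const lines = [
    '## Analysis',
    `- Complexity: ${r.analysis.complexity}`,
    `- Estimated days: ${r.analysis.estimatedDays}`,
    `- Risks: ${r.analysis.risks.join(', ') || 'none identified'}`,
    '## Plan',
  ];
  for (const phase of r.plan.phases) {
    lines.push(`### ${phase.name}`);
    for (const task of phase.tasks) lines.push(`- [ ] ${task}`);
  }
  return lines.join('\n');
}
```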
The response is formatted and returned to the user:

```typescript
return {
  messages: [
    {
      role: 'assistant',
      content: {
        type: 'text',
        text: formatResponse(validatedResponse)
      }
    }
  ]
};
```

## Special Cases

### Template Executor Flow
The template executor adds an extra layer of routing:

```typescript
async function templateExecutorHandler(args: { command?: string }) {
  if (!args.command) {
    // List available commands
    return generateCommandList();
  }

  // Find the template command
  const templateCommand = await findTemplateCommand(args.command);
  if (!templateCommand) {
    // Generate suggestions using fuzzy matching
    return generateSuggestions(args.command);
  }

  // Execute the template command
  return await executeTemplateCommand(templateCommand, args);
}
```

### Multi-Step Prompts
Some prompts trigger multiple AI interactions:

```typescript
// Example: generate-implementation-tutorial
async function buildSaasHandler(args: any) {
  // Step 1: Analyze requirements
  const analysis = await analyzeRequirements(args);

  // Step 2: Generate architecture
  const architecture = await generateArchitecture(analysis);

  // Step 3: Create implementation plan
  const plan = await createPlan(architecture);

  // Step 4: Set up task tracking
  await createTaskChecklists(plan);

  return combinedResponse(analysis, architecture, plan);
}
```

## Error Handling
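The handler below distinguishes validation failures (`ZodError`) from prompt-level failures (`PromptError`). `PromptError` is a custom error class whose definition is not shown in this document; a minimal sketch (constructor shape assumed):

```typescript
// Hypothetical custom error for prompt-level failures
// (unknown prompt, missing arguments, template not found, ...)
class PromptError extends Error {
  constructor(message: string, public readonly promptName?: string) {
    super(message);
    this.name = 'PromptError';
  }
}
```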
Errors are caught and formatted at each level:

```typescript
try {
  const result = await handler(args);
  return result;
} catch (error) {
  if (error instanceof ZodError) {
    return formatValidationError(error);
  }
  if (error instanceof PromptError) {
    return formatPromptError(error);
  }

  // Generic error handling
  return {
    messages: [{
      role: 'assistant',
      content: {
        type: 'text',
        text: `Error: ${(error as Error).message}\n\nPlease try again or report this issue.`
      }
    }]
  };
}
```

## Performance Considerations
### Caching

Frequently accessed data is cached:

```typescript
const templateCache = new Map<string, Template>();
const patternCache = new Map<string, Pattern[]>();

async function loadTemplate(name: string): Promise<Template> {
  if (templateCache.has(name)) {
    return templateCache.get(name)!;
  }

  const template = await readTemplateFromDisk(name);
  templateCache.set(name, template);
  return template;
}
```

### Parallel Operations
Context gathering happens in parallel when possible:

```typescript
const [projectInfo, patterns, existingPlans] = await Promise.all([
  analyzeProjectStructure(),
  loadPatterns(),
  findExistingPlans()
]);
```

## Debugging Tips
### 1. Enable Debug Logging

Set the environment variable:

```shell
DEBUG=orchestre:* npm run dev
```

### 2. Trace Prompt Flow
Add logging at key points:

```typescript
console.log('[Prompt]', promptName, args);
console.log('[Context]', gatheredContext);
console.log('[Final Prompt]', finalPrompt.substring(0, 200) + '...');
```

### 3. Validate Intermediate Steps
Check each transformation:
- Raw user input
- Parsed arguments
- Gathered context
- Template before injection
- Final prompt after injection
- AI response
- Validated output
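Several of these checkpoints can be exercised in isolation. For example, the variable-injection step can be sanity-checked without running the full flow (the helper from step 6 is re-declared here so the snippet runs standalone):

```typescript
// Same injection logic as in step 6, repeated for a self-contained snippet
function injectVariables(template: string, context: Record<string, unknown>): string {
  let result = template;
  for (const [key, value] of Object.entries(context)) {
    result = result.replace(new RegExp(`{{${key}}}`, 'g'), String(value));
  }
  return result;
}

const rendered = injectVariables('Goal: {{goal}} ({{template}})', {
  goal: 'Build a task manager',
  template: 'nextjs',
});
// rendered === 'Goal: Build a task manager (nextjs)'
```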
## Future Enhancements
- Prompt Streaming: Stream long prompts for better performance
- Prompt Caching: Cache frequently used prompt combinations
- Prompt Metrics: Track execution time and success rates
- Prompt Versioning: Support multiple versions of prompts
- Prompt Testing: Automated testing of prompt outputs
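Some of these are small enough to prototype directly. For instance, prompt metrics could start as a timing wrapper around handlers; a sketch, not part of the current codebase (names hypothetical):

```typescript
// Hypothetical metrics wrapper: records duration and success per prompt
type Handler = (args?: unknown) => Promise<unknown>;

const metrics: { prompt: string; ms: number; ok: boolean }[] = [];

function withMetrics(name: string, handler: Handler): Handler {
  return async (args) => {
    const start = Date.now();
    try {
      const result = await handler(args);
      metrics.push({ prompt: name, ms: Date.now() - start, ok: true });
      return result;
    } catch (err) {
      metrics.push({ prompt: name, ms: Date.now() - start, ok: false });
      throw err;
    }
  };
}
```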
## Conclusion
Understanding the prompt flow is crucial for:
- Debugging issues
- Adding new prompts
- Optimizing performance
- Contributing to the project
The modular architecture allows each component to be tested and improved independently while maintaining a clean separation of concerns.
