Scriptonia uses specialized AI agents backed by LLMs (OpenRouter: GROK-4, Qwen 3, Gemini 2.5). They run in two modes:
  1. Sequential (stages 1–3): one agent per stage — IdeaProcessor → IntentAnalyzer → PlatformSelector. Each is a single LLM call; there is no agent-to-agent call in this path.
  2. Parallel (execution): ProductManager, FrontendDev, BackendDev, DevOpsEngineer, ToolManager generate files in parallel from platform templates. Coordination is via shared WorkflowContext, existingDeliverables, and agent system prompts. A2A (lib/a2a.ts) is available for richer interaction and tests.
Orchestration lives in lib/openai.ts and lib/platform-templates.ts; roles and communication flows are defined in lib/x402.ts (AGENT_ROLES).
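Both modes exchange state through WorkflowContext. The interface below is a minimal sketch whose field names are inferred from this page, not the repo's actual definition:

```ts
// Minimal sketch of the shared WorkflowContext both modes read and write.
// Field names are inferred from this page and are assumptions.
interface WorkflowContext {
  sessionId: string;
  stage:
    | "idea_input"
    | "intent_analysis"
    | "platform_selection"
    | "payment_processing"
    | "execution";
  idea?: string;
  refinedRequirements?: string;
  selectedPlatform?: string;
  deliverables?: Record<string, string>; // filePath -> generated file content
  execution_result?: { review_passed: boolean; review_issues: string[] };
}
```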

Agent Lifecycle

Sequential agents (Idea, Intent, Platform)

  1. Invocation — API handler calls processIdea, refineIntent, or selectPlatform with user input and sessionId.
  2. LLM call — A single OpenRouter createChatCompletion call with a JSON response_format and agent-specific system and user prompts (see the sketch after this list).
  3. Response — Parsed JSON is returned; the handler persists to WorkflowContext (DB or in-memory) and responds to the client.
  4. No long-lived state — These agents are stateless per request.
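A minimal sketch of one sequential call, assuming the openai SDK pointed at OpenRouter (the repo's createChatCompletion wrapper, model id, prompt text, and return shape may differ):

```ts
import OpenAI from "openai";

// OpenRouter is OpenAI-compatible, so the openai SDK can be pointed at it.
const client = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
});

export async function processIdea(idea: string, sessionId: string) {
  const completion = await client.chat.completions.create({
    model: "x-ai/grok-4", // illustrative model id
    response_format: { type: "json_object" }, // agents return structured JSON
    messages: [
      { role: "system", content: "You are IdeaProcessor. Respond with JSON only." },
      { role: "user", content: idea },
    ],
  });
  // The API handler persists this result to WorkflowContext keyed by sessionId.
  const parsed = JSON.parse(completion.choices[0].message.content ?? "{}");
  return { sessionId, ...parsed };
}
```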

Execution agents

  1. Invocation — executeX402Workflow in lib/openai is called from /api/execute with selectedPlatform, refinedRequirements, sessionId, agentsToExecute, and workflowContext.
  2. Template resolution — generatePlatformDeliverables in lib/platform-templates gets the platform config (getWebAppTemplate, etc.), walks folderStructure, and assigns each file to an agent via getAgentForFile.
  3. Parallel runs — Agents are grouped by name; each group runs in parallel (Promise.all). Within an agent, files are generated in batches (e.g. 5) with small delays (see the sketch after this list).
  4. Per-file generation — generateFileContentWithAgent builds a prompt from the agent system prompt, workflowContext, requirements, file, filePath, and relatedFiles from existingDeliverables. It calls callOpenRouter (GROK); on failure, generateFileContent provides a fallback.
  5. Post-processing — deduplicateContent regenerates duplicate deliverables; reviewDeliverables checks structure and completeness and sets review_passed / review_issues.
  6. Output — deliverables and execution_result are stored in WorkflowContext and returned from /api/execute. AgentStat and Transaction are updated.
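The fan-out described in steps 2–4 can be sketched as follows; getAgentForFile and generateFileContentWithAgent are the functions named above (stubbed here), while the batch size, delay, and parameter shapes are assumptions:

```ts
type FileSpec = { filePath: string; description: string };
type ExecutionCtx = {
  requirements: string;
  existingDeliverables: Record<string, string>;
};

// Real implementations live in lib/platform-templates.ts; stubbed here so the
// sketch stands alone.
declare function getAgentForFile(filePath: string): string;
declare function generateFileContentWithAgent(
  agent: string,
  file: FileSpec,
  ctx: ExecutionCtx,
): Promise<string>;

async function runExecutionAgents(files: FileSpec[], ctx: ExecutionCtx) {
  const deliverables: Record<string, string> = {};

  // 1. Group files by the agent responsible for them.
  const byAgent = new Map<string, FileSpec[]>();
  for (const file of files) {
    const agent = getAgentForFile(file.filePath); // e.g. "FrontendDev"
    const group = byAgent.get(agent) ?? [];
    group.push(file);
    byAgent.set(agent, group);
  }

  // 2. Run agent groups in parallel; within a group, generate files in small
  //    batches with a short pause between batches.
  const BATCH_SIZE = 5;
  await Promise.all(
    [...byAgent.entries()].map(async ([agent, agentFiles]) => {
      for (let i = 0; i < agentFiles.length; i += BATCH_SIZE) {
        const batch = agentFiles.slice(i, i + BATCH_SIZE);
        const contents = await Promise.all(
          batch.map((f) => generateFileContentWithAgent(agent, f, ctx)),
        );
        contents.forEach((content, j) => {
          deliverables[batch[j].filePath] = content;
        });
        await new Promise((resolve) => setTimeout(resolve, 500)); // assumed delay
      }
    }),
  );

  return deliverables;
}
```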

Agent Types

  • Idea — IdeaProcessor; stage: idea_input; implemented by processIdea in openai.ts
  • Intent — IntentAnalyzer; stage: intent_analysis; implemented by refineIntent in openai.ts
  • Platform — PlatformSelector; stage: platform_selection; implemented by selectPlatform in openai.ts
  • Payment — PaymentValidator; stage: payment_processing; implemented by validatePayment in solana.ts (not an LLM agent)
  • Execution — ProductManager, FrontendDev, BackendDev, DevOpsEngineer, ToolManager; stage: execution; implemented by generateFileContentWithAgent in platform-templates.ts
CEO, X402Agent, AIEngineer appear in AGENT_ROLES and A2A; they are not wired into the execution file-generation path. AIEngineer can be used if templates or getAgentForFile are extended.
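For orientation, a path-based dispatch like the following illustrates how such a mapping (and an AIEngineer extension point) could look; the rules shown are assumptions, not the repo's actual getAgentForFile:

```ts
// Illustrative path-based dispatch; the real getAgentForFile in
// lib/platform-templates.ts may use different rules.
export function getAgentForFile(filePath: string): string {
  if (filePath.includes("docs/") || filePath.endsWith(".md")) return "ProductManager";
  if (filePath.includes("/api/") || filePath.startsWith("lib/")) return "BackendDev";
  if (filePath.startsWith("app/") || filePath.endsWith(".tsx") || filePath.endsWith(".css"))
    return "FrontendDev";
  if (/Dockerfile|\.ya?ml$|\.github\//.test(filePath)) return "DevOpsEngineer";
  // Possible extension point: route ML or prompt assets to AIEngineer here.
  return "ToolManager"; // scripts, configs, and anything unmatched
}
```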

Agent Communication

  • Sequential: Only through WorkflowContext (idea → analysis → answers → refined requirements → platform options). No direct A2A.
  • Execution: Via workflowContext, existingDeliverables (related files), and fixed system prompts (e.g. “Coordinate with FrontendDev and BackendDev”). getAgentForFile ensures a clear file–agent mapping.
  • A2A: lib/a2a.ts provides sendA2ARequest, sendA2AResponse, shareA2AData, sendA2ANotification, and A2AWorkflowIntegration. Used in /api/test/a2a; it can be integrated into execution for request/response or data-sharing between agents (a possible integration is sketched below).
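For example, a data-sharing step during execution might look like the sketch below; the helper names come from lib/a2a.ts as listed above, but the import path and parameter shapes are assumptions:

```ts
import { shareA2AData, sendA2ARequest } from "@/lib/a2a";

// Hypothetical coordination step: BackendDev shares its API contract with
// FrontendDev, which then asks a follow-up question before generating clients.
// Payload shapes are assumptions; check lib/a2a.ts for the real signatures.
async function coordinateApiContract(sessionId: string, openApiSpec: string) {
  await shareA2AData({
    from: "BackendDev",
    to: "FrontendDev",
    sessionId,
    data: { openApiSpec },
  });

  return sendA2ARequest({
    from: "FrontendDev",
    to: "BackendDev",
    sessionId,
    payload: { question: "Which endpoints require authentication?" },
  });
}
```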