TL;DR: MCPaaS delivers persistent AI context through the Model Context Protocol. Built on Cloudflare Workers with a 2.7KB Zig-WASM engine. 300+ edge locations. Sub-millisecond cold starts. Your project DNA, available to any AI that asks.

The Problem We're Solving

Every AI conversation starts from zero.

You open Claude. You explain your project. The stack. The architecture. The constraints. The AI helps you solve a problem. Session ends.

Next day, you open Claude again. You explain your project. Again. The stack. Again. The architecture. Again.

This isn't a minor inconvenience. It's a fundamental infrastructure gap.

  • Developers re-explain projects dozens of times per week
  • Teams duplicate onboarding for every AI tool
  • Enterprises can't standardize AI context across platforms
  • AI assistants forget everything between sessions

Context is trapped. In chat windows. In session storage. In platform silos.

Before MCPaaS

Session 1: AI knows context → Session 2: AI knows NOTHING → Session 3: AI knows NOTHING

After MCPaaS

Session 1 → calls MCPaaS. Session 2 → calls MCPaaS. Session 3 → calls MCPaaS.

Every session reads the same .faf file: your context.

The Solution: Context as a Service

MCPaaS makes context an endpoint.

Your project DNA, the things that make your project your project, lives at a URL. Any MCP-compatible AI can request it. Context persists across sessions, platforms, and time.

From chat-bound context to endpoint-accessible context. From session memory to eternal memory. From platform lock-in to universal portability.

What is MCP?

MCP (Model Context Protocol) is Anthropic's open standard for connecting AI assistants to external data sources and tools. Think of it as USB for AI: a universal interface that lets any compatible AI talk to any compatible service.

Released in late 2024, MCP is now supported by:

  • Claude Desktop and Claude Code
  • Various IDE integrations
  • A growing ecosystem of third-party tools

MCPaaS is an MCP server that serves one thing: your project context. But it serves it from the edge, globally, with sub-millisecond latency.

The Architecture

  • Runtime: Cloudflare Workers (V8 isolates)
  • Scoring engine: Zig compiled to WebAssembly (2.7KB)
  • Storage: Cloudflare KV (globally replicated)
  • Authentication: OAuth 2.0 via Auth0
  • Protocol: MCP (Model Context Protocol)
  • Format: FAF (IANA application/vnd.faf+yaml)
  • Edge locations: 300+ worldwide
  Claude / Grok / Gemini
        ↓  MCP Protocol
  Cloudflare Edge (300+ locations)
        ↓
  Worker V8 isolate (2.7KB Zig-WASM engine)
        ↓  read/write
  Cloudflare KV (global replication) → project.faf

Why Zig-WASM?

Serverless has a dirty secret: cold starts. The first request to a dormant function can take hundreds of milliseconds while the runtime initializes. For context that AI needs immediately, that's unacceptable.

Our solution: compile the scoring engine in Zig and ship it as WebAssembly.

  • 2.7KB total binary size: loads instantly
  • No JavaScript runtime overhead: pure computation
  • No Node.js dependencies: no node_modules, no bundling
  • Deterministic performance: same speed every time

"Mass was always the enemy." - The Zig philosophy

The result: sub-millisecond cold starts. Your AI gets context before the user finishes typing.
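The deployment model is worth seeing concretely. This is not MCPaaS's actual engine, just an illustrative sketch: a hand-assembled WebAssembly module exporting a single `add` function, instantiated synchronously with no framework, no bundler, and no node_modules. A real scoring engine would export richer functions, but the loading path is the same.

```typescript
// Illustrative only: the smallest useful WASM module (an exported i32 add),
// hand-encoded as raw bytes. The real MCPaaS engine is a 2.7KB Zig build
// exporting scoring functions; the instantiation mechanics are identical.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: one func, type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

// Synchronous instantiation: no event loop round-trip, no runtime warm-up.
const instance = new WebAssembly.Instance(new WebAssembly.Module(wasmBytes));
const add = instance.exports.add as (a: number, b: number) => number;
```

At a few dozen bytes here (and 2.7KB for the real engine), the module is small enough that parse-and-instantiate time is effectively noise, which is what makes the cold-start numbers possible.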

Why Cloudflare Workers?

Traditional serverless (Lambda, Cloud Functions) runs in a few regions. Your users in Tokyo wait for a round-trip to us-east-1. Cloudflare Workers run at the edge: 300+ locations worldwide. Context is served from the nearest point of presence.

  • Zero-scale infrastructure: pay only for invocations
  • No capacity planning: scales to zero, scales to millions
  • Global by default: deployed everywhere automatically
  • V8 isolates: lighter than containers, faster than VMs

The context follows the user, not the other way around.
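The edge read path is simple enough to sketch. This is a hypothetical Worker handler, not MCPaaS's actual source: the `CONTEXT_KV` binding name and the `faf:<project_id>` key scheme are illustrative assumptions.

```typescript
// Hypothetical sketch of a context-serving Worker. Binding name (CONTEXT_KV)
// and key scheme (faf:<project_id>) are illustrative, not MCPaaS's real config.
interface Env {
  CONTEXT_KV: { get(key: string): Promise<string | null> };
}

// Map a request URL like https://mcpaas.live/my-awesome-app to a KV key.
function kvKeyFor(url: string): string {
  return `faf:${new URL(url).pathname.slice(1)}`;
}

// In a real Worker, this object would be the module's default export.
const worker = {
  async fetch(request: Request, env: Env): Promise<Response> {
    const faf = await env.CONTEXT_KV.get(kvKeyFor(request.url));
    if (faf === null) {
      return new Response("context not found", { status: 404 });
    }
    // Serve the context under its IANA-registered media type.
    return new Response(faf, {
      headers: { "Content-Type": "application/vnd.faf+yaml" },
    });
  },
};
```

Because KV is replicated globally, every one of those 300+ locations can answer this read locally; there is no origin to call back to.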

How It Works

1. Store Your Context

Upload your .faf file, a YAML document containing your project's DNA:

# project.faf - Your project DNA
project:
  name: "my-awesome-app"
  goal: "Real-time collaboration platform"
  main_language: "TypeScript"

stack:
  frontend: "React 18"
  backend: "Node.js + Express"
  database: "PostgreSQL"
  hosting: "Vercel"

human_context:
  who: "3-person startup team"
  what: "Building Figma for spreadsheets"
  why: "Excel is stuck in 1995"
  how: "WebSocket sync, CRDT conflict resolution"
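Once parsed, a document like this is straightforward to type-check before serving. The sketch below is a guess at a minimal shape, inferred from the example above rather than from the FAF specification; the real validator may require more fields.

```typescript
// Hypothetical .faf shape, inferred from the example document above
// (not the authoritative FAF spec).
interface FafDocument {
  project: { name: string; goal: string; main_language: string };
  stack?: Record<string, string>;
  human_context?: { who: string; what: string; why: string; how: string };
}

// Minimal structural check: a .faf document must at least carry a project
// block with non-empty name, goal, and main_language strings.
function isValidFaf(doc: unknown): doc is FafDocument {
  if (typeof doc !== "object" || doc === null) return false;
  const project = (doc as { project?: unknown }).project;
  if (typeof project !== "object" || project === null) return false;
  const p = project as Record<string, unknown>;
  return ["name", "goal", "main_language"].every(
    (field) => typeof p[field] === "string" && p[field] !== ""
  );
}
```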

2. AI Requests Context

Any MCP-compatible AI can request your context via the standard MCP protocol:

// MCP tool call
{
  "tool": "get_context",
  "arguments": {
    "project_id": "my-awesome-app"
  }
}
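On the wire, MCP frames tool calls as JSON-RPC 2.0 requests with the `tools/call` method. A sketch of building that envelope for the call above (the request id is arbitrary; a real client increments it per request):

```typescript
// JSON-RPC 2.0 envelope for an MCP tool call, per the MCP wire format.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

function buildToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>
): ToolCallRequest {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

// The get_context call from the example above, fully framed.
const request = buildToolCall(1, "get_context", { project_id: "my-awesome-app" });
```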

3. Context Delivered

MCPaaS returns your full project context, scored and validated:

{
  "context": { ... },      // Your .faf content
  "score": 94,             // AI-readiness score (0-100)
  "tier": "gold",          // Trophy tier
  "validated": true,       // Format compliance
  "latency_ms": 0.8        // Edge delivery
}
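The scoring algorithm itself isn't documented here. As a purely hypothetical sketch of what an AI-readiness score could look like, here is a completeness metric over the top-level sections shown in the example .faf (the section list and weighting are assumptions, not MCPaaS's actual formula):

```typescript
// Hypothetical readiness score: percentage of expected top-level sections
// present. The real Zig engine's scoring formula is not specified here.
const EXPECTED_SECTIONS = ["project", "stack", "human_context"] as const;

function readinessScore(doc: Record<string, unknown>): number {
  const present = EXPECTED_SECTIONS.filter(
    (section) => typeof doc[section] === "object" && doc[section] !== null
  ).length;
  return Math.round((present / EXPECTED_SECTIONS.length) * 100);
}
```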

The AI now knows your project. No re-explaining. No context loss. Just work.

Who It's For

AI Tool Builders

Building Claude integrations, Grok applications, or multi-model orchestration? MCPaaS provides the context layer. One integration gives you persistent memory across all sessions. Stop building bespoke context management; use the standard.

Developer Teams

Onboarding AI to your codebase shouldn't take 20 minutes of pasting README files. MCPaaS serves your project.faf to any AI that asks. New team member? New AI tool? Same context, instantly.

Enterprise Architects

Evaluating MCP for production? MCPaaS is reference-grade infrastructure. IANA-registered format. USPTO trademark filed. Already submitted to the official Anthropic MCP Registry (PR #2759). This isn't a weekend project; it's infrastructure you can build on.

Platform Engineers

Need context delivery without managing infrastructure? MCPaaS is zero-scale by design. No Kubernetes. No container orchestration. No cold start tuning. It just works.

The Credentials

We're not asking you to trust a random npm package. MCPaaS is built on a foundation of verified credentials:

  • MCP Registry: PR #2759 submitted to modelcontextprotocol/servers
  • IANA Registration: application/vnd.faf+yaml is an official media type
  • USPTO Trademark: Serial No. 99596802 (filed January 15, 2026)
  • FAF Foundation: published at foundation.faf.one
  • 21,000+ npm downloads: faf-cli, claude-faf-mcp, faf-mcp, grok-faf-mcp

Try It

  • MCPaaS Live (production endpoint): mcpaas.live
  • GitHub (source code): faf-mcpaas
  • FAF CLI (create .faf files): npm install -g faf-cli
  • Documentation (full spec): faf.one

The Numbers

  • 300+ edge locations
  • 2.7KB Zig-WASM binary
  • <1ms cold start
  • 20K+ npm downloads

What's Next

MCPaaS is live and production-ready. Coming soon:

  • Team workspaces: shared context across organizations
  • Version history: context evolution over time
  • Analytics: see how AI uses your context
  • More MCP tools: beyond context retrieval

We're also working with early adopters building voice interfaces, multi-model orchestration, and enterprise AI deployments. If you're pushing the boundaries of what MCP can do, we want to hear from you.