Concept
MCP protocol basics
The Model Context Protocol is a JSON-RPC dialect for AI clients to discover and invoke tools. Lumin speaks it over Streamable HTTP.
What MCP is
MCP is an open standard (governed by the Linux Foundation's Agentic AI Foundation as of 2026) for exposing tools to AI clients. Servers publish a tool list with schemas; clients call tools by name with structured arguments. The protocol layer is JSON-RPC 2.0; the transport is either stdio or Streamable HTTP.
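Concretely, every message on the wire is a JSON-RPC 2.0 envelope: a version tag, a client-chosen `id`, a method, and structured params. A minimal sketch of a request and its matching response (the empty tool list here is illustrative):

```typescript
// A JSON-RPC 2.0 request: version tag, client-chosen id, method, params.
const listRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
  params: {},
};

// The response echoes the same id and carries either `result` or `error`,
// never both. Clients match responses to requests by id.
const listResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: { tools: [] },
};

// Correlating a response with its request is just an id comparison.
function isResponseFor(req: { id: number }, res: { id: number }): boolean {
  return req.id === res.id;
}

console.log(isResponseFor(listRequest, listResponse));
```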
Three primitives
- Tools. Callable functions with structured input/output. Lumin exposes 50 of these.
- Resources. Readable content (files, snapshots). Not used by Lumin yet.
- Prompts. Pre-built prompt templates. Not used by Lumin.
The two methods you will use
tools/list returns the complete tool catalog with input schemas. Call it once after connecting to populate your client.
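Each entry in the catalog declares its arguments as a JSON Schema under `inputSchema`. A sketch of one entry, using a hypothetical tool name (not part of Lumin's real catalog):

```typescript
// One entry from a hypothetical tools/list result. The name and
// description are illustrative; the shape is what matters.
const tool = {
  name: "search_documents", // hypothetical tool name
  description: "Full-text search over indexed documents.",
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string" },
      limit: { type: "number" },
    },
    required: ["query"],
  },
};

console.log(tool.name, tool.inputSchema.required);
```

Clients typically translate each `inputSchema` directly into the function-calling schema their model expects.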
tools/call invokes a tool by name with arguments. On success it returns a content array (text and structured). Protocol failures surface as a JSON-RPC error object; tool-level failures come back as a normal result with isError set to true.
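A tools/call exchange, sketched with the same hypothetical tool name and a made-up result (neither is from Lumin's real catalog):

```typescript
// Hypothetical tools/call request: tool name plus structured arguments.
const callRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "search_documents", // hypothetical tool
    arguments: { query: "quarterly report" },
  },
};

// A result carries a content array; tool-level failures reuse the
// same shape with isError set to true.
const callResult = {
  content: [{ type: "text", text: "3 documents matched." }],
  isError: false,
};

// Collect the text parts of a result into one string.
function textOf(result: { content: { type: string; text?: string }[] }): string {
  return result.content
    .filter((c) => c.type === "text")
    .map((c) => c.text ?? "")
    .join("\n");
}

console.log(textOf(callResult)); // → 3 documents matched.
```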
Streamable HTTP vs SSE
Lumin uses Streamable HTTP transport with enableJsonResponse: true. Each call is a single HTTP request with a single JSON response. We do not use SSE for individual tool calls because every Lumin tool returns in under 2 seconds.
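With enableJsonResponse, a tool call is one POST carrying one JSON-RPC body and one JSON body back. A client-side sketch of building that request; the endpoint URL is a placeholder, and real deployments add auth plus the Mcp-Session-Id header after initialization:

```typescript
// Build the HTTP request for a single JSON-RPC call over Streamable HTTP.
// The URL is a placeholder, not a real Lumin endpoint.
function buildPost(url: string, body: unknown) {
  return {
    url,
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Streamable HTTP clients advertise both content types; with
      // enableJsonResponse the server answers with application/json.
      Accept: "application/json, text/event-stream",
    },
    body: JSON.stringify(body),
  };
}

const req = buildPost("https://example.com/mcp", {
  jsonrpc: "2.0",
  id: 3,
  method: "tools/call",
  params: { name: "search_documents", arguments: { query: "hello" } },
});

console.log(req.method, req.headers["Content-Type"]);
```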
Further reading
For the full spec, see modelcontextprotocol.io. For the SDKs we use server-side, see github.com/modelcontextprotocol.