Guide
Integrate into your app
Two shapes put Lumin inside another product: a conversational surface where users type questions, and a button-driven feature where your UI sends a fixed prompt and renders structured data. Both run against the same MCP endpoint with the same API key.
When to use this
The two shapes
Pick the shape based on whether the user composes the question or your product does.
| Shape | User surface | Prompt source | Model output | Best for |
|---|---|---|---|---|
| Chat | Text input, message thread | Typed by the user | Free-form prose | Coaching, exploration, "ask your chart" |
| Button | Buttons, forms, dashboards | Hardcoded by your app | Structured JSON | Forecasts, embedded widgets, scheduled reports |
Both shapes share infrastructure. You can ship them in the same product, against the same Lumin API key, billed on the same tier.
The architecture
```
+----------------+
| Your frontend  |     user clicks a button or types a question
+--------+-------+
         |
         v
+----------------+
| Your backend   |     runs an LLM SDK with your model key
+--------+-------+
         |   mcp_servers: { url: mcp.lumin.guru/mcp, token: your Lumin key }
         v
+----------------+
|   Lumin MCP    |     50 KP tools the model can call as needed
+----------------+
```

What you bring
- Your model provider key. The model bill is yours.
- Your prompt design. The system prompt steers tool selection.
- Your UI. Lumin returns data; your product renders it.
- Your domain data. Combine the chart with the user's history, holdings, calendar.
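Combining the chart with your own data happens in the user message you send. A minimal sketch of composing that message from a birth profile plus arbitrary domain fields — the `BirthProfile` shape and field names here mirror the Pattern 2 example and are assumptions, not an SDK requirement:

```typescript
// Shape assumed to match the profile used in the SDK examples below.
type BirthProfile = {
  birthDatetime: string;    // e.g. "1992-08-14T04:32:00"
  latitude: number;
  longitude: number;
  utcOffsetMinutes: number; // e.g. 330 for Colombo
};

// Merge the chart inputs with your own domain data (spending history,
// holdings, calendar) into the single user message the model receives.
export function buildUserContent(
  profile: BirthProfile,
  domain: Record<string, unknown> = {},
): string {
  return JSON.stringify({
    birth_datetime: profile.birthDatetime,
    latitude: profile.latitude,
    longitude: profile.longitude,
    utc_offset_minutes: profile.utcOffsetMinutes,
    ...domain,
  });
}
```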
What Lumin brings
- 50 KP tools: chart, dashas, transits, significators, timing.
- The math. Each call returns the components used to derive it.
- Auth and metering. Your Lumin API key counts each tool call against a tier.
Pattern 1: chat surface
Send the user's question to the model with Lumin's MCP server attached. The model picks tools from Lumin's catalog as it reasons. You stream the model's text back to your UI.
```typescript
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic();

const response = await anthropic.beta.messages.create({
  model: "claude-sonnet-4-5",
  max_tokens: 4096,
  mcp_servers: [
    {
      type: "url",
      url: "https://mcp.lumin.guru/mcp",
      name: "lumin",
      authorization_token: process.env.LUMIN_API_KEY,
    },
  ],
  messages: [
    {
      role: "user",
      content:
        "I was born 1992-08-14 04:32 in Colombo (6.927, 79.861, +330). " +
        "What dasha am I in and what does it suggest about money this month?",
    },
  ],
  betas: ["mcp-client-2025-04-04"],
});
```

The model will call `set_birth_profile`, then `get_smart_current_dasha`, `get_financial_analysis`, and `get_transit_advanced`, weave the results, and return narrative text. Your UI renders the text in a chat bubble.
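To render the reply, you need the plain text out of the response's content blocks, skipping the tool-use blocks the model emits along the way. A small extraction helper — the block shape here is an assumption matching the text blocks used in the examples:

```typescript
// Minimal view of an SDK content block: text blocks carry `text`;
// other block types (tool use, tool results) are skipped for display.
type ContentBlock = { type: string; text?: string };

// Concatenate all text blocks into the string your chat bubble renders.
export function extractText(blocks: ContentBlock[]): string {
  return blocks
    .filter(
      (b): b is { type: "text"; text: string } =>
        b.type === "text" && typeof b.text === "string",
    )
    .map((b) => b.text)
    .join("");
}
```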
Pattern 2: button-driven feature
Same SDK call, but the prompt is fixed by your app and the system instruction asks for JSON. You parse the JSON and render it in your own components, with no chat surface visible to the user.
Worked example: a spending forecaster button inside an expense tracker.
```typescript
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic();

const SYSTEM = `You forecast personal spending risk using KP astrology.
Use the Lumin tools to compute the user's current dasha, antardasha, and
upcoming financial transits for the requested window. Identify favourable
and cautious days.

Return ONE JSON object matching this schema. No prose, no markdown:
{
  "favorable_windows": [
    { "start": "YYYY-MM-DD", "end": "YYYY-MM-DD", "score": 0.0, "reason": string }
  ],
  "cautious_windows": [
    { "start": "YYYY-MM-DD", "end": "YYYY-MM-DD", "score": 0.0, "reason": string }
  ],
  "summary": string
}`;

export async function forecastSpending(profile: BirthProfile, days = 30) {
  const response = await anthropic.beta.messages.create({
    model: "claude-sonnet-4-5",
    max_tokens: 4096,
    system: SYSTEM,
    mcp_servers: [
      {
        type: "url",
        url: "https://mcp.lumin.guru/mcp",
        name: "lumin",
        authorization_token: process.env.LUMIN_API_KEY,
      },
    ],
    messages: [
      {
        role: "user",
        content: JSON.stringify({
          birth_datetime: profile.birthDatetime,
          latitude: profile.latitude,
          longitude: profile.longitude,
          utc_offset_minutes: profile.utcOffsetMinutes,
          window_days: days,
        }),
      },
    ],
    betas: ["mcp-client-2025-04-04"],
  });

  const text = response.content
    .filter((b) => b.type === "text")
    .map((b) => (b as { text: string }).text)
    .join("");

  return JSON.parse(text) as SpendingForecast;
}
```

Your frontend renders `favorable_windows` as green strips on the calendar and `cautious_windows` as red strips. The user never sees a model and never sees Lumin. They see your product.
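A bare `JSON.parse` on model output is brittle: despite the "no markdown" instruction, models occasionally wrap the JSON in a code fence. A defensive parse step — the fence-stripping regex and the shape check are assumptions of this sketch, not SDK behaviour, and the `SpendingForecast` type is spelled out here since the example above only casts to it:

```typescript
type ForecastWindow = { start: string; end: string; score: number; reason: string };
type SpendingForecast = {
  favorable_windows: ForecastWindow[];
  cautious_windows: ForecastWindow[];
  summary: string;
};

// Strip an optional ```json fence, parse, and sanity-check the shape
// before trusting the result in your UI.
export function parseForecast(text: string): SpendingForecast {
  const cleaned = text
    .trim()
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/\s*```$/, "");
  const parsed = JSON.parse(cleaned);
  if (
    !Array.isArray(parsed.favorable_windows) ||
    !Array.isArray(parsed.cautious_windows) ||
    typeof parsed.summary !== "string"
  ) {
    throw new Error("Forecast JSON missing expected fields");
  }
  return parsed as SpendingForecast;
}
```

On a shape failure you can retry the model call once before surfacing an error to the user.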
Authentication
Generate an API key at app.lumin.guru/developer. The key starts with `mcp_` and goes in the `authorization_token` field of the SDK config (or as `Authorization: Bearer` for raw HTTP). One key per application; rotate by deactivating it and creating a new one.
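For raw HTTP calls (no SDK), the same key goes in a bearer header. A small sketch — the header names are standard HTTP, and the `mcp_` prefix check simply mirrors the key format described above:

```typescript
// Build headers for direct HTTP calls to https://mcp.lumin.guru/mcp.
export function luminHeaders(apiKey: string): Record<string, string> {
  if (!apiKey.startsWith("mcp_")) {
    throw new Error("Lumin API keys start with mcp_");
  }
  return {
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
  };
}
```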
Each tool call counts against your tier's daily limit. See rate limits for the per-tier numbers and API key vs OAuth for when to use OAuth instead.
Where to next
- Tool reference to pick the tools to mention in your system prompt.
- Recover from errors for retry semantics on transient failures.
- Stream responses for streaming the model's text to your chat UI.