Connect an AI Agent
Configure an MCP client, give it access to a patient via the four VFS tools, and walk through a real clinical question end-to-end.
In this tutorial you will connect a language model to Patient Memory using the Model Context Protocol (MCP). By the end, the agent will be able to browse a patient's conditions, read a condition story, and answer a clinical question that requires traversing relationships across the graph.
Prerequisites:
- A patient already ingested. Complete Ingest and Query a Patient first.
- Node.js 18+.
- An Anthropic API key set in your environment:
```bash
export ANTHROPIC_API_KEY="your-anthropic-key"
```

Replace `<workspace-id>` with your Patient Memory workspace ID in the code below.
Time: ~20 minutes.
How the MCP server works
Patient Memory exposes an MCP endpoint at /mcp over HTTP with SSE streaming. When a client connects, the server assigns a session ID via the Mcp-Session-Id response header. Include that header on subsequent requests to reuse the session.
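The session handshake can be sketched in plain TypeScript. The helper names (`extractSessionId`, `withSession`) are hypothetical, not part of any SDK; only the `Mcp-Session-Id` header itself comes from the protocol:

```typescript
// Sketch: capture the session ID from an /mcp response and reuse it on
// follow-up requests. Uses the standard Headers API (global in Node 18+).
function extractSessionId(headers: Headers): string | null {
  // Header names are case-insensitive; Headers normalizes lookups.
  return headers.get("Mcp-Session-Id");
}

function withSession(sessionId: string | null): Record<string, string> {
  // Build headers for a subsequent request, reusing the session when present.
  const h: Record<string, string> = { "Content-Type": "application/json" };
  if (sessionId) h["Mcp-Session-Id"] = sessionId;
  return h;
}
```

MCP client libraries do this for you; the sketch only shows what "include that header on subsequent requests" means at the HTTP level.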
The server exposes four tools:
| Tool | Takes | Returns |
|---|---|---|
| `browse_patient` | `path: /patient/{id}/...` | Directory listing or file preview |
| `read_patient` | `path`, `format?`, `token_budget?` | File content |
| `search_patient` | `patientId`, `query` | Matching paths with previews |
| `get_patient_info` | `patientId` | Demographics and pipeline stats (JSON) |
The {id} segment in VFS paths is the VFS patient ID, which is the patient.id extracted from the ingested record, returned in the ingest response. This may differ from the registry key used at ingest time (demo-patient in the previous tutorial). If they differ, use the patient.id from the ingest response, or call GET /patients/{registryKey} to look it up.
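A small helper can make the distinction concrete. The ingest-response shape (`{ patient: { id } }`) is assumed from the previous tutorial, and `vfsPatientId` is a hypothetical name:

```typescript
// Sketch: resolve the VFS patient ID from an ingest response.
// The server-derived patient.id is what appears in VFS paths; it may
// differ from the registry key you used at ingest time.
interface IngestResponse {
  patient: { id: string };
}

function vfsPatientId(res: IngestResponse, registryKey: string): string {
  // Prefer the server-derived ID; fall back to the registry key only
  // if the response is missing one.
  return res.patient?.id ?? registryKey;
}
```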
Option A: Claude Desktop
Add Patient Memory as an MCP server in Claude Desktop's config file.
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
```json
{
  "mcpServers": {
    "patient-memory": {
      "command": "npx",
      "args": ["mcp-remote", "https://api.<your-workspace-id>.clinia.cloud/mcp"]
    }
  }
}
```

Restart Claude Desktop. The four Patient Memory tools (browse_patient, read_patient, search_patient, get_patient_info) will appear in the tool picker.
Try this prompt:
Is demo-patient's CKD being adequately monitored given that it's a complication of their diabetes?
Claude will call browse_patient to discover the conditions, read_patient to load the diabetes story (which includes the CKD complication and its monitoring labs), and synthesise an answer from the graph data.
Option B: Custom agent with the MCP SDK
Install the SDK:
```bash
npm install @modelcontextprotocol/sdk @anthropic-ai/sdk
```

Connect the client
```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

const transport = new SSEClientTransport(new URL("https://api.<workspace-id>.clinia.cloud/mcp"));
const mcp = new Client({ name: "clinical-agent", version: "1.0.0" });
await mcp.connect(transport);

const { tools } = await mcp.listTools();
console.log(tools.map((t) => t.name));
// → ["browse_patient", "read_patient", "search_patient", "get_patient_info"]
```

Wire the tools to Claude
Pass the MCP tools directly to the Anthropic SDK and let Claude decide when to call them:
```typescript
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic();

const mcpTools = tools.map((t) => ({
  name: t.name,
  description: t.description ?? "",
  input_schema: t.inputSchema,
}));
```
```typescript
async function runAgent(question: string): Promise<string> {
  const messages: Anthropic.MessageParam[] = [{ role: "user", content: question }];

  while (true) {
    const response = await anthropic.messages.create({
      model: "claude-opus-4-5",
      max_tokens: 4096,
      tools: mcpTools,
      messages,
    });

    if (response.stop_reason === "end_turn") {
      return response.content
        .filter((b) => b.type === "text")
        .map((b) => b.text)
        .join("");
    }

    // Collect tool calls from this turn
    const toolUses = response.content.filter((b) => b.type === "tool_use");
    if (toolUses.length === 0) break;

    // Execute each tool call against Patient Memory
    const toolResults = await Promise.all(
      toolUses.map(async (use) => {
        if (use.type !== "tool_use") return null;
        const result = await mcp.callTool({
          name: use.name,
          arguments: use.input as Record<string, unknown>,
        });
        return {
          type: "tool_result" as const,
          tool_use_id: use.id,
          content: result.content
            .filter((c) => c.type === "text")
            .map((c) => c.text)
            .join("\n"),
        };
      }),
    );

    // Append the assistant turn and tool results, then loop
    messages.push({ role: "assistant", content: response.content });
    messages.push({
      role: "user",
      content: toolResults.filter(Boolean) as Anthropic.ToolResultBlockParam[],
    });
  }

  return "";
}
```

Run a clinical question
```typescript
const answer = await runAgent(
  "For patient demo-patient: is the CKD being adequately monitored " +
    "given that it's a complication of the diabetes?",
);
console.log(answer);

await mcp.close();
```

What the agent does
The agent typically calls tools in this sequence:
1. `browse_patient` with path `/patient/demo-patient/conditions/active` (discovers available conditions)
2. `read_patient` with path `/patient/demo-patient/conditions/active/type_2_diabetes_mellitus/_story.md` (reads the diabetes story, which includes the CKD complication and its monitoring labs)
3. Optionally, `read_patient` with path `/patient/demo-patient/conditions/active/chronic_kidney_disease/_story.md` for additional detail
4. Synthesises the answer: CKD is flagged as a complication, eGFR is listed as the monitoring lab, and the last eGFR value and whether it's in range
The agent reaches this conclusion without any application code that knows about CKD or eGFR. It discovers the relationship from the condition story, which was assembled by the pipeline from the clinical knowledge base.
Understanding the tool signatures
browse_patient
```
path: /patient/{patientId}/conditions/active
```

The patient ID in the path is the VFS patient ID, which is the value the server derived from the source records (the FHIR Patient.id, not your registry key). These usually match, but use get_patient_info to confirm if you're unsure.
read_patient
```
path: /patient/{patientId}/conditions/active/type_2_diabetes_mellitus/_story.md
format: "narrative" | "structured" | "compact" (default: narrative)
token_budget: number (optional, approximate)
```

Use `compact` when the agent is scanning many items. Use `token_budget` to prevent a single read from consuming the full context window.
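A small argument builder can keep these parameters straight. `readArgs` is a hypothetical helper, not part of the MCP SDK; the path layout follows the VFS schema used throughout this tutorial:

```typescript
// Sketch: build the arguments object for a read_patient tool call.
type StoryFormat = "narrative" | "structured" | "compact";

function readArgs(
  patientId: string,
  condition: string,
  format: StoryFormat = "narrative",
  tokenBudget?: number,
): Record<string, unknown> {
  const args: Record<string, unknown> = {
    path: `/patient/${patientId}/conditions/active/${condition}/_story.md`,
    format,
  };
  // token_budget is optional; omit it entirely when unset.
  if (tokenBudget !== undefined) args.token_budget = tokenBudget;
  return args;
}
```

The result can then be passed as the `arguments` field of `mcp.callTool({ name: "read_patient", arguments: ... })`.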
search_patient
```
patientId: "123" ← the VFS patient ID (patient.id from the ingest response)
query: "metformin"
```

BM25 search over all entity names, codes, attribute text, and narrative content.
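Search results come back as MCP content blocks. A sketch of post-processing them, assuming the `{ type: "text", text }` block shape used by the tool-result handling in the agent loop above (`previews` is a hypothetical helper):

```typescript
// Sketch: pull the text previews (matching paths) out of a tool result's
// content blocks, skipping any non-text blocks.
interface ContentBlock {
  type: string;
  text?: string;
}

function previews(content: ContentBlock[]): string[] {
  return content
    .filter((c) => c.type === "text" && typeof c.text === "string")
    .map((c) => c.text as string);
}
```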
Next steps
- MCP Tools reference for full parameter documentation
- Virtual File System to understand the path schema your agent is navigating
- Read a condition story for a focused how-to on format and token budget choices