Quick Start

Install the SDK, configure your backend (Hono or Express), and optionally generate types. ModelKit supplies the model ID that your app passes to its existing AI SDK; your API keys and inference stay in your code.

ModelKit is not a managed inference service. Overrides are stored in Redis and editable via the REST API or Studio, so you can change which model a feature uses without redeploying.

1. Install

```bash
npm install @benrobo/modelkit
# or: bun add @benrobo/modelkit
```

2. Backend (Hono)

```typescript
import { Hono } from "hono";
import { createModelKit, createRedisAdapter } from "@benrobo/modelkit";
import { createModelKitHonoRouter } from "@benrobo/modelkit/hono";

// Persist overrides in Redis (defaults to localhost if REDIS_URL is unset)
const adapter = createRedisAdapter({
  url: process.env.REDIS_URL || "redis://localhost:6379",
});
const modelKit = createModelKit(adapter);

// getModel returns the effective model ID (string): the override if one is
// set, otherwise the fallback passed here. Hand it to your AI SDK.
const modelId = await modelKit.getModel("chatbot", "anthropic/claude-3.5-sonnet");

// Mount the ModelKit REST API and Studio under /api/modelkit
const app = new Hono();
app.route("/api/modelkit", createModelKitHonoRouter(modelKit));
```

3. Backend (Express)

```typescript
import express from "express";
import { createModelKit, createRedisAdapter } from "@benrobo/modelkit";
import { createModelKitExpressRouter } from "@benrobo/modelkit/express";

const app = express();
app.use(express.json()); // Required for POST body parsing

// Persist overrides in Redis; createModelKit wraps the adapter with caching
const adapter = createRedisAdapter({
  url: process.env.REDIS_URL || "redis://localhost:6379",
});
const modelKit = createModelKit(adapter);

// Mount the ModelKit REST API and Studio at /api/modelkit
app.use("/api/modelkit", createModelKitExpressRouter(modelKit));
app.listen(3000);
```
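With the REST API mounted, the overrides described above can be edited without a redeploy. As an illustrative sketch only: the endpoint path and payload shape below are assumptions, not documented API — check the routes your mounted router actually exposes before using them.

```typescript
// HYPOTHETICAL: endpoint path and payload shape are assumptions for
// illustration; consult the mounted router's actual routes.
const OVERRIDE_ENDPOINT = "http://localhost:3000/api/modelkit/overrides/chatbot";

// Assumed payload: point the "chatbot" feature at a different model.
const payload = { model: "openai/gpt-4o-mini" };

// Sending the override would look roughly like this:
async function setOverride(): Promise<void> {
  await fetch(OVERRIDE_ENDPOINT, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}
```

After an override is stored, subsequent `getModel("chatbot", ...)` calls resolve to it instead of the fallback.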

4. Type safety

Generate types from your running API:

```bash
npx modelkit-generate --api-url http://localhost:3000/api/modelkit
```

Then parameterize the adapter with the generated `FeatureId`:

```typescript
import type { FeatureId } from "./modelkit.generated";

const adapter = createRedisAdapter<FeatureId>({ url: "..." });
const modelKit = createModelKit(adapter);

await modelKit.getModel("chatbot", "anthropic/claude-3.5-sonnet"); // ✅
await modelKit.getModel("invalid", "gpt-4"); // ❌ compile error
```

For the Studio UI, see Studio.

5. Examples — use the model ID with your SDK

getModel returns the effective model ID (string). Use getConfig(featureId) when you need the full override (temperature, maxTokens, etc.) instead of just the ID. Pass the ID to your SDK with your own API key; inference remains in your app.
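To apply a full override rather than just the model ID, the result of `getConfig` can be merged with per-feature defaults before calling your SDK. A minimal sketch, assuming `getConfig` resolves to an object with optional `model`, `temperature`, and `maxTokens` fields — the exact shape beyond the fields named above is an assumption:

```typescript
// Assumed shape of what getConfig("chatbot") resolves to (hypothetical).
interface FeatureConfig {
  model?: string;
  temperature?: number;
  maxTokens?: number;
}

// Merge an override (possibly null) with per-feature defaults before
// handing the result to your AI SDK. Pure helper, illustrative only.
function resolveCallParams(
  override: FeatureConfig | null,
  fallbackModel: string,
): Required<FeatureConfig> {
  return {
    model: override?.model ?? fallbackModel,
    temperature: override?.temperature ?? 0.7,
    maxTokens: override?.maxTokens ?? 1024,
  };
}

// e.g. const config = await modelKit.getConfig("chatbot");
const params = resolveCallParams({ temperature: 0.2 }, "anthropic/claude-3.5-sonnet");
// params.model === "anthropic/claude-3.5-sonnet", params.temperature === 0.2
```

The merged object can then be spread into your SDK call (e.g. `streamText({ model: openrouter(params.model), temperature: params.temperature, ... })`).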

Vercel AI SDK + OpenRouter

```typescript
import { createOpenRouter } from "@openrouter/ai-sdk-provider";
import { streamText } from "ai";

// In your route: get effective model ID, then stream with your key
const modelId = await modelKit.getModel("chatbot", "anthropic/claude-3.5-sonnet");
const openrouter = createOpenRouter({ apiKey: process.env.OPENROUTER_API_KEY });

const result = streamText({
  model: openrouter(modelId),
  messages: [{ role: "user", content: "Hello" }],
});
```

OpenRouter HTTP API

```typescript
const modelId = await modelKit.getModel("chatbot", "anthropic/claude-3.5-sonnet");

const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: modelId,
    messages: [{ role: "user", content: "Hello" }],
  }),
});
```