ModelKit

Type-safe, runtime-overridable model configuration for your existing AI stack.

API keys and inference remain in your application (Vercel AI SDK, OpenRouter, or any provider). ModelKit supplies the model ID and optional parameters at runtime, so the model each feature uses can be changed without redeploying. Not a managed inference service. Supported backends: Hono, Express.

Problems ModelKit solves

  • Model choice is fixed in code or config. Changing which model a feature uses (e.g. for cost, quality, or an outage) means editing code or config and redeploying.
  • Only engineers can switch models. Product or ops can’t roll back or tune models without a deploy.
  • Feature and model IDs are untyped. Typos and invalid model IDs surface at runtime instead of at build time.

ModelKit moves model selection to runtime: your app calls getModel(featureId, fallback), gets the effective model ID (from overrides or fallback), and passes it to your existing SDK. Overrides are stored in Redis and editable via API or Studio—no redeploy required.
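The wiring described above can be sketched as follows. This is a simplified illustration, not ModelKit's actual API: the override store is an in-memory Map standing in for Redis, and generateText is a stub standing in for your real SDK call (e.g. the Vercel AI SDK, invoked with your own API key).

```typescript
// Illustrative sketch: resolve the model at request time, then call your SDK.
const overrides = new Map<string, string>(); // Redis-backed in ModelKit

function getModel(featureId: string, fallback: string): string {
  return overrides.get(featureId) ?? fallback;
}

// Stub for your inference SDK; a real app would call e.g. the Vercel AI
// SDK's generateText here, with its own API key.
async function generateText(opts: { model: string; prompt: string }): Promise<string> {
  return `[${opts.model}] response`;
}

async function summarize(text: string): Promise<string> {
  const model = getModel("summarize", "openai/gpt-4o-mini");
  return generateText({ model, prompt: `Summarize: ${text}` });
}

// Setting an override changes the model for this feature with no redeploy:
overrides.set("summarize", "anthropic/claude-3.5-sonnet");
summarize("hello").then(console.log); // now routed to the override model
```

The point of the shape: model selection is a data lookup at call time, so swapping models is a write to the store rather than a code change.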

Why ModelKit

Type-safe

Generated feature IDs and 340+ OpenRouter model IDs; invalid IDs fail at compile time.
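Compile-time checking of this kind typically comes from generated union types. A minimal sketch, with illustrative names and a two-entry model union standing in for the generated catalog:

```typescript
// In ModelKit these unions would be code-generated from your feature list
// and the OpenRouter model catalog; the names here are illustrative.
type FeatureId = "summarize" | "chat";
type ModelId =
  | "openai/gpt-4o-mini"
  | "anthropic/claude-3.5-sonnet"; // the generated file would list 340+ IDs

function getModel(featureId: FeatureId, fallback: ModelId): ModelId {
  // Override lookup elided; see "How it works".
  return fallback;
}

getModel("summarize", "openai/gpt-4o-mini");     // ✓ compiles
// getModel("sumarize", "openai/gpt-4o-mini");   // ✗ typo: not a FeatureId
// getModel("summarize", "openai/gpt-99-ultra"); // ✗ not a generated ModelId
```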

Runtime overrides

Change model and parameters per feature in production without redeploying.

Studio

A visual interface to list and edit overrides so non-engineers can manage model selection.

How it works

getModel(featureId, fallback) → effective model ID

Pass that ID into your SDK (Vercel AI, OpenRouter, etc.) with your own API key. Lookup order: in-memory cache (60s) → Redis override → fallback. If Redis is unavailable, the fallback is used so the app continues to run.
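The lookup order above can be sketched as follows; the internals and key naming are assumptions, and the Redis client is stubbed behind a minimal interface:

```typescript
// Sketch of the resolution order: in-memory cache (60s TTL) → Redis
// override → fallback, with the fallback used when Redis errors.
interface KvClient { get(key: string): Promise<string | null>; }

const CACHE_TTL_MS = 60_000;
const cache = new Map<string, { value: string | null; expires: number }>();

async function resolveModel(kv: KvClient, featureId: string, fallback: string): Promise<string> {
  const hit = cache.get(featureId);
  if (hit && hit.expires > Date.now()) return hit.value ?? fallback;

  let value: string | null = null;
  try {
    value = await kv.get(`modelkit:override:${featureId}`); // key shape is an assumption
  } catch {
    return fallback; // Redis unavailable: keep serving on the fallback
  }
  cache.set(featureId, { value, expires: Date.now() + CACHE_TTL_MS });
  return value ?? fallback;
}

// Usage with a stubbed store that has one override set:
const kv: KvClient = {
  get: async (key) => (key.endsWith("chat") ? "anthropic/claude-3.5-sonnet" : null),
};
resolveModel(kv, "chat", "openai/gpt-4o-mini").then(console.log); // → "anthropic/claude-3.5-sonnet"
```

Caching the absence of an override (a null value) matters as much as caching its presence: it keeps every request from hitting Redis for features that only ever use their fallback.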

Studio

Studio is a visual interface for your overrides: list, set, or clear them and view live SDK snippets. You mount it in your app and point it at the route that exposes the ModelKit API (e.g. /api/modelkit). Supported backends: Hono, Express.
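For illustration only, here is the kind of route Studio gets pointed at, built with Node's http module. ModelKit ships its own Hono/Express handler; the endpoint path and response shape below are assumptions, not its documented API.

```typescript
// Hypothetical stand-in for the mounted ModelKit API route.
import { createServer, IncomingMessage, ServerResponse } from "node:http";

const overrides: Record<string, string> = { chat: "anthropic/claude-3.5-sonnet" };

function modelkitApi(req: IncomingMessage, res: ServerResponse): void {
  if (req.method === "GET" && req.url === "/api/modelkit/overrides") {
    res.setHeader("content-type", "application/json");
    res.end(JSON.stringify(overrides)); // Studio would render this list
    return;
  }
  res.statusCode = 404;
  res.end();
}

const server = createServer(modelkitApi);
server.listen(0); // in a real app this sits under your existing Hono/Express routes
```

In practice the only integration step is mounting the provided handler and giving Studio that URL; Studio itself holds no state of its own.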

Studio documentation →
