Type-safe, runtime-overridable model configuration for your existing AI stack.
API keys and inference remain in your application (Vercel AI SDK, OpenRouter, or any provider). ModelKit supplies the model ID and optional parameters at runtime, so each feature's model can be changed without redeploying. It is not a managed inference service. Supported backends: Hono, Express.
ModelKit moves model selection to runtime: your app calls getModel(featureId, fallback), receives the effective model ID (an override if one is set, otherwise the fallback), and passes it to your existing SDK. Overrides are stored in Redis and editable via the API or Studio; no redeploy required.
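A minimal sketch of that call pattern. The feature IDs and the override store here are stand-ins (in a real app the IDs are generated and the overrides live in Redis behind ModelKit's API); only the getModel(featureId, fallback) shape comes from the docs above.

```typescript
// Stand-in for the generated feature ID union; real IDs are generated from your config.
type FeatureId = "summarize" | "chat";

// Stand-in for the Redis-backed override store.
const overrides = new Map<FeatureId, string>();

// Effective model: runtime override if present, otherwise the compile-time fallback.
async function getModel(featureId: FeatureId, fallback: string): Promise<string> {
  return overrides.get(featureId) ?? fallback;
}

// At the call site you pass the effective ID into your existing SDK, e.g. with the
// Vercel AI SDK: streamText({ model: openrouter(model), prompt: text }).
// The model ID is returned here so the sketch runs without an SDK or API key.
async function summarize(_text: string): Promise<string> {
  const model = await getModel("summarize", "openai/gpt-4o-mini");
  return model;
}
```

Because the fallback is part of the call, the feature keeps working even before any override exists.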
Type-safe
Generated feature IDs and 340+ OpenRouter model IDs; invalid IDs fail at compile time.
Runtime overrides
Change model and parameters per feature in production without redeploying.
Studio
Visual interface to list and edit overrides so non-engineers can manage model selection.
getModel(featureId, fallback) → effective model ID
Pass that ID into your SDK (Vercel AI, OpenRouter, etc.) with your own API key. Lookup order: in-memory cache (60s) → Redis override → fallback. If Redis is unavailable, the fallback is used so the app continues to run.
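The lookup order above can be sketched as follows. This is an illustration of the described behavior, not ModelKit's internals; the Redis key format and client interface are assumptions.

```typescript
// Minimal Redis-like interface so the sketch is self-contained.
interface RedisLike {
  get(key: string): Promise<string | null>;
}

const CACHE_TTL_MS = 60_000; // matches the documented 60s in-memory cache
const cache = new Map<string, { value: string | null; expires: number }>();

// Lookup order: in-memory cache (60s) -> Redis override -> fallback.
async function resolveModel(
  redis: RedisLike,
  featureId: string,
  fallback: string
): Promise<string> {
  const hit = cache.get(featureId);
  if (hit && hit.expires > Date.now()) return hit.value ?? fallback;

  let value: string | null;
  try {
    // Key format is an assumption for illustration.
    value = await redis.get(`modelkit:override:${featureId}`);
  } catch {
    // Redis unavailable: degrade to the fallback so the app keeps running.
    return fallback;
  }

  cache.set(featureId, { value, expires: Date.now() + CACHE_TTL_MS });
  return value ?? fallback;
}
```

Caching the Redis result (including "no override") keeps steady-state lookups off the network, and the try/catch is what makes a Redis outage degrade to the fallback instead of an error.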
Studio is a visual interface for your overrides: list, set, or clear them and view live SDK snippets. You mount it in your app and point it at the route that exposes the ModelKit API (e.g. /api/modelkit). Supported backends: Hono, Express.
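A hypothetical wiring sketch for the Express backend. The export names modelkitApi and modelkitStudio and the package name are assumptions, not documented API; only the /api/modelkit route comes from the text above.

```typescript
// Hypothetical wiring sketch; export and package names are assumed, not documented.
import express from "express";
import { modelkitApi, modelkitStudio } from "modelkit"; // assumed exports

const app = express();

// Expose the ModelKit API (the route Studio talks to):
app.use("/api/modelkit", modelkitApi({ redisUrl: process.env.REDIS_URL }));

// Mount Studio and point it at that route:
app.use("/studio", modelkitStudio({ apiPath: "/api/modelkit" }));

app.listen(3000);
```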