The meta-framework for LangGraph. Author agents and workflows as filesystem routes, get types and a local dev server for free, and ship to LangSmith with one command.
- **Kill the graph boilerplate.** Export one `agent({ model, systemPrompt })` descriptor. Dawn discovers it, binds route-local tools, and emits a `langgraph.json` package ready for LangSmith.
- **Real project structure.** Filesystem routes under `src/app/` — colocate state schemas, tools, middleware, and tests next to the route they belong to. No more ad-hoc folders.
- **A local dev loop LangGraph never shipped.** `dawn dev` runs your routes locally with the same semantics as production. Iterate in seconds, not deploys.
- **Typed end to end.** Route params, state, and tool I/O are generated as TypeScript types. `dawn verify` is your pre-deploy gate.
Same `langgraph.json`, deployable to LangSmith. ~4× less code to author.
```ts
// graph.ts
import { StateGraph, MessagesAnnotation, START, END } from "@langchain/langgraph"
import { ToolNode } from "@langchain/langgraph/prebuilt"
import { ChatOpenAI } from "@langchain/openai"
import { isAIMessage } from "@langchain/core/messages"
import { tool } from "@langchain/core/tools"
import { z } from "zod"

const greet = tool(async ({ name }) => `Hello, ${name}!`, {
  name: "greet",
  description: "Greet a user by name.",
  schema: z.object({ name: z.string() }),
})

const model = new ChatOpenAI({ model: "gpt-4o-mini" }).bindTools([greet])
const tools = new ToolNode([greet])

async function callModel(state: typeof MessagesAnnotation.State) {
  return { messages: [await model.invoke(state.messages)] }
}

function shouldContinue(state: typeof MessagesAnnotation.State) {
  const last = state.messages.at(-1)
  return last && isAIMessage(last) && last.tool_calls?.length ? "tools" : END
}

export const graph = new StateGraph(MessagesAnnotation)
  .addNode("agent", callModel)
  .addNode("tools", tools)
  .addEdge(START, "agent")
  .addConditionalEdges("agent", shouldContinue, ["tools", END])
  .addEdge("tools", "agent")
  .compile()
```

```jsonc
// langgraph.json
{
  "dependencies": ["."],
  "graphs": { "hello": "./graph.ts:graph" },
  "node_version": "22",
  "env": ".env"
}
```

```ts
// src/app/(public)/hello/[tenant]/index.ts
import { agent } from "@dawn-ai/sdk"

export default agent({
  model: "gpt-4o-mini",
  systemPrompt: "You are a helpful assistant for the {tenant} organization.",
})
```

```ts
// src/app/(public)/hello/[tenant]/tools/greet.ts
export default async ({ name }: { name: string }) => `Hello, ${name}!`
```

`dawn build` emits the `langgraph.json` for you.
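For comparison with the hand-written manifest above, the package `dawn build` emits could look something like the following. This is purely illustrative: the output location, the generated entrypoint path, and the route-derived graph name are assumptions, not documented build output.

```jsonc
// .dawn/build/langgraph.json — illustrative sketch, not actual dawn build output
{
  "dependencies": ["."],
  "graphs": {
    // graph id assumed to be derived from the route path
    "hello-tenant": "./routes/hello.tenant.ts:graph"
  },
  "node_version": "22",
  "env": ".env"
}
```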
- Create a new app.

  ```sh
  pnpm create dawn-ai-app my-dawn-app
  cd my-dawn-app
  pnpm install
  ```

- Validate the app and generate types in one call.

  ```sh
  pnpm exec dawn verify
  ```

- Run the scaffolded route. The route path must be quoted because it contains `(`, `)`, and `[]`.

  ```sh
  echo '{"tenant":"acme"}' | pnpm exec dawn run "src/app/(public)/hello/[tenant]"
  ```

- Optionally start the local runtime in one terminal and send the same route through `--url` from another terminal.

  ```sh
  pnpm exec dawn dev --port 3001
  echo '{"tenant":"acme"}' | pnpm exec dawn run "src/app/(public)/hello/[tenant]" --url http://127.0.0.1:3001
  ```

Dawn routes live under `src/app` and export one runtime entry. New agent routes should use the `agent()` descriptor from `@dawn-ai/sdk`; Dawn discovers the route, binds route-local tools, generates types, and produces a `langgraph.json` package for LangSmith.
```ts
import { agent } from "@dawn-ai/sdk"

export default agent({
  model: "gpt-4o-mini",
  systemPrompt: "You are a helpful assistant for the {tenant} organization.",
  retry: { maxAttempts: 3, baseDelay: 250 },
})
```

Add `state.ts` for a route state schema, `tools/*.ts` for route-local tools, `middleware.ts` for access control, and `run.test.ts` for colocated scenarios.
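A route-local tool follows the same shape as the `greet` example earlier: a default-exported async function whose parameter type drives the generated tool input type. As a sketch (the file name and logic here are illustrative, not part of the scaffold):

```typescript
// src/app/(public)/hello/[tenant]/tools/farewell.ts
// Hypothetical route-local tool; the typed parameter object becomes
// the tool's input schema when Dawn generates types for the route.
type Input = { name: string; formal?: boolean }

const farewell = async ({ name, formal }: Input): Promise<string> =>
  formal ? `Goodbye, ${name}.` : `See you, ${name}!`

export default farewell
```

Dropping the file into the route's `tools/` directory is all the wiring there is; the agent at `index.ts` picks it up by location, not by registration code.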
Contributions welcome — see CONTRIBUTING.md. Repo layout and dev commands in CONTRIBUTORS.md. Security: SECURITY.md. Please follow the Code of Conduct.
MIT. See LICENSE.

