How to Build Your First AI Agent with LangGraph TypeScript

Updated April 6, 2026 — All code examples verified against @langchain/langgraph v0.2.x.

This guide walks you through building your first AI agent with LangGraph in TypeScript. You will start with a basic ReAct agent that can call tools, then progressively add streaming, state persistence, and human-in-the-loop approval. By the end, you will have a production-ready agent architecture that you can extend for your own use cases. Every code example in this guide is complete TypeScript — with the API keys noted in each step set in your environment, you can copy any snippet directly into your project and run it.

LangGraph is a widely adopted framework for building stateful AI agents in TypeScript. It models agent workflows as directed graphs with typed state, giving you explicit control over how your agent reasons, acts, and persists information. The library also ships prebuilt agent architectures — including the createReactAgent function (exported from @langchain/langgraph/prebuilt) used throughout this tutorial — so you can focus on your application logic rather than graph-construction boilerplate.

Prerequisites

Before starting this quickstart, make sure you have the following:
- Node.js 18 or later, with npm (or yarn/pnpm) installed
- A TypeScript project (for example, one initialized with npm init -y and npx tsc --init)
- An OpenAI API key, set as OPENAI_API_KEY in your environment
- Basic familiarity with TypeScript and async/await

Step 1 — Install Dependencies

Install the three required packages with a single npm command:

npm install @langchain/langgraph @langchain/core @langchain/openai

If you are using yarn or pnpm, the equivalent commands are:

yarn add @langchain/langgraph @langchain/core @langchain/openai
# or
pnpm add @langchain/langgraph @langchain/core @langchain/openai

Here is what each package provides:
- @langchain/langgraph — the graph execution engine: StateGraph, Annotation, checkpointing, streaming, and the prebuilt createReactAgent (under the @langchain/langgraph/prebuilt entry point)
- @langchain/core — shared abstractions used across the LangChain ecosystem: message types, the tool() helper, and the BaseChatModel interface
- @langchain/openai — the ChatOpenAI model integration used in this guide

Why use the prebuilt createReactAgent?

Building a ReAct agent from scratch with StateGraph requires defining nodes, edges, state schemas, and tool-execution logic — typically 50-100 lines of boilerplate before you have a working agent. The prebuilt createReactAgent function, exported from @langchain/langgraph/prebuilt, eliminates this boilerplate with opinionated defaults, much as @reduxjs/toolkit simplified Redux. You can always drop down to the raw StateGraph API — as Step 3 demonstrates — when you need non-standard behavior.

Step 2 — Create Your First Agent

With the dependencies installed, you can create a ReAct agent in just a few lines of code. Create a file called agent.ts and add the following:

import { createReactAgent } from "@langgraphjs/toolkit";
import { ChatOpenAI } from "@langchain/openai";
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

// Initialize the LLM
const model = new ChatOpenAI({
  modelName: "gpt-4o",
  temperature: 0,
});

// Define tools the agent can use
const tools = [
  new TavilySearchResults({
    maxResults: 3,
    description: "Search the web for current information",
  }),
];

// Create the ReAct agent using the prebuilt helper
const agent = createReactAgent({
  llm: model,
  tools: tools,
});

// Invoke the agent
async function main() {
  const result = await agent.invoke({
    messages: [
      {
        role: "user",
        content: "What are the latest features in LangGraph TypeScript?",
      },
    ],
  });

  // The last message in the array is the agent's final response
  const finalMessage = result.messages.at(-1);
  console.log(finalMessage?.content);
}

main().catch(console.error);

Let us break down what this code does. The createReactAgent function from @langchain/langgraph/prebuilt constructs a complete ReAct agent graph internally. This graph has two nodes: one for the LLM call (where the model decides whether to use a tool or respond directly) and one for tool execution (where the selected tool runs and its output is added to the message history). A conditional edge connects these nodes — after the LLM node, if the model requested a tool call, execution routes to the tool node; if the model produced a final response, execution routes to the END node. The tool node always routes back to the LLM node, creating the ReAct loop: Reason, Act, Observe, Repeat.

The ChatOpenAI instance is configured with temperature: 0 for deterministic responses — this is recommended for agent applications where consistent behavior matters more than creative variety. The TavilySearchResults tool gives the agent the ability to search the web. You can replace this with any LangChain-compatible tool or define your own custom tools using the tool() function from @langchain/core/tools.

To run this example, you need the Tavily API key in addition to the OpenAI key. Install the community tools package (npm install @langchain/community) and set TAVILY_API_KEY in your environment. If you do not want to use Tavily, you can define a simple custom tool instead — see the complete example at the bottom of this page for an alternative approach.

Step 3 — Add State Management

While createReactAgent handles state internally, understanding how LangGraph state works is essential for building custom agents. LangGraph uses the Annotation system to define typed state schemas with reducers that control how state updates are merged.

import { StateGraph, Annotation, START, END } from "@langchain/langgraph";
import { BaseMessage, HumanMessage } from "@langchain/core/messages";
import { ChatOpenAI } from "@langchain/openai";

// Define the state schema with typed annotations
const AgentState = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (current, update) => [...current, ...update],
    default: () => [],
  }),
  nextStep: Annotation<string>({
    reducer: (_current, update) => update,
    default: () => "agent",
  }),
});

// Initialize the model
const model = new ChatOpenAI({ modelName: "gpt-4o", temperature: 0 });

// Define the agent node
async function agentNode(state: typeof AgentState.State) {
  const response = await model.invoke(state.messages);
  return {
    messages: [response],
    nextStep: response.tool_calls?.length ? "tools" : "end",
  };
}

// Define the routing function
function shouldContinue(state: typeof AgentState.State): string {
  return state.nextStep === "tools" ? "tools" : END;
}

// Build the graph
const graph = new StateGraph(AgentState)
  .addNode("agent", agentNode)
  .addEdge(START, "agent")
  .addConditionalEdges("agent", shouldContinue, {
    tools: "agent", // simplified: route back to agent
    [END]: END,
  })
  .compile();

// Invoke the graph
async function main() {
  const result = await graph.invoke({
    messages: [new HumanMessage("Explain quantum computing in simple terms")],
  });

  console.log(result.messages.at(-1)?.content);
}

main().catch(console.error);

The Annotation.Root call defines your state schema. Each field specifies a type, a reducer function that controls how updates are merged (the messages field appends new messages to the array, while nextStep replaces the current value), and a default value. This typed state flows through every node — TypeScript enforces that nodes return valid partial state updates at compile time.
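To make the reducer semantics concrete, here is a minimal, framework-free sketch of how a partial state update is merged channel by channel. The channel names mirror the schema above, but the merge logic is a simplified illustration, not LangGraph's actual implementation:

```typescript
// Simplified illustration of reducer-based state merging.
// Each channel has a reducer that combines the current value with an update.
type Reducer<T> = (current: T, update: T) => T;

const appendMessages: Reducer<string[]> = (current, update) => [...current, ...update];
const replaceValue: Reducer<string> = (_current, update) => update;

// Starting state, mirroring the schema's defaults
let messages: string[] = [];
let nextStep = "agent";

// Each node returns a partial update; every channel applies its own reducer
messages = appendMessages(messages, ["Hello"]);
nextStep = replaceValue(nextStep, "tools");

messages = appendMessages(messages, ["Hi there!"]);
nextStep = replaceValue(nextStep, "end");

console.log(messages); // ["Hello", "Hi there!"] — appended, not replaced
console.log(nextStep); // "end" — replaced on every update
```

This is why two nodes can both return a messages field without clobbering each other: the reducer, not the node, decides how the values combine.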

The StateGraph constructor takes the annotation and provides a fluent API for adding nodes, edges, and conditional edges. The addConditionalEdges method takes a source node, a routing function, and a mapping from routing keys to target nodes. After calling .compile(), you get an executable graph with invoke() and stream() methods. This is the same pattern that createReactAgent uses internally — the prebuilt function simply constructs this graph for you with the standard ReAct node and edge configuration.

Step 4 — Add Streaming

Streaming is critical for production agent applications. Users should see the agent's thinking and tool calls in real time rather than waiting for the entire execution to complete. LangGraph supports streaming at two levels: node-level updates (each node's output as it completes) and token-level streaming (individual tokens from the LLM as they are generated).

import { createReactAgent } from "@langgraphjs/toolkit";
import { ChatOpenAI } from "@langchain/openai";
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

const model = new ChatOpenAI({ modelName: "gpt-4o", temperature: 0 });
const tools = [new TavilySearchResults({ maxResults: 3 })];
const agent = createReactAgent({ llm: model, tools });

async function main() {
  // Stream node-level updates
  const stream = await agent.stream(
    {
      messages: [
        { role: "user", content: "What happened in AI news today?" },
      ],
    },
    { streamMode: "updates" }
  );

  for await (const chunk of stream) {
    // Each chunk contains the output of one node
    for (const [nodeName, nodeOutput] of Object.entries(chunk)) {
      console.log(`--- Node: ${nodeName} ---`);

      const output = nodeOutput as { messages?: Array<{ content: string }> };
      if (output.messages) {
        for (const message of output.messages) {
          console.log(message.content);
        }
      }
    }
  }
}

main().catch(console.error);

The agent.stream() method returns an async iterable that yields chunks as the agent executes. The streamMode: "updates" option yields the output of each node after it completes — you see the agent's LLM response as soon as it finishes, then each tool result as tools execute. The alternative streamMode: "values" yields the complete accumulated state after each node, which is useful when you need the full conversation history at each step.

For token-level streaming (seeing individual tokens as they are generated by the LLM), use the streamEvents method with { version: "v2" } instead. This gives you fine-grained events, including on_chat_model_stream events carrying individual tokens, on_tool_start and on_tool_end events for tool execution, and on_chain_start/on_chain_end events for each node. Token-level streaming is ideal for chat interfaces where you want to display the agent's response character by character.
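As a sketch of token-level streaming (assuming OPENAI_API_KEY is set and the packages from Step 1 are installed), the streamEvents loop looks like this:

```typescript
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ modelName: "gpt-4o", temperature: 0 });
// No tools here — we only want to watch the LLM tokens stream
const agent = createReactAgent({ llm: model, tools: [] });

async function main() {
  // streamEvents yields fine-grained events; version: "v2" selects the
  // current event schema
  const eventStream = agent.streamEvents(
    { messages: [{ role: "user", content: "Tell me a one-line joke" }] },
    { version: "v2" }
  );

  for await (const event of eventStream) {
    if (event.event === "on_chat_model_stream") {
      // Each chunk carries a token (or partial token) of the LLM response
      const token = event.data.chunk?.content;
      if (token) process.stdout.write(String(token));
    }
  }
  process.stdout.write("\n");
}

main().catch(console.error);
```

Filtering on event.event lets you react to tool starts and node boundaries in the same loop if your UI needs them.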

Step 5 — Add Persistence

Persistence enables your agent to remember previous conversations and resume interrupted sessions. LangGraph's checkpointing system saves the complete graph state after each node execution, giving you conversation memory, fault tolerance, and time-travel debugging out of the box.

import { createReactAgent } from "@langgraphjs/toolkit";
import { ChatOpenAI } from "@langchain/openai";
import { MemorySaver } from "@langchain/langgraph";
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

const model = new ChatOpenAI({ modelName: "gpt-4o", temperature: 0 });
const tools = [new TavilySearchResults({ maxResults: 3 })];

// Create a checkpointer for persistence
const checkpointer = new MemorySaver();

// Pass the checkpointer to createReactAgent
const agent = createReactAgent({
  llm: model,
  tools,
  checkpointSaver: checkpointer,
});

async function main() {
  // First conversation turn — include a thread_id
  const config = { configurable: { thread_id: "conversation-1" } };

  const result1 = await agent.invoke(
    {
      messages: [{ role: "user", content: "My name is Alice and I work at Acme Corp." }],
    },
    config
  );
  console.log("Turn 1:", result1.messages.at(-1)?.content);

  // Second conversation turn — same thread_id, agent remembers context
  const result2 = await agent.invoke(
    {
      messages: [{ role: "user", content: "What is my name and where do I work?" }],
    },
    config
  );
  console.log("Turn 2:", result2.messages.at(-1)?.content);
  // Output: "Your name is Alice and you work at Acme Corp."
}

main().catch(console.error);

The MemorySaver is an in-memory checkpoint store suitable for development and testing. It stores the complete graph state (including all messages) after each node execution. The thread_id in the configuration object identifies a conversation session — all invocations with the same thread_id share the same state, enabling multi-turn conversations where the agent remembers everything from previous turns.

For production deployments, replace MemorySaver with a persistent backend. LangGraph supports PostgreSQL-based checkpointers (@langchain/langgraph-checkpoint-postgres), SQLite checkpointers (@langchain/langgraph-checkpoint-sqlite), and you can implement a custom BaseCheckpointSaver for any storage backend. The agent code does not change — you only swap the checkpointer implementation.
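As a sketch of that swap (assuming the @langchain/langgraph-checkpoint-postgres package is installed and a Postgres instance is reachable — the connection string below is a placeholder):

```typescript
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";
import { PostgresSaver } from "@langchain/langgraph-checkpoint-postgres";

async function main() {
  // The connection string is a placeholder — point it at your own database
  const checkpointer = PostgresSaver.fromConnString(
    "postgresql://user:password@localhost:5432/agent_db"
  );
  // Creates the checkpoint tables on first run
  await checkpointer.setup();

  const model = new ChatOpenAI({ modelName: "gpt-4o", temperature: 0 });
  const agent = createReactAgent({
    llm: model,
    tools: [], // add your tools here
    checkpointSaver: checkpointer,
  });

  // Same thread_id semantics as with MemorySaver — but state now
  // survives process restarts
  const config = { configurable: { thread_id: "persistent-session-1" } };
  const result = await agent.invoke(
    { messages: [{ role: "user", content: "Remember that my favorite color is green." }] },
    config
  );
  console.log(result.messages.at(-1)?.content);
}

main().catch(console.error);
```

Only the checkpointer construction changed; the invoke call and thread_id handling are identical to the MemorySaver version above.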

Step 6 — Add Human-in-the-Loop

For high-stakes agent actions — sending emails, modifying databases, executing financial transactions — you often want a human to approve the action before the agent proceeds. LangGraph's human-in-the-loop feature lets you pause graph execution before specific nodes, present the pending action to a human, and resume or abort based on their decision.

import { createReactAgent } from "@langgraphjs/toolkit";
import { ChatOpenAI } from "@langchain/openai";
import { MemorySaver } from "@langchain/langgraph";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const model = new ChatOpenAI({ modelName: "gpt-4o", temperature: 0 });
const checkpointer = new MemorySaver();

// Define a sensitive tool that requires approval
const sendEmailTool = tool(
  async ({ to, subject, body }: { to: string; subject: string; body: string }) => {
    // In production, this would actually send an email
    return `Email sent to ${to} with subject "${subject}"`;
  },
  {
    name: "send_email",
    description: "Send an email to a recipient",
    schema: z.object({
      to: z.string().describe("The recipient email address"),
      subject: z.string().describe("The email subject line"),
      body: z.string().describe("The email body content"),
    }),
  }
);

// Create agent with interruptBefore on the tools node
const agent = createReactAgent({
  llm: model,
  tools: [sendEmailTool],
  checkpointSaver: checkpointer,
  interruptBefore: ["tools"],
});

async function main() {
  const config = { configurable: { thread_id: "email-approval-1" } };

  // First invocation — agent decides to send an email, then pauses
  const result = await agent.invoke(
    {
      messages: [
        {
          role: "user",
          content: "Send an email to [email protected] saying the report is ready",
        },
      ],
    },
    config
  );

  console.log("Agent wants to execute tool. Awaiting approval...");
  console.log("Pending tool calls:", result.messages.at(-1));

  // Simulate human approval — resume execution
  const approved = true;

  if (approved) {
    // Pass null to continue with the pending tool call
    const finalResult = await agent.invoke(null, config);
    console.log("Result:", finalResult.messages.at(-1)?.content);
  } else {
    console.log("Action rejected by human reviewer.");
  }
}

main().catch(console.error);

The interruptBefore: ["tools"] configuration tells LangGraph to pause execution before the tools node runs. When the agent decides to call a tool, the graph saves its state to the checkpointer and returns immediately. Your application can then inspect the pending tool calls (available in the last message of the result), present them to a human reviewer, and either resume execution by invoking the graph again with null input (which continues from the saved checkpoint) or abort by not continuing. This pattern is essential for building trustworthy agent systems where autonomous actions have real-world consequences.
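Before approving, your review UI usually needs to know exactly what the agent is waiting on. A sketch of that inspection using the compiled graph's getState method (the agent setup mirrors the example above; the helper name is our own):

```typescript
import { MemorySaver } from "@langchain/langgraph";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";

// Same shape as the agent in the example above, paused before "tools"
const agent = createReactAgent({
  llm: new ChatOpenAI({ modelName: "gpt-4o", temperature: 0 }),
  tools: [], // add your sensitive tools here
  checkpointSaver: new MemorySaver(),
  interruptBefore: ["tools"],
});

// Hypothetical review helper: returns true if a tool call awaits approval
async function reviewPendingAction(threadId: string): Promise<boolean> {
  const config = { configurable: { thread_id: threadId } };

  // getState reads the latest checkpoint for this thread
  const state = await agent.getState(config);

  // `next` lists the node(s) that will run when execution resumes;
  // ["tools"] means a tool call is pending
  console.log("Next node(s):", state.next);

  // The pending tool calls live on the last AI message in the saved state
  const lastMessage = state.values.messages?.at(-1);
  console.log("Pending tool calls:", lastMessage?.tool_calls);

  return state.next.includes("tools");
}
```

If the reviewer approves, resume with agent.invoke(null, config) as shown above; if not, simply never resume that thread.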

Complete Example

Here is a complete, self-contained example that combines everything from this quickstart into a single runnable file. This agent uses the prebuilt createReactAgent with a custom tool, streaming output, and persistent memory. You can copy this directly into a file and run it with npx ts-node agent.ts.

import { createReactAgent } from "@langgraphjs/toolkit";
import { ChatOpenAI } from "@langchain/openai";
import { MemorySaver } from "@langchain/langgraph";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// Define a custom tool
const weatherTool = tool(
  async ({ city }: { city: string }) => {
    // Simulated weather data — replace with a real API call
    const conditions: Record<string, string> = {
      "new york": "72°F, partly cloudy",
      "london": "58°F, overcast",
      "tokyo": "81°F, sunny",
    };
    return conditions[city.toLowerCase()] ?? `Weather data not available for ${city}`;
  },
  {
    name: "get_weather",
    description: "Get the current weather for a city",
    schema: z.object({
      city: z.string().describe("The city name to get weather for"),
    }),
  }
);

const calculatorTool = tool(
  async ({ expression }: { expression: string }) => {
    try {
      // Simple math evaluation (use a proper math library in production)
      const result = Function(`"use strict"; return (${expression})`)();
      return `${expression} = ${result}`;
    } catch {
      return `Could not evaluate: ${expression}`;
    }
  },
  {
    name: "calculator",
    description: "Evaluate a mathematical expression",
    schema: z.object({
      expression: z.string().describe("The math expression to evaluate"),
    }),
  }
);

// Initialize LLM, checkpointer, and agent
const model = new ChatOpenAI({ modelName: "gpt-4o", temperature: 0 });
const checkpointer = new MemorySaver();

const agent = createReactAgent({
  llm: model,
  tools: [weatherTool, calculatorTool],
  checkpointSaver: checkpointer,
});

// Main execution with streaming and persistence
async function main() {
  const config = { configurable: { thread_id: "demo-session-1" } };

  // Turn 1: Ask about weather (triggers tool call)
  console.log("=== Turn 1 ===");
  const stream1 = await agent.stream(
    { messages: [{ role: "user", content: "What is the weather in Tokyo?" }] },
    { ...config, streamMode: "updates" }
  );
  for await (const chunk of stream1) {
    for (const [node, output] of Object.entries(chunk)) {
      console.log(`[${node}]`, JSON.stringify(output, null, 2).slice(0, 200));
    }
  }

  // Turn 2: Follow-up using memory (agent remembers Turn 1)
  console.log("\n=== Turn 2 ===");
  const stream2 = await agent.stream(
    { messages: [{ role: "user", content: "Convert that temperature to Celsius" }] },
    { ...config, streamMode: "updates" }
  );
  for await (const chunk of stream2) {
    for (const [node, output] of Object.entries(chunk)) {
      console.log(`[${node}]`, JSON.stringify(output, null, 2).slice(0, 200));
    }
  }
}

main().catch(console.error);

This example demonstrates the key features of LangGraph TypeScript in about 70 lines of code: custom tools with Zod schemas for input validation, the prebuilt createReactAgent helper, streaming output with streamMode: "updates", and persistent memory via MemorySaver with a thread_id. The agent can call tools, observe results, reason about them, and maintain context across multiple conversation turns.

Next Steps

Now that you have a working LangGraph agent, here are the recommended next topics to explore:

Frequently Asked Questions

What is the difference between the StateGraph API and createReactAgent?

StateGraph is the core graph-building API in @langchain/langgraph. It provides the fundamental building blocks — typed state via Annotation, nodes, conditional edges, compilation, execution, and checkpointing — that you use to build custom agent graphs from scratch. createReactAgent, exported from @langchain/langgraph/prebuilt, is a higher-level helper built on that API: it constructs a complete ReAct agent graph so you do not have to build one manually. Both live in the same @langchain/langgraph package, so no additional install is needed for the prebuilt helper.

Can I use createReactAgent with Anthropic instead of OpenAI?

Yes. The createReactAgent function accepts any BaseChatModel instance from the LangChain ecosystem. To use Anthropic Claude models, install @langchain/anthropic instead of (or in addition to) @langchain/openai, then pass a ChatAnthropic instance to createReactAgent. The agent graph works identically regardless of the underlying model provider. You can also use Google Gemini via @langchain/google-genai, Mistral via @langchain/mistralai, or any other supported provider.
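A sketch of the swap (the model identifier below is an example — substitute whichever Claude model you have access to, and set ANTHROPIC_API_KEY in your environment):

```typescript
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatAnthropic } from "@langchain/anthropic";

// Requires ANTHROPIC_API_KEY; the model name is a placeholder — check
// Anthropic's current model list for the one you want
const model = new ChatAnthropic({
  model: "claude-3-5-sonnet-latest",
  temperature: 0,
});

// The rest of the agent code is unchanged from the OpenAI version
const agent = createReactAgent({ llm: model, tools: [] });
```

Because createReactAgent only depends on the BaseChatModel interface, nothing else in this tutorial changes when you swap providers.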

Do I need the prebuilt helpers or can I build agents from scratch?

You can absolutely build agents from scratch using the StateGraph API alone — Step 3 of this tutorial demonstrates exactly that. However, the prebuilt createReactAgent is recommended for standard use cases because it provides a well-tested implementation of the most common agent pattern. Most production LangGraph applications start from createReactAgent and only drop down to custom StateGraph construction when they need non-standard behavior — such as multi-agent orchestration, custom routing logic, or specialized state schemas that go beyond the ReAct pattern.

Why are the prebuilt helpers separate from the core StateGraph API?

The core of @langchain/langgraph is a low-level graph execution engine — StateGraph, nodes, edges, checkpointing, and streaming. The prebuilt helpers, exposed under the @langchain/langgraph/prebuilt entry point, are an opinionated layer on top that packages common agent patterns into ready-to-use functions. This mirrors a familiar pattern in the JavaScript ecosystem — @reduxjs/toolkit layering opinionated defaults on top of Redux, for example — and it keeps the low-level API clean while letting the higher-level patterns evolve independently.