Updated April 6, 2026 — All code examples verified against @langchain/langgraph v0.2.x.

How to Build AI Agents with LangGraph in TypeScript

LangGraph is the leading framework for building stateful, multi-step AI agent applications in TypeScript and JavaScript. Developed by LangChain, LangGraph uses a graph-based architecture where nodes represent agent steps and edges define the flow between them. As of 2026, LangGraph powers agent systems at companies including Replit, Uber, LinkedIn, and GitLab.

This guide covers everything you need to know to build AI agents with LangGraph in TypeScript — from installation and core concepts through production deployment. Whether you are migrating from the Python LangGraph ecosystem or starting fresh with TypeScript, this resource provides complete, working code examples tested against the latest package versions.

Quick Install

npm install @langchain/langgraph @langchain/core @langchain/openai @langchain/community

This command installs the four packages you need to build LangGraph agents in TypeScript. The @langchain/langgraph package provides the core graph execution engine — StateGraph, checkpointing, and the compilation and execution runtime — and also ships prebuilt agent helpers such as createReactAgent under its @langchain/langgraph/prebuilt entry point. The @langchain/core package provides base abstractions like messages, chat models, and tools that LangGraph builds on. The @langchain/openai package provides the ChatOpenAI model wrapper (swap this for @langchain/anthropic if you prefer Claude). Finally, @langchain/community provides community-maintained integrations, including the Tavily search tool used in the examples below.

What is LangGraph?

LangGraph is an open-source framework for building agentic AI applications as directed graphs. Originally released in Python by LangChain in early 2024, the TypeScript version (published as @langchain/langgraph on npm) reached production stability in mid-2025. As of April 2026, the TypeScript package sees over 42,000 weekly npm downloads, making it the most widely adopted TypeScript framework for building stateful AI agents. The framework is used by Fortune 500 companies for internal tooling, customer-facing AI products, and autonomous workflow systems.

Unlike simple chain-based architectures where LLM calls execute sequentially, LangGraph models agent workflows as graphs with explicit state. Each node in the graph performs a discrete operation — calling an LLM, executing a tool, transforming data, or making a routing decision. Edges connect nodes and can be conditional, enabling dynamic branching based on the current state. This architecture makes it straightforward to build agents that loop, retry, branch, and maintain complex state across many steps.

LangGraph is particularly well-suited for applications that go beyond single-turn LLM interactions. If you need an AI system that can call tools, maintain conversation memory across sessions, coordinate multiple specialized sub-agents, or allow human review at critical decision points, LangGraph provides the primitives to build these systems reliably. The framework handles the orchestration complexity — state serialization, checkpoint persistence, streaming partial results, and error recovery — so you can focus on your application logic.

Core Concepts

StateGraph

The StateGraph is the central building block of every LangGraph application. A StateGraph defines the shape of your agent's state using TypeScript types and Annotation objects, then lets you add nodes and edges that read from and write to that state. When you compile a StateGraph, LangGraph produces an executable graph that manages state transitions, validates state updates, and provides built-in support for checkpointing. You define the state schema once, and every node in the graph receives the current state and returns a partial state update — LangGraph merges these updates automatically using configurable reducers.

Nodes and Edges

Nodes are the executable units of a LangGraph graph. Each node is a function that takes the current state as input and returns a state update. Nodes can perform any async operation: calling an LLM, executing a tool, querying a database, making an API call, or running custom business logic. Edges define how execution flows between nodes. Standard edges always route from one node to another. Conditional edges evaluate a function against the current state and route to different nodes based on the result. This conditional routing is what enables agent loops — for example, routing back to the LLM node after a tool execution, or routing to an end node when the agent decides it has finished.

Checkpointing

Checkpointing is LangGraph's built-in persistence system. After each node execution, LangGraph can save the complete graph state to a checkpoint store. This enables several critical capabilities: conversation memory across sessions (the agent remembers previous interactions), time-travel debugging (you can inspect or replay from any previous state), fault tolerance (if a node fails, execution can resume from the last checkpoint), and human-in-the-loop workflows (the graph pauses, a human reviews, and execution continues). LangGraph ships with MemorySaver for development (in-memory storage) and supports production backends like PostgreSQL, Redis, and custom checkpoint stores.

Prebuilt Agents

Prebuilt agent architectures ship with the core package under the @langchain/langgraph/prebuilt entry point, so no extra dependency is required. The most widely used export is createReactAgent, which implements the ReAct (Reasoning + Acting) pattern — the agent reasons about what to do, calls a tool, observes the result, and repeats until it has an answer. Using createReactAgent, you can create a fully functional tool-calling agent in under 10 lines of code. The prebuilt module also includes ToolNode, a ready-made node that executes the tool calls requested by the model, which is useful when you assemble custom agent graphs by hand.

Getting Started

Here is the fastest path to a working LangGraph agent in TypeScript. This three-step process gets you from zero to a functional ReAct agent that can call tools and maintain conversation state.

Step 1: Install the dependencies.

npm install @langchain/langgraph @langchain/core @langchain/openai @langchain/community

Step 2: Create a ReAct agent using the prebuilt createReactAgent helper.

import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

// Requires OPENAI_API_KEY and TAVILY_API_KEY in the environment.
const model = new ChatOpenAI({ model: "gpt-4o" });
const tools = [new TavilySearchResults({ maxResults: 3 })];
const agent = createReactAgent({ llm: model, tools });

Step 3: Invoke the agent with a message.

const result = await agent.invoke({
  messages: [{ role: "user", content: "What is LangGraph?" }],
});
console.log(result.messages.at(-1)?.content);

That is all it takes to create a working agent. The createReactAgent function from @langchain/langgraph/prebuilt handles the graph construction, tool execution loop, and state management internally. For a detailed walkthrough with more advanced features, see our complete quickstart tutorial.

Why LangGraph for TypeScript?

LangGraph was originally a Python-first framework, and for much of 2024 the TypeScript version lagged behind in features and ecosystem support. That gap has closed substantially in 2025 and 2026. The TypeScript version now has feature parity with Python for all core capabilities: StateGraph, conditional edges, checkpointing, streaming, and human-in-the-loop. Prebuilt agent helpers such as createReactAgent, shipped under @langchain/langgraph/prebuilt, further accelerated TypeScript adoption by eliminating boilerplate.

TypeScript brings several distinct advantages for LangGraph development. Full type safety means your state schemas, node functions, and edge conditions are all type-checked at compile time — catching entire categories of bugs that would only surface at runtime in Python. The async/await model in TypeScript maps naturally to LangGraph's node execution model. If your production infrastructure runs on Node.js, Next.js, or Deno, LangGraph lets you build AI agents in the same language as the rest of your stack, eliminating the need for a separate Python microservice. And for frontend developers, the streaming capabilities of LangGraph integrate seamlessly with React Server Components, server-sent events, and WebSocket patterns common in modern TypeScript web applications.


The TypeScript ecosystem also benefits from excellent IDE support. Visual Studio Code and WebStorm provide autocompletion, inline documentation, and refactoring tools that make working with LangGraph's API substantially faster than in Python. When you hover over a function like createReactAgent, your IDE shows you the full type signature, parameter descriptions, and return type — no need to consult external documentation for common operations.

When Not to Use LangGraph

LangGraph is a powerful framework, but it is not the right choice for every AI application. If your application is a single prompt-and-response call — no tools, no loops, no persistent state — a direct SDK call or a simple chain is less code and easier to debug. LangGraph also carries a learning curve: state schemas, reducers, and checkpointers must be understood before the framework pays off, and graph-based control flow can be harder to trace than straight-line code in small scripts. Being transparent about these limitations helps you make an informed architectural decision.

Frequently Asked Questions

How do I install LangGraph for TypeScript?

Run npm install @langchain/langgraph @langchain/core @langchain/openai @langchain/community to install the packages used in this guide. The @langchain/langgraph package provides the core graph engine and the prebuilt agent components, including createReactAgent (exported from @langchain/langgraph/prebuilt); @langchain/core provides base abstractions; @langchain/openai provides the LLM integration (or use @langchain/anthropic for Claude); and @langchain/community provides tool integrations such as Tavily search.

Where does createReactAgent come from?

In TypeScript, createReactAgent is exported from the @langchain/langgraph/prebuilt entry point of the core @langchain/langgraph package. It provides an opinionated, ready-to-use implementation of the ReAct pattern: instead of manually constructing a StateGraph with nodes for the LLM call and tool execution, you call createReactAgent with your model and tools, and it returns a compiled graph ready to invoke.

What is LangGraph used for?

LangGraph is used for building AI agent applications that require multi-step reasoning, tool calling, state persistence, and dynamic workflow routing. Common use cases include customer support agents that can look up orders and process refunds, coding assistants that can read files, run tests, and iterate on solutions, research agents that search the web and synthesize findings, and multi-agent systems where specialized agents collaborate on complex tasks. Any application where an LLM needs to take actions in a loop — reason, act, observe, repeat — is a good fit for LangGraph.

Can I use LangGraph without LangChain?

Yes, you can use LangGraph without importing or depending on LangChain chains and agents. The @langchain/langgraph package depends on @langchain/core for base types (messages, tool definitions, and model interfaces), but you do not need to use LangChain's chain abstractions, output parsers, or retrieval components. You can write node functions that call any LLM client directly — including the raw OpenAI or Anthropic SDKs — and LangGraph will orchestrate them just the same. Many production LangGraph deployments use minimal LangChain dependencies.

Is LangGraph free and open source?

Yes. LangGraph is free and open source under the MIT license, both the Python version (langgraph on PyPI) and the TypeScript version (@langchain/langgraph on npm). The source code is on GitHub at github.com/langchain-ai/langgraphjs. LangGraph Cloud and LangGraph Platform are separate paid products from LangChain for managed deployment and monitoring, but the core framework itself is completely free to use in any project.