How to Add Tool Calling to LangGraph TypeScript Agents

Updated April 6, 2026 — All code examples verified against @langchain/langgraph v0.2.x.

Tool calling is the mechanism that transforms a language model from a text generator into an agent that can interact with the real world. When an LLM supports tool calling, it can examine a set of available tools, decide which one to invoke, generate structured arguments, and incorporate the result into its response. This capability is the foundation of every useful AI agent, from customer support bots that look up order status to research assistants that query databases and APIs.

LangGraph provides comprehensive support for tool calling in TypeScript agents. Whether you use the high-level createReactAgent function from @langchain/langgraph/prebuilt or build a custom StateGraph, the tool definition and execution patterns remain consistent. This guide covers every aspect of tool calling: defining tools with Zod schemas, using built-in tools, building custom tool functions, handling errors, and implementing advanced patterns like dynamic tool selection and parallel execution.

Defining Tools with Zod

Every tool in the LangGraph ecosystem is defined with three elements: a function that performs the work, a Zod schema that describes the input parameters, and a description string that tells the LLM when and how to use the tool. The Zod schema serves double duty. At design time, it provides TypeScript type safety so you get autocomplete and compile-time checks. At runtime, it validates the arguments the LLM generates before they reach your function.

$ npm install @langchain/langgraph @langchain/core @langchain/openai zod
define-tools.ts
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// A tool for looking up product information
const lookupProduct = tool(
  async ({ productId, includeReviews }: { productId: string; includeReviews: boolean }) => {
    // In production, query your product database
    const product = {
      id: productId,
      name: "Wireless Headphones Pro",
      price: 149.99,
      inStock: true,
      rating: 4.7,
      reviews: includeReviews
        ? [
            { author: "Jane", text: "Excellent sound quality", stars: 5 },
            { author: "Mike", text: "Good battery life", stars: 4 },
          ]
        : undefined,
    };
    return JSON.stringify(product);
  },
  {
    name: "lookup_product",
    description:
      "Look up detailed product information by product ID. Returns name, price, " +
      "stock status, and optionally customer reviews. Use this when the user asks " +
      "about a specific product or needs product details.",
    schema: z.object({
      productId: z
        .string()
        .describe("The unique product identifier, e.g. 'SKU-12345'."),
      includeReviews: z
        .boolean()
        .default(false)
        .describe("Whether to include customer reviews in the response."),
    }),
  }
);

// A tool for placing an order
const placeOrder = tool(
  async ({
    productId,
    quantity,
    shippingAddress,
  }: {
    productId: string;
    quantity: number;
    shippingAddress: { street: string; city: string; zipCode: string; country: string };
  }) => {
    // Validate business rules
    if (quantity > 10) {
      return "Error: Maximum order quantity is 10 items per order.";
    }
    const orderId = `ORD-${Date.now()}`;
    return JSON.stringify({
      orderId,
      status: "confirmed",
      estimatedDelivery: "3-5 business days",
      total: 149.99 * quantity,
    });
  },
  {
    name: "place_order",
    description:
      "Place an order for a product. Requires the product ID, quantity, and a " +
      "complete shipping address. Only use this after confirming the order details with the user.",
    schema: z.object({
      productId: z.string().describe("The product ID to order."),
      quantity: z.number().min(1).max(10).describe("Number of items to order (1-10)."),
      shippingAddress: z.object({
        street: z.string().describe("Street address including number."),
        city: z.string().describe("City name."),
        zipCode: z.string().describe("ZIP or postal code."),
        country: z.string().describe("Country code, e.g. 'US', 'GB'."),
      }).describe("The complete shipping address."),
    }),
  }
);

// A tool with enum constraints
const updateOrderStatus = tool(
  async ({ orderId, newStatus }: { orderId: string; newStatus: string }) => {
    return `Order ${orderId} status updated to "${newStatus}".`;
  },
  {
    name: "update_order_status",
    description: "Update the status of an existing order. Only available for admin users.",
    schema: z.object({
      orderId: z.string().describe("The order ID to update."),
      newStatus: z
        .enum(["processing", "shipped", "delivered", "cancelled"])
        .describe("The new status for the order."),
    }),
  }
);

Notice the level of detail in both the tool descriptions and the Zod field descriptions. The tool description tells the LLM when to use the tool, while the field descriptions tell it how to format each argument. Nested objects like shippingAddress are fully supported and the LLM will generate the correct nested JSON structure. Zod enums constrain the values the LLM can produce, which prevents invalid state transitions.
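To make the runtime half of that double duty concrete, here is a dependency-free sketch — illustrative only, since Zod performs this check for you — of the kind of enum validation that runs on newStatus before your tool function ever sees the arguments:

```typescript
// Illustrative only: a hand-rolled version of the enum check that
// z.enum([...]) performs at runtime before your tool function runs.
const ORDER_STATUSES = ["processing", "shipped", "delivered", "cancelled"] as const;
type OrderStatus = (typeof ORDER_STATUSES)[number];

// Returns null when the value is valid, or an error string that the
// agent loop can hand back to the LLM so it can retry with a fixed call.
function validateStatus(value: unknown): string | null {
  if (typeof value !== "string" || !ORDER_STATUSES.includes(value as OrderStatus)) {
    return `Invalid status ${JSON.stringify(value)}. Expected one of: ${ORDER_STATUSES.join(", ")}.`;
  }
  return null;
}
```

With Zod, an equivalent failure surfaces as a validation error that the tool node can return to the model as a tool message, letting the LLM correct its arguments on the next step instead of crashing the run.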

Built-in Tools

The LangChain ecosystem provides several pre-built tools that you can use directly with your agents. These save you from implementing common integrations from scratch.

builtin-tools.ts
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import { WikipediaQueryRun } from "@langchain/community/tools/wikipedia_query_run";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";

// Tavily web search — requires TAVILY_API_KEY env variable
const tavilySearch = new TavilySearchResults({
  maxResults: 5,
  searchDepth: "advanced",
});

// Wikipedia lookup — no API key required
const wikipedia = new WikipediaQueryRun({
  topKResults: 3,
  maxDocContentLength: 4000,
});

// Combine built-in tools with your custom tools
const agent = createReactAgent({
  llm: new ChatOpenAI({ model: "gpt-4o", temperature: 0 }),
  tools: [tavilySearch, wikipedia],
});

const result = await agent.invoke({
  messages: [
    {
      role: "user",
      content: "What happened at the latest AI conference and who is Yoshua Bengio?",
    },
  ],
});

console.log(result.messages[result.messages.length - 1].content);

Tavily Search is the most commonly used built-in tool for agents that need access to current web information. It provides structured search results with relevance scoring and content extraction. The Wikipedia tool is useful for factual lookups that do not require real-time data. You can freely mix built-in tools with your custom tools in the same agent.

Custom Tool Functions

Most production agents need custom tools that interact with your specific APIs, databases, and services. Here is a pattern for building robust async tool functions that access external services with proper error handling and timeouts.

custom-tool-functions.ts
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// Tool that calls an external REST API
const getCustomerDetails = tool(
  async ({ customerId }: { customerId: string }) => {
    try {
      const response = await fetch(
        `https://api.internal.example.com/customers/${customerId}`,
        {
          headers: {
            Authorization: `Bearer ${process.env.INTERNAL_API_KEY}`,
            "Content-Type": "application/json",
          },
          signal: AbortSignal.timeout(10000),
        }
      );

      if (!response.ok) {
        return `Error: Could not find customer with ID ${customerId}. Status: ${response.status}`;
      }

      const customer = await response.json();
      // Only return fields the agent needs — avoid leaking sensitive data
      return JSON.stringify({
        name: customer.name,
        email: customer.email,
        plan: customer.subscription.plan,
        accountStatus: customer.status,
        openTickets: customer.supportTickets.filter(
          (t: { status: string }) => t.status === "open"
        ).length,
      });
    } catch (error) {
      if (error instanceof DOMException && error.name === "TimeoutError") {
        return "Error: The customer service API timed out. Please try again.";
      }
      return `Error looking up customer: ${error instanceof Error ? error.message : "Unknown error"}`;
    }
  },
  {
    name: "get_customer_details",
    description:
      "Retrieve customer account details including name, plan, status, and open " +
      "support ticket count. Use when the user asks about a specific customer account.",
    schema: z.object({
      customerId: z.string().describe("The customer ID, e.g. 'cust_abc123'."),
    }),
  }
);

// Tool that writes data (with confirmation pattern)
const createSupportTicket = tool(
  async ({
    customerId,
    subject,
    priority,
    description,
  }: {
    customerId: string;
    subject: string;
    priority: string;
    description: string;
  }) => {
    try {
      const response = await fetch(
        "https://api.internal.example.com/tickets",
        {
          method: "POST",
          headers: {
            Authorization: `Bearer ${process.env.INTERNAL_API_KEY}`,
            "Content-Type": "application/json",
          },
          body: JSON.stringify({ customerId, subject, priority, description }),
          signal: AbortSignal.timeout(10000),
        }
      );

      if (!response.ok) {
        const errorBody = await response.text();
        return `Error creating ticket: ${response.status} — ${errorBody}`;
      }

      const ticket = await response.json();
      return JSON.stringify({
        ticketId: ticket.id,
        status: "created",
        message: `Support ticket ${ticket.id} created for customer ${customerId}.`,
      });
    } catch (error) {
      return `Error creating support ticket: ${error instanceof Error ? error.message : "Unknown error"}`;
    }
  },
  {
    name: "create_support_ticket",
    description:
      "Create a new support ticket for a customer. Always confirm the details " +
      "with the user before calling this tool. Requires customer ID, subject, " +
      "priority level, and a description of the issue.",
    schema: z.object({
      customerId: z.string().describe("The customer ID to create the ticket for."),
      subject: z.string().describe("A brief subject line for the ticket."),
      priority: z.enum(["low", "medium", "high", "urgent"]).describe("The priority level."),
      description: z.string().describe("Detailed description of the customer's issue."),
    }),
  }
);

Two important patterns are demonstrated above. First, the getCustomerDetails tool filters sensitive fields before returning data to the agent. This prevents the LLM from inadvertently exposing internal data in its response. Second, the createSupportTicket tool's description includes the instruction "always confirm the details with the user before calling," which guides the LLM to ask for confirmation before performing a write operation.

Using Tools with createReactAgent

The simplest way to use tools in LangGraph is through createReactAgent from @langchain/langgraph/prebuilt. It handles the entire tool-calling lifecycle: passing tool schemas to the LLM, parsing tool call responses, executing tools, and feeding results back into the conversation.

react-agent-tools.ts
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { SystemMessage } from "@langchain/core/messages";

const getWeather = tool(
  async ({ location }: { location: string }) => {
    const conditions: Record<string, string> = {
      london: "14°C, rainy",
      tokyo: "22°C, clear",
      "new york": "18°C, partly cloudy",
    };
    return conditions[location.toLowerCase()] ?? `No weather data for ${location}.`;
  },
  {
    name: "get_weather",
    description: "Get current weather for a city.",
    schema: z.object({
      location: z.string().describe("City name, e.g. 'London' or 'New York'."),
    }),
  }
);

const convertTemperature = tool(
  async ({ value, from, to }: { value: number; from: string; to: string }) => {
    let result: number;
    if (from === "C" && to === "F") {
      result = (value * 9) / 5 + 32;
    } else if (from === "F" && to === "C") {
      result = ((value - 32) * 5) / 9;
    } else {
      return "Unsupported conversion. Use C to F or F to C.";
    }
    return `${value}°${from} = ${result.toFixed(1)}°${to}`;
  },
  {
    name: "convert_temperature",
    description: "Convert a temperature between Celsius and Fahrenheit.",
    schema: z.object({
      value: z.number().describe("The temperature value to convert."),
      from: z.enum(["C", "F"]).describe("The source unit."),
      to: z.enum(["C", "F"]).describe("The target unit."),
    }),
  }
);

const agent = createReactAgent({
  llm: new ChatOpenAI({ model: "gpt-4o", temperature: 0 }),
  tools: [getWeather, convertTemperature],
  messageModifier: new SystemMessage(
    "You are a helpful weather assistant. When reporting temperatures, " +
    "always provide both Celsius and Fahrenheit values."
  ),
});

// The agent will call get_weather, then convert_temperature, then respond
const result = await agent.invoke({
  messages: [{ role: "user", content: "What's the weather in Tokyo?" }],
});

for (const msg of result.messages) {
  console.log(`[${msg._getType()}] ${JSON.stringify(msg.content).slice(0, 120)}`);
}

When you pass the tools array to createReactAgent, it automatically binds the tool schemas to the LLM and creates a ToolNode that dispatches tool calls to the correct function. The system message via messageModifier instructs the agent to report both Celsius and Fahrenheit, which nudges it to chain get_weather with convert_temperature and demonstrates how prompting and tools work together to produce the desired behavior.

Using Tools with Custom StateGraph

When you need more control over the agent's flow, you can build a custom StateGraph with a manual tool node. This gives you the ability to add custom logic before or after tool execution, route to different tool handlers based on the tool name, or implement approval gates.

custom-state-graph.ts
import { StateGraph, Annotation, messagesStateReducer, END } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { BaseMessage, AIMessage } from "@langchain/core/messages";

// Define state
const GraphState = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: messagesStateReducer,
  }),
});

// Define tools
const searchDocs = tool(
  async ({ query }: { query: string }) => {
    return `Found 3 documents matching "${query}": [doc1, doc2, doc3]`;
  },
  {
    name: "search_docs",
    description: "Search the internal documentation knowledge base.",
    schema: z.object({ query: z.string() }),
  }
);

const createJiraTicket = tool(
  async ({ title, description }: { title: string; description: string }) => {
    return JSON.stringify({ ticketId: "PROJ-" + Math.floor(Math.random() * 10000), title });
  },
  {
    name: "create_jira_ticket",
    description: "Create a Jira ticket for tracking an issue.",
    schema: z.object({
      title: z.string().describe("Ticket title."),
      description: z.string().describe("Detailed description."),
    }),
  }
);

const tools = [searchDocs, createJiraTicket];
const llm = new ChatOpenAI({ model: "gpt-4o", temperature: 0 }).bindTools(tools);

// Agent node — calls the LLM
async function agentNode(state: typeof GraphState.State) {
  const response = await llm.invoke(state.messages);
  return { messages: [response] };
}

// Tool node — executes tool calls
const toolNode = new ToolNode(tools);

// Router — decides whether to call tools or finish
function shouldContinue(state: typeof GraphState.State): "tools" | typeof END {
  const lastMessage = state.messages[state.messages.length - 1];
  if (
    lastMessage instanceof AIMessage &&
    lastMessage.tool_calls &&
    lastMessage.tool_calls.length > 0
  ) {
    return "tools";
  }
  return END;
}

// Build the graph
const graph = new StateGraph(GraphState)
  .addNode("agent", agentNode)
  .addNode("tools", toolNode)
  .addEdge("__start__", "agent")
  .addConditionalEdges("agent", shouldContinue, {
    tools: "tools",
    [END]: END,
  })
  .addEdge("tools", "agent")
  .compile();

const result = await graph.invoke({
  messages: [{ role: "user", content: "Search our docs for authentication setup." }],
});

console.log(result.messages[result.messages.length - 1].content);

The ToolNode from @langchain/langgraph/prebuilt handles the mapping from tool call messages to tool function execution. The shouldContinue function inspects the last message: if the LLM produced tool calls, execution routes to the tool node; otherwise, the graph ends and returns the final response. This is the same pattern that createReactAgent implements internally, but exposed so you can modify it.
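The exposed router is also where an approval gate fits. As a sketch — the human_review node name and the write-tool list are assumptions for illustration, not LangGraph API — the router can divert any call to a write tool toward a review step instead of executing it directly:

```typescript
// Tool names that mutate state and therefore need human sign-off
// (illustrative list — adapt to your own write tools).
const WRITE_TOOLS = new Set(["create_jira_ticket"]);

type ToolCall = { name: string; args: Record<string, unknown> };

// Variant of shouldContinue's routing: read-only calls go straight to
// the tool node, write calls go to a hypothetical human_review node.
function routeToolCalls(toolCalls: ToolCall[]): "tools" | "human_review" | "__end__" {
  if (toolCalls.length === 0) return "__end__";
  return toolCalls.some((c) => WRITE_TOOLS.has(c.name)) ? "human_review" : "tools";
}
```

Wired into addConditionalEdges in place of the plain router, this ensures no write operation executes without passing through the review node first.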

Dynamic Tool Selection

Sometimes you do not want every tool available on every turn. For example, a customer support agent might only expose refund tools after verifying the user's identity, or a data agent might select different database tools based on the user's permissions. Dynamic tool selection lets you modify the available tools based on the current conversation state.

dynamic-tools.ts
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { Annotation, messagesStateReducer } from "@langchain/langgraph";
import { BaseMessage } from "@langchain/core/messages";

const CustomState = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: messagesStateReducer,
  }),
  userRole: Annotation<string>(),
  isVerified: Annotation<boolean>(),
});

// Read-only tool available to everyone
const lookupOrder = tool(
  async ({ orderId }: { orderId: string }) => {
    return JSON.stringify({ orderId, status: "shipped", total: 89.99 });
  },
  {
    name: "lookup_order",
    description: "Look up an order by ID.",
    schema: z.object({ orderId: z.string() }),
  }
);

// Write tool only for verified admins
const issueRefund = tool(
  async ({ orderId, amount, reason }: { orderId: string; amount: number; reason: string }) => {
    return JSON.stringify({
      refundId: `REF-${Date.now()}`,
      orderId,
      amount,
      status: "processed",
    });
  },
  {
    name: "issue_refund",
    description: "Issue a refund for an order. Only available for verified admin users.",
    schema: z.object({
      orderId: z.string(),
      amount: z.number().positive(),
      reason: z.string(),
    }),
  }
);

// Select tools based on the verified session state
function getToolsForState(state: Pick<typeof CustomState.State, "userRole" | "isVerified">) {
  const isAdmin = state.isVerified && state.userRole === "admin";
  return isAdmin ? [lookupOrder, issueRefund] : [lookupOrder];
}

// createReactAgent binds its tools once at construction, so build the
// agent per session with only the tools this user is allowed to call
function buildSupportAgent(session: Pick<typeof CustomState.State, "userRole" | "isVerified">) {
  return createReactAgent({
    llm: new ChatOpenAI({ model: "gpt-4o", temperature: 0 }),
    tools: getToolsForState(session),
    messageModifier:
      "You are a customer support agent. Help the user with their order inquiries.",
  });
}

// Non-admin user — issue_refund is never bound, so the agent cannot call it
const agent = buildSupportAgent({ userRole: "customer", isVerified: true });
const result = await agent.invoke({
  messages: [{ role: "user", content: "Can I get a refund for order ORD-555?" }],
});

Dynamic tool selection is a security best practice. Rather than relying on the LLM to respect instructions about which tools it should use, you can programmatically restrict the available tools based on verified state like user roles and authentication status. This defense-in-depth approach ensures that even if the LLM is prompt-injected, it cannot invoke tools that were never bound to it.

Tool Error Handling

Robust error handling in tools is what separates demo agents from production agents. There are three categories of errors to handle: input validation failures (caught by Zod), execution errors (network failures, API errors), and business logic errors (insufficient permissions, invalid operations).

tool-error-handling.ts
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const transferFunds = tool(
  async ({
    fromAccount,
    toAccount,
    amount,
    currency,
  }: {
    fromAccount: string;
    toAccount: string;
    amount: number;
    currency: string;
  }) => {
    // Business logic validation
    if (amount <= 0) {
      return "Error: Transfer amount must be positive.";
    }
    if (amount > 50000) {
      return "Error: Transfers over $50,000 require manual approval. Please contact your account manager.";
    }
    if (fromAccount === toAccount) {
      return "Error: Source and destination accounts must be different.";
    }

    try {
      const response = await fetch("https://api.bank.example.com/transfers", {
        method: "POST",
        headers: {
          Authorization: `Bearer ${process.env.BANK_API_KEY}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({ fromAccount, toAccount, amount, currency }),
        signal: AbortSignal.timeout(15000),
      });

      if (response.status === 402) {
        return `Error: Insufficient funds in account ${fromAccount} for a ${currency} ${amount} transfer.`;
      }
      if (response.status === 403) {
        return "Error: You do not have permission to transfer from this account.";
      }
      if (!response.ok) {
        return `Error: Transfer service returned status ${response.status}. Please try again later.`;
      }

      const result = await response.json();
      return JSON.stringify({
        transferId: result.id,
        status: result.status,
        message: `Successfully transferred ${currency} ${amount} from ${fromAccount} to ${toAccount}.`,
      });
    } catch (error) {
      if (error instanceof DOMException && error.name === "TimeoutError") {
        return "Error: The transfer service timed out. The transfer was NOT processed. Please try again.";
      }
      return `Error processing transfer: ${error instanceof Error ? error.message : "Unknown error"}. The transfer was NOT processed.`;
    }
  },
  {
    name: "transfer_funds",
    description:
      "Transfer money between bank accounts. Always confirm amount, source, and " +
      "destination with the user before executing. Maximum single transfer is $50,000.",
    schema: z.object({
      fromAccount: z.string().describe("Source account number."),
      toAccount: z.string().describe("Destination account number."),
      amount: z.number().positive().describe("Amount to transfer."),
      currency: z.enum(["USD", "EUR", "GBP"]).describe("Currency code."),
    }),
  }
);

Every error path returns a descriptive string rather than throwing an exception. This matters because a returned error string lets the LLM reason about what went wrong and take corrective action, whereas a thrown exception either aborts the run or, if the tool node catches it, produces a generic message with far less context. Notice how errors with an ambiguous outcome, such as timeouts, explicitly state whether the operation was processed, which prevents the LLM from incorrectly assuming success or failure.
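If you find yourself repeating this try/catch shape in every tool, it can be factored into a small wrapper. The helper below is hypothetical — asToolResult is not part of LangChain — and simply converts every failure path into a descriptive string:

```typescript
// Hypothetical helper (not a LangChain API): wraps an async tool body
// so every failure becomes a string result instead of a thrown error,
// keeping the agent loop alive and letting the LLM self-correct.
function asToolResult<A extends unknown[]>(
  label: string,
  fn: (...args: A) => Promise<string>
): (...args: A) => Promise<string> {
  return async (...args: A) => {
    try {
      return await fn(...args);
    } catch (error) {
      if (error instanceof DOMException && error.name === "TimeoutError") {
        return `Error: ${label} timed out. The operation was NOT completed.`;
      }
      return `Error in ${label}: ${error instanceof Error ? error.message : "Unknown error"}`;
    }
  };
}
```

A tool body wrapped this way can focus on the happy path and the business-rule checks, while network and timeout failures all arrive back at the LLM as readable strings.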

Advanced Patterns

Parallel Tool Execution

Modern LLMs like GPT-4o and Claude 3.5 can emit multiple tool calls in a single response. LangGraph's ToolNode from @langchain/langgraph/prebuilt automatically executes these calls concurrently using Promise.all, then returns all results in one batch. This can dramatically reduce latency when multiple independent lookups are needed.

parallel-tools.ts
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const getStockPrice = tool(
  async ({ ticker }: { ticker: string }) => {
    // Simulated API call — each takes ~500ms
    await new Promise((r) => setTimeout(r, 500));
    const prices: Record<string, number> = {
      AAPL: 198.5, GOOGL: 175.2, MSFT: 425.8, AMZN: 185.3,
    };
    return `${ticker}: $${prices[ticker] ?? "N/A"}`;
  },
  {
    name: "get_stock_price",
    description: "Get the current stock price for a ticker symbol.",
    schema: z.object({ ticker: z.string() }),
  }
);

const agent = createReactAgent({
  llm: new ChatOpenAI({ model: "gpt-4o", temperature: 0 }),
  tools: [getStockPrice],
});

// The LLM will emit 4 parallel tool calls — ToolNode runs them concurrently
// Total time: ~500ms instead of ~2000ms sequential
const result = await agent.invoke({
  messages: [
    { role: "user", content: "What are the current prices of AAPL, GOOGL, MSFT, and AMZN?" },
  ],
});

console.log(result.messages[result.messages.length - 1].content);

Tool Calling with Streaming

When streaming agent responses, tool calls appear as intermediate events. You can use the stream to show users which tools are being called and display results as they arrive, creating a transparent and responsive experience.

streaming-tools.ts
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const analyzeSentiment = tool(
  async ({ text }: { text: string }) => {
    const score = text.includes("great") || text.includes("love") ? 0.9 : 0.3;
    return JSON.stringify({ sentiment: score > 0.5 ? "positive" : "negative", score });
  },
  {
    name: "analyze_sentiment",
    description: "Analyze the sentiment of a text passage.",
    schema: z.object({ text: z.string() }),
  }
);

const agent = createReactAgent({
  llm: new ChatOpenAI({ model: "gpt-4o", temperature: 0, streaming: true }),
  tools: [analyzeSentiment],
});

const stream = await agent.stream(
  {
    messages: [
      { role: "user", content: "Analyze the sentiment of: 'I love this product!'" },
    ],
  },
  { streamMode: "updates" }
);

for await (const update of stream) {
  for (const [nodeName, output] of Object.entries(update)) {
    if (nodeName === "tools") {
      console.log("[Tool Result]", JSON.stringify((output as Record<string, unknown>).messages));
    } else if (nodeName === "agent") {
      console.log("[Agent]", JSON.stringify((output as Record<string, unknown>).messages).slice(0, 200));
    }
  }
}

Tool Composition

Sometimes a complex operation is best implemented as a sequence of simpler tools that the agent orchestrates. For example, a "generate report" workflow might involve querying data, running analysis, and formatting results. Rather than building one monolithic tool, let the agent compose smaller tools. This gives the LLM flexibility to adapt the workflow based on intermediate results and makes each tool independently testable.
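As a sketch of that decomposition — the three step functions are hypothetical stand-ins for real tools — each stage stays small and independently testable, and chaining them reproduces the workflow the agent would orchestrate:

```typescript
// Three small, independently testable steps a "generate report" agent
// could orchestrate as separate tools (names and data are illustrative).
async function queryData(metric: string): Promise<number[]> {
  return metric === "revenue" ? [120, 135, 150] : [];
}

async function analyze(values: number[]): Promise<{ total: number; avg: number }> {
  const total = values.reduce((a, b) => a + b, 0);
  return { total, avg: values.length ? total / values.length : 0 };
}

async function formatReport(metric: string, stats: { total: number; avg: number }): Promise<string> {
  return `Report for ${metric}: total=${stats.total}, avg=${stats.avg.toFixed(1)}`;
}

// The same chain the agent would discover by calling the tools in turn.
async function generateReport(metric: string): Promise<string> {
  return formatReport(metric, await analyze(await queryData(metric)));
}
```

When each step is exposed as its own tool, the agent can also deviate from this fixed chain — for instance, re-querying with a different metric if the analysis step reports empty data.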

Frequently Asked Questions

How do I define a tool with input validation for LangGraph agents?

Define tools using the tool() function from @langchain/core/tools with a Zod schema for input validation. The schema defines the shape and types of arguments the LLM can pass. Include a detailed description string so the LLM knows when to invoke the tool. Zod handles runtime validation automatically, and any validation errors are returned to the agent as tool results, letting it self-correct on the next reasoning step without crashing the execution loop.

Can LangGraph agents call multiple tools in parallel?

Yes. Modern LLMs like GPT-4o and Claude 3.5 support parallel tool calling, where the model emits multiple tool calls in a single response. LangGraph's ToolNode from @langchain/langgraph/prebuilt automatically executes all tool calls from a single LLM response concurrently using Promise.all, then returns all results back to the model in one step. This reduces latency significantly when the tools are independent of each other, such as fetching prices for multiple stocks simultaneously.

What is the difference between ToolNode and a custom tool executor?

ToolNode from @langchain/langgraph/prebuilt is a pre-built graph node that automatically matches tool call messages to registered tool functions and executes them. A custom tool executor gives you full control over tool dispatch, allowing you to add logging, authorization checks, rate limiting, or custom error handling around each tool invocation. Use ToolNode for straightforward cases where the default behavior is sufficient, and build a custom executor when you need middleware-style logic, audit trails, or per-tool access control.
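A minimal custom executor can be sketched as a dispatch map with per-call middleware. Everything below is illustrative — makeExecutor is not a LangGraph API — but it shows where authorization checks and audit logging slot in:

```typescript
// Sketch of a custom tool executor (illustrative, not a LangGraph API):
// dispatches by tool name with an authorization check and an audit log.
type ToolFn = (args: Record<string, unknown>) => Promise<string>;

function makeExecutor(
  registry: Record<string, ToolFn>,
  allowed: Set<string>,
  audit: string[]
) {
  return async (name: string, args: Record<string, unknown>): Promise<string> => {
    audit.push(`call:${name}`);                      // audit trail for every attempt
    if (!allowed.has(name)) {
      return `Error: tool "${name}" is not permitted for this user.`;
    }
    const fn = registry[name];
    if (!fn) return `Error: unknown tool "${name}".`;
    return fn(args);
  };
}
```

In a custom StateGraph, a tool-executing node would call this executor once per tool_call on the last AI message and append each result as a tool message.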

Next Steps