Build Agentic AI: LangGraph & Next.js 15 Full-Stack Guide

By LearnWebCraft Team · 18 min read · Intermediate
agentic ai · langgraph · nextjs 15 · ai agents · full stack ai

I still remember the first time I seriously sat down with ChatGPT. It felt a bit like magic, didn't it? You typed a prompt into a void, and it returned poetry, working code, or a surprisingly decent recipe for banana bread. But as the novelty started to fade, I began hitting a wall. I realized I didn't just want an AI that could talk to me; I wanted an AI that could actually do things for me.

I wanted an assistant that could look at my calendar, cross-reference it with flight prices, and actually book the trip. I wanted a developer bot that didn't just write a function, but ran the tests, saw the failure, and fixed its own mistakes.

Welcome to the world of Agentic AI.

If you’ve been hanging around the AI engineering water cooler lately, you’ve almost certainly heard this term thrown around. But let's strip away the buzzwords for a second. What does it actually mean? Simply put, Agentic AI is the shift from passive "chatbots" to active "agents." A chatbot waits for you to speak. An agent has a goal, a set of tools, and the autonomy to figure out how to use those tools to get the job done. It loops, it reasons, and it corrects itself.

Today, we aren't just going to talk theory. We are going to build a real, functioning AI agent together. And we’re going to do it using the absolute bleeding edge of the web stack: LangGraph for the agent orchestration and Next.js 15 for the full-stack framework.

Why this specific combination? Because, honestly, building agents is hard. Managing the state of an AI conversation—knowing what the AI has done, what it needs to do next, and handling errors gracefully—can become a nightmare with standard scripting. LangGraph brings structure to that chaos. And Next.js 15? It provides the perfect server-side infrastructure to run these heavy computations while giving us a buttery smooth frontend.

By the end of this guide, you won't just have a tutorial under your belt. You'll have a blueprint for the future of software development. Let's get to work.

Why Choose LangGraph and Next.js 15?

You might be asking yourself, "Hey, why not just use a simple while loop and some OpenAI API calls?" Or, "Why not stick to the standard LangChain chains I already know?"

That's a fair question. I used to think linear chains were enough, too. But here's the thing I've learned the hard way—real-world problems are rarely linear. They are messy. They require loops.

The Case for LangGraph

Traditional AI chains are Directed Acyclic Graphs (DAGs). Step A leads to Step B, which leads to Step C. It’s a waterfall. But what happens if Step B fails? What if Step B realizes it needs more information and needs to go back to Step A? In a standard chain, you're kind of stuck. You usually have to restart the whole process or write spaghetti code to handle the exceptions.

LangGraph changes the game by introducing cycles. It models your agent's logic as a graph where nodes can loop back on themselves. It’s stateful. It remembers exactly where it is in the process.

Imagine you're building a research agent:

  1. Agent: "I need to search for the weather."
  2. Tool: "It's raining."
  3. Agent: "Okay, based on that, I need to search for indoor activities."

A linear chain struggles with this dynamic decision-making. LangGraph thrives on it. It treats the agent like a state machine. It’s cleaner, easier to debug, and infinitely more powerful.

The Power of Next.js 15

On the other side of the equation, we have Next.js 15. If you've been following the Vercel ecosystem, you know they've been pushing hard on the AI front. Next.js 15 isn't just a version bump; it’s a refinement of the React Server Components (RSC) model that is critical for AI apps.

When you build an agent, you are running logic that takes time. LLMs can be slow. You absolutely cannot block the main thread. Next.js 15’s support for streaming and Server Actions allows us to trigger complex agent workflows on the server and stream the thoughts, tool calls, and final answers back to the client in real-time.

Plus, Next.js 15 gives us explicit control over caching (fetch requests are no longer cached by default), so we can cache the expensive lookups that are safe to reuse, trimming tokens and latency, without ever serving a stale agent response. It’s the perfect host for our silicon brain.

Setting Up Your Next.js 15 Development Environment

Alright, enough talk. Let’s get our hands dirty. I love this part—the blank canvas.

We’re going to start by initializing a new Next.js 15 project. Ensure you have Node.js installed (v18.18 or later, which is the minimum Next.js 15 supports).

Go ahead and pop open your terminal. I’m using iTerm2, but VS Code’s terminal works just fine.

npx create-next-app@latest agentic-app

You’ll be bombarded with a few prompts. Here is my recommended config for this tutorial to keep things smooth:

  • TypeScript: Yes (Always. Trust me, you want types for your agent state).
  • ESLint: Yes.
  • Tailwind CSS: Yes (We want this to look pretty).
  • src/ directory: Yes.
  • App Router: Yes (This is mandatory for what we are doing).
  • Turbopack: Yes (It makes dev server start-up insanely fast).
  • Import alias: @/* (Default).

Once that’s done, cd into your new folder:

cd agentic-app

Now, we need the heavy hitters. We need LangGraph, the LangChain core, and the OpenAI integration. We also need zod, which we’ll use to define our tools’ input schemas so the LLM can call them with structured arguments.

npm install @langchain/langgraph @langchain/core @langchain/openai zod

We also need to install the Vercel AI SDK. It helps massively with the UI connection, although we will be doing some custom wiring to really understand how LangGraph fits in.

npm install ai

Finally, create a .env.local file in your root directory. You know the drill—we need your OpenAI API key.

OPENAI_API_KEY=sk-...

Pro Tip: If you are working on a team, never commit this file. I once accidentally pushed a key to a public repo and woke up to a $500 bill. Learn from my pain.
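
Recent create-next-app templates usually add an ignore rule for env files already, but it takes two seconds to double-check. Your .gitignore should contain something along these lines (the exact pattern varies by template):

# keep local secrets out of version control
.env
.env*.local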

Core Concepts of LangGraph: State, Nodes, and Edges

Before we write the code, let's get on the same page regarding the mental model. LangGraph introduces a few concepts that might feel foreign if you're coming from a pure React background, but they are actually quite similar to state management libraries like Redux.

1. The State

In LangGraph, State is the single source of truth. It’s an object that is passed around between different parts of your agent. Imagine it as a shared whiteboard in a meeting room. Every "worker" (node) in your agent can read the whiteboard and write new information onto it.

For our agent, the state might look like this:

  • messages: A list of chat messages (User, AI, System).
  • current_step: What are we doing right now?
  • research_results: Data we found from tools.
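
Here’s roughly what that whiteboard looks like in code. Treat this as a sketch to make the idea concrete (the agent we actually build later only needs the messages field; current_step and research_results are here just to mirror the list above):

import { BaseMessage } from "@langchain/core/messages";
import { Annotation } from "@langchain/langgraph";

// A sketch of the shared "whiteboard" our nodes read from and write to
const AgentState = Annotation.Root({
  // Chat history: the reducer appends new messages instead of replacing the list
  messages: Annotation<BaseMessage[]>({
    reducer: (existing, incoming) => existing.concat(incoming),
    default: () => [],
  }),
  // What the agent is doing right now (last write wins)
  current_step: Annotation<string>,
  // Data our tools dug up along the way
  research_results: Annotation<string[]>({
    reducer: (existing, incoming) => existing.concat(incoming),
    default: () => [],
  }),
});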

2. Nodes

Nodes are the workers. They are simply JavaScript functions. They take the current State as input, do some work (like calling an LLM or formatting data), and return a partial update to the State.

  • Node A (The Brain): Looks at the user input and decides what to do.
  • Node B (The Tool): Performs a Google search.
  • Node C (The Responder): Formats the final answer.

3. Edges

Edges define the control flow. They connect the nodes.

  • Normal Edges: Go from Node A to Node B.
  • Conditional Edges: This is the magic. "If the LLM says 'search', go to the Search Node. If the LLM says 'I'm done', go to the End."

This structure allows us to build cyclic graphs. The agent can loop between "Thinking" and "Acting" as many times as it needs until it solves the problem.
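
Before we build the real thing, here is the shape of a node and a conditional edge, building on the AgentState sketch above. Both are plain functions that receive the state; this is simplified on purpose, and the full version follows in the implementation section:

import { END } from "@langchain/langgraph";

// A node: take the current state, do some work, return a partial state update
async function thinkNode(state: typeof AgentState.State) {
  // ...call an LLM, look at state.messages, decide what to do next...
  return { current_step: "search" };
}

// A conditional edge: look at the state and return the name of the next node
function routeNext(state: typeof AgentState.State) {
  if (state.current_step === "search") {
    return "searchTool"; // hypothetical tool node; this loops the graph back around
  }
  return END; // nothing left to do, exit the graph
}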

Designing Your First AI Agent Workflow

Let’s build something useful. A generic "Chatbot" is a bit boring. Let's build a "Fact-Checking Agent".

Here is the workflow we want to achieve:

  1. Input: The user asks a question (e.g., "Is it true that octopuses have three hearts?").
  2. Orchestrator Node: An LLM analyzes the question. It decides if it knows the answer confidently or if it needs to use a tool to verify.
  3. Tool Node: If verification is needed, it simulates a search (we'll mock this for simplicity, or use a real search tool if you have an API key for Tavily/Bing).
  4. Loop: The results come back to the Orchestrator. The Orchestrator looks at the new info.
  5. Response: The Orchestrator formulates the final answer.

Visually, it looks like a loop.

Start -> [Orchestrator] --(needs info)--> [Tool]
             ^                               |
             |_________(results)_____________|
             
[Orchestrator] --(has answer)--> End

This simple loop is the foundation of almost all complex agents. It’s the "Reasoning Loop."

Implementing the Agent Logic on the Server

Now, let's implement this using LangGraph. We will create a dedicated file for our agent logic to keep our Next.js structure clean.

Create a file at src/app/agent/graph.ts.

First, let's define our State using the Annotation helper, which is the modern LangGraph way. (There is also a prebuilt MessagesAnnotation if all you need is a message list, but defining the state ourselves makes the mechanics clearer.)

import { AIMessage, BaseMessage } from "@langchain/core/messages";
import { StateGraph, END, Annotation } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { z } from "zod";
import { tool } from "@langchain/core/tools";

// 1. Define the State
// We use the Annotation helper to define the shape of our graph state.
// The reducer on "messages" tells LangGraph to append new messages to the
// existing list instead of overwriting it.
const AgentState = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (x, y) => x.concat(y),
    default: () => [],
  }),
});

// 2. Define the Tools
// Let's create a simple tool for the agent to use.
// In a real app, this would fetch data from an API.
const verifyFactTool = tool(
  async ({ claim }: { claim: string }) => {
    console.log(`🔎 Verifying claim: ${claim}`);
    // Simulating a network delay and a lookup
    await new Promise(resolve => setTimeout(resolve, 1000));
    
    if (claim.toLowerCase().includes("octopus")) {
      return "Verified: Octopuses indeed have three hearts. Two pump blood to the gills, one to the rest of the body.";
    }
    return "Verified: No specific records found, but generally assume standard biology unless specified.";
  },
  {
    name: "verify_fact",
    description: "Verifies a fact or claim by searching the database.",
    schema: z.object({
      claim: z.string().describe("The claim or fact to verify"),
    }),
  }
);

const tools = [verifyFactTool];
const toolNode = new ToolNode(tools);

// 3. Initialize the Model
// We bind the tools to the model so it knows they exist.
const model = new ChatOpenAI({
  model: "gpt-4o", // or gpt-3.5-turbo
  temperature: 0,
}).bindTools(tools);

// 4. Define the Nodes

// The "agent" node decides what to do.
async function agentNode(state: typeof AgentState.State) {
  const { messages } = state;
  const result = await model.invoke(messages);
  return { messages: [result] };
}

// 5. Define Conditional Logic
// Should we continue to tools or end?
function shouldContinue(state: typeof AgentState.State) {
  const lastMessage = state.messages[state.messages.length - 1] as AIMessage;

  // If the LLM made a tool call, go to "tools"
  if (lastMessage.tool_calls?.length) {
    return "tools";
  }

  // Otherwise, stop
  return END;
}

// 6. Construct the Graph
const workflow = new StateGraph(AgentState)
  .addNode("agent", agentNode)
  .addNode("tools", toolNode)
  .addEdge("__start__", "agent")
  .addConditionalEdges("agent", shouldContinue)
  .addEdge("tools", "agent"); // Loop back to agent after tool use

// 7. Compile into a Runnable
export const graph = workflow.compile();

Take a moment to read through that code. It’s elegant, isn't it? We aren't manually managing if/else statements for the conversation flow. We just defined the nodes and the edges, and LangGraph handles the orchestration.

Notice the addEdge("tools", "agent"). This is the cycle! After the tool runs, we force the execution back to the agent node. The agent then sees the tool output (which is added to the messages state automatically) and decides if it's done.
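
Before we wire up any UI, it's worth poking at the graph directly. Here is a throwaway script you could drop next to the graph file (the file name and the tsx runner are my own suggestions; any way of running TypeScript with OPENAI_API_KEY available in the environment will do):

// src/app/agent/test.ts (a quick sanity check, not part of the app)
import { HumanMessage } from "@langchain/core/messages";
import { graph } from "./graph";

async function main() {
  const finalState = await graph.invoke({
    messages: [new HumanMessage("Is it true that octopuses have three hearts?")],
  });

  // The last message in the state should be the agent's final answer
  const last = finalState.messages[finalState.messages.length - 1];
  console.log(last.content);
}

main().catch(console.error);

Run it with something like npx tsx src/app/agent/test.ts and you should see the tool's "Verifying claim" log fire before the final answer prints. That's the loop working.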

Building a Responsive UI with React and Tailwind CSS

We have a brain. Now we need a face.

We’ll build a simple chat interface in src/app/page.tsx. We want a scrollable message area and an input box fixed to the bottom.

I’m a sucker for clean, dark-mode UIs, so let’s use some Tailwind magic to make this pop.

"use client";

import { useState } from "react";

// We'll define a simple type for our UI messages
type Message = {
  id: string;
  role: "user" | "assistant";
  content: string;
};

export default function Home() {
  const [input, setInput] = useState("");
  const [messages, setMessages] = useState<Message[]>([]);
  const [isLoading, setIsLoading] = useState(false);

  const handleSubmit = async (e: React.FormEvent) => {
    e.preventDefault();
    if (!input.trim()) return;

    const userMessage: Message = {
      id: Date.now().toString(),
      role: "user",
      content: input,
    };

    setMessages((prev) => [...prev, userMessage]);
    setInput("");
    setIsLoading(true);

    // TODO: Call Server Action here
    // For now, we just simulate loading
  };

  return (
    <main className="flex min-h-screen flex-col items-center bg-gray-900 text-gray-100">
      <div className="w-full max-w-2xl flex-1 flex flex-col p-4">
        <header className="mb-8 text-center mt-10">
          <h1 className="text-3xl font-bold bg-gradient-to-r from-blue-400 to-purple-500 text-transparent bg-clip-text">
            Agentic Fact-Checker
          </h1>
          <p className="text-gray-400 mt-2">Powered by LangGraph & Next.js 15</p>
        </header>

        <div className="flex-1 space-y-4 overflow-y-auto mb-4 p-4 rounded-lg bg-gray-800/50 min-h-[400px]">
          {messages.map((m) => (
            <div
              key={m.id}
              className={`flex ${
                m.role === "user" ? "justify-end" : "justify-start"
              }`}
            >
              <div
                className={`max-w-[80%] rounded-lg p-3 ${
                  m.role === "user"
                    ? "bg-blue-600 text-white"
                    : "bg-gray-700 text-gray-200"
                }`}
              >
                {m.content}
              </div>
            </div>
          ))}
          {isLoading && (
            <div className="flex justify-start">
              <div className="bg-gray-700 rounded-lg p-3 animate-pulse">
                Thinking...
              </div>
            </div>
          )}
        </div>

        <form onSubmit={handleSubmit} className="flex gap-2">
          <input
            className="flex-1 p-3 rounded-lg bg-gray-800 border border-gray-700 focus:outline-none focus:border-blue-500 transition"
            value={input}
            onChange={(e) => setInput(e.target.value)}
            placeholder="Ask me a fact..."
          />
          <button
            type="submit"
            disabled={isLoading}
            className="bg-blue-600 hover:bg-blue-700 px-6 py-3 rounded-lg font-medium transition disabled:opacity-50"
          >
            Send
          </button>
        </form>
      </div>
    </main>
  );
}

This is standard React stuff, but notice we haven't connected the logic yet. That's where Next.js 15 Server Actions come in.

Connecting Frontend and Backend with Server Actions

In the old days (like, way back in 2022), we would have created an API route at /api/chat, handled the POST request, serialized the data, and sent it back.

With Next.js 15 Server Actions, we can simply export a function that runs on the server and call it directly from our client component. It feels like calling a local function, but under the hood it’s a network request, and all the heavy lifting happens on the server.

Create a new file src/app/actions.ts.

"use server";

import { graph } from "./agent/graph";
import { HumanMessage } from "@langchain/core/messages";

export async function runAgent(query: string) {
  // We create a HumanMessage object
  const inputs = {
    messages: [new HumanMessage(query)],
  };

  // invoke() runs the graph until it hits END.
  // It returns the final state.
  const finalState = await graph.invoke(inputs);

  // We extract the last message, which should be the AI's final answer
  const lastMessage = finalState.messages[finalState.messages.length - 1];
  
  return lastMessage.content as string;
}

Wait, this is almost too simple, right? There's a catch. graph.invoke waits for the entire process to finish. If our agent decides to do 3 loops of searching and reasoning, the user is staring at a loading spinner for 10 seconds.

In the world of AI, latency is the enemy of good UX. We need streaming.

Handling Streaming Responses for Real-Time Interaction

This is the trickiest part of integrating LangGraph with a frontend, and honestly, it's where most tutorials leave you hanging. LangGraph doesn't just output text; it outputs events. It tells you when it enters a node, when a tool is called, and when a token is generated.

We need to stream these updates to the client so the user knows something is happening.

Next.js Server Actions can hand the client a StreamableValue from the Vercel AI SDK’s RSC helpers: a value we keep updating on the server while the client reads each update as it arrives. We’ll pair that with LangGraph’s streamEvents, which emits granular events (node starts, tool calls, individual LLM tokens) as the graph runs, and forward the text tokens into the stream.

Revised src/app/actions.ts:

"use server";

import { graph } from "./agent/graph";
import { HumanMessage } from "@langchain/core/messages";
import { createStreamableValue } from "ai/rsc";

// We use 'ai/rsc' to create a stream capable of crossing the server-client boundary
export async function streamAgentResponse(query: string) {
  const stream = createStreamableValue("");

  (async () => {
    const inputs = {
      messages: [new HumanMessage(query)],
    };

    // streamEvents gives us granular control
    const eventStream = await graph.streamEvents(inputs, {
      version: "v2",
    });

    for await (const event of eventStream) {
      // We focus on the 'on_chat_model_stream' event
      // This is where the actual text tokens are generated by the LLM
      if (event.event === "on_chat_model_stream") {
        const chunk = event.data.chunk;
        if (chunk && chunk.content) {
          stream.update(chunk.content);
        }
      }
      // You could also listen for 'on_tool_start' to show "Searching..." in the UI
    }

    stream.done();
  })();

  return { output: stream.value };
}
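
That last comment deserves a quick illustration. streamEvents also emits events like on_tool_start and on_tool_end, so you can surface the agent's activity from the same loop. Here is a rough sketch that pushes a status line into the same text stream; a production app would more likely use a second streamable value dedicated to status updates:

// inside the for-await loop in streamAgentResponse
if (event.event === "on_tool_start") {
  // event.name is the tool being called, e.g. "verify_fact"
  stream.update(`\n🔎 Running ${event.name}...\n`);
}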

Now, let's update src/app/page.tsx to consume this stream.

// at the top of src/app/page.tsx
import { readStreamableValue } from "ai/rsc";
import { streamAgentResponse } from "./actions";

// ... inside handleSubmit
    setIsLoading(true);
    
    // Create a placeholder for the AI response
    const assistantMessageId = Date.now().toString() + "-ai";
    setMessages((prev) => [
      ...prev, 
      { id: assistantMessageId, role: "assistant", content: "" }
    ]);

    try {
      const { output } = await streamAgentResponse(input);

      let accumulatedContent = "";

      for await (const delta of readStreamableValue(output)) {
        accumulatedContent += delta ?? "";

        // Replace the placeholder message with the text streamed so far
        setMessages((prev) =>
          prev.map((m) =>
            m.id === assistantMessageId
              ? { ...m, content: accumulatedContent }
              : m
          )
        );
      }
    } catch (error) {
      console.error("Agent error:", error);
    } finally {
      setIsLoading(false);
    }

This is the "aha!" moment. When you run this, you won't just see the answer pop up at the end. You will see the agent "typing." If you expanded the streamEvents logic, you could even render UI updates like "Checking database..." or "Verifying claim..." before the text starts streaming. That is true Agentic UX.

Deploying Your Agentic App to Vercel

We’ve built it locally. It works. But as far as everyone else is concerned, it doesn’t exist until it’s on the internet.

Deploying Next.js 15 apps to Vercel is usually seamless, but AI apps come with one big caveat: timeouts.

By default, Vercel Serverless Functions have a timeout (usually 10-60 seconds depending on your plan). Agentic workflows involving multiple tool calls can easily exceed this limit.

Step 1: Configuration

In your next.config.mjs, you can tune Server Action settings such as the request body size limit:

/** @type {import('next').NextConfig} */
const nextConfig = {
  experimental: {
    serverActions: {
      bodySizeLimit: '2mb',
    },
  },
};

export default nextConfig;

In the page or route segment that triggers the agent, you can also export a maxDuration to raise the function timeout (the ceiling depends on your Vercel plan):

export const maxDuration = 60; // This sets the timeout to 60 seconds

Step 2: Push to Git

Initialize git, commit your changes, and push to GitHub.

Step 3: Vercel Dashboard

Go to Vercel and import your repo. Crucial Step: Add your OPENAI_API_KEY in the Environment Variables section before you hit Deploy.

Once it builds, you have a live URL. You can send it to your friends and say, "Look, I built an AI that actually thinks."

Conclusion and Next Steps

We’ve covered a massive amount of ground here. We didn’t just paste some API boilerplate; we architected a system.

We explored why LangGraph's cyclic nature is superior for agentic behaviors compared to linear chains. We leveraged Next.js 15's Server Actions and Streaming to bridge the gap between a heavy backend process and a snappy frontend. We built a fact-checking agent that has the autonomy to decide when to use tools.

This is just the beginning. Here is where you can take this next:

  1. Persistent Memory: Use LangGraph’s checkpointer functionality to save the state of the conversation (in production you’d back it with a database like Postgres; see the sketch after this list). This allows users to come back days later and pick up the thread.
  2. Human-in-the-loop: Add a node in your graph that pauses execution and waits for human approval before executing a sensitive action (like sending an email).
  3. Multi-Agent Systems: Create a graph where nodes are other graphs. Imagine a "Research Agent" passing work to a "Writing Agent."
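
To make the first of those concrete: LangGraph ships an in-memory checkpointer, MemorySaver, which is handy for prototyping before you reach for Postgres. A minimal sketch of how graph.ts would change:

import { MemorySaver } from "@langchain/langgraph";

// Swap MemorySaver for a database-backed checkpointer in production
const checkpointer = new MemorySaver();
export const graph = workflow.compile({ checkpointer });

// Each conversation gets a thread_id; LangGraph restores that thread's saved state
// await graph.invoke(inputs, { configurable: { thread_id: "user-123" } });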

The era of static software is ending. We are entering the era of probabilistic, agentic software. And you? You just built your first piece of it.

Now, go build something that surprises you.


Frequently Asked Questions

What is the difference between LangChain and LangGraph? LangChain is a framework for building applications with LLMs, primarily focusing on linear chains (DAGs). LangGraph is an extension built on top of LangChain specifically designed for building stateful, multi-actor applications with cyclic flows (loops), which are essential for complex agents.

Do I need Python to use LangGraph? No! While LangGraph started in Python, the JavaScript version (@langchain/langgraph) is fully featured and robust. It integrates perfectly with Next.js, allowing you to keep your entire stack in TypeScript.

Is Next.js 15 stable enough for production AI apps? Yes. While "latest" versions always carry some risk, the core features used here (App Router, Server Actions) are stable. Next.js 15 improves upon the stability and performance of these features, specifically optimizing for the streaming requirements of AI applications.

How do I handle API costs with autonomous agents? Agents can be expensive because they loop and call LLMs multiple times. It is crucial to implement safeguards, such as a maximum number of iterations (e.g., a "recursion limit") in your graph logic to prevent infinite loops and runaway costs.
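
Concretely, LangGraph lets you pass a recursion limit in the config when you run the graph; if the agent loops more steps than that, the run throws instead of burning tokens forever. A sketch:

// Cap the number of graph steps for a single run
const finalState = await graph.invoke(inputs, { recursionLimit: 10 });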
