Next.js 16 is Here: Why proxy.ts Changes Everything

By LearnWebCraft Team · 13 min read · Advanced

Tags: Next.js 16, proxy.ts, web architecture, API routes, TypeScript

If you’ve been building with Next.js for as long as I have, you know the specific kind of migraine that comes with managing API proxies. We’ve all been there. You're deep in the flow, wiring up a frontend to a legacy backend service, and suddenly—bam—you hit a CORS error. Or worse, you need to inject a secret token into an upstream request, but doing it in middleware.ts feels clunky, and next.config.js rewrites are just... well, static.

The release of Next.js 16 has dropped a feature that I honestly think we’ll look back on as a watershed moment for full-stack React architecture: proxy.ts.

This isn't just a minor utility update. It feels like a fundamental shift in how we handle the boundary between our client-side applications and the myriad of services they consume. Gone are the days of restarting your dev server every time you tweak a rewrite destination. We’re talking about a programmable, type-safe, runtime proxy layer that sits right at the edge.

In this deep dive, we aren't going to look at "Hello World." You can find that in the docs. We’re going to look at the architecture, the performance implications, and how proxy.ts in Next.js 16 fundamentally changes the game for advanced web development.

Introduction: The Evolution of Next.js and What's New

Next.js has always been about blurring the line between client and server. First, we had getInitialProps, then getServerSideProps, and recently the paradigm shift of React Server Components (RSC) and Server Actions. But through all of this, the story around proxying traffic—taking a request from a user and forwarding it to another service—has felt a bit like a second-class citizen.

Historically, we treated Next.js as the "frontend" and everything else as "the backend." But modern web architecture is rarely that binary. We are building BFFs (Backends for Frontends). We are orchestrating microservices. We are stitching together SaaS APIs.

With Next.js 16, the Vercel team seems to have acknowledged that the "middleman" role of the framework is just as critical as the rendering role. By introducing proxy.ts, they’ve given us a dedicated lifecycle hook specifically designed for traffic shaping and request forwarding.

This isn't just Middleware 2.0. While Middleware is great for redirects and auth checks, it was never really optimized for the heavy lifting of modifying request bodies or handling complex upstream handshakes efficiently. proxy.ts fills that gap, providing a dedicated space to manage your application's external nervous system.

Understanding the Pre-Next.js 16 Proxy Landscape

To appreciate where we are, we have to look at the somewhat messy landscape we’re leaving behind. If you’re running a production Next.js 14 or 15 app right now, your proxying strategy probably relies on one of three imperfect methods.

1. next.config.js Rewrites

This is the classic approach. You define a source path and a destination path.

// next.config.js
module.exports = {
  async rewrites() {
    return [
      {
        source: '/api/:path*',
        destination: 'https://api.example.com/:path*',
      },
    ]
  },
}

The problem? It’s static. You can’t execute logic here. You can’t inspect a JWT, decode it, and decide to route to api-v1.example.com or api-v2.example.com based on a claim. Plus, changing it requires a server restart. In a CI/CD pipeline, that’s fine. In local development? It’s a flow-killer.
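To make that limitation concrete, here's a sketch of the kind of runtime decision a static rewrite simply can't express: reading a claim out of a JWT to pick an upstream. The claim name (apiVersion) and both hostnames are made up for illustration, and note this decodes the payload without verifying the signature — verification would still have to happen elsewhere.

```typescript
// Hypothetical: choose an upstream based on a JWT claim at runtime.
// `apiVersion` and both hostnames are illustrative, not a real API.
// The payload is decoded WITHOUT signature verification -- for routing
// hints only; actual auth must be verified by the upstream.
function upstreamForToken(jwt: string): string {
  const [, payloadPart] = jwt.split('.');
  const payload = JSON.parse(Buffer.from(payloadPart, 'base64url').toString('utf8'));
  return payload.apiVersion === 2
    ? 'https://api-v2.example.com'
    : 'https://api-v1.example.com';
}
```

A plain rewrites() entry has nowhere to run logic like this; the destination is fixed at build time.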

2. Middleware (middleware.ts)

Middleware is powerful, but it operates at a routing level. While you can rewrite requests in Middleware, modifying the request body or handling the response body from the upstream service is notoriously difficult (and often expensive at runtime) due to the streaming nature of the Edge runtime. You often run into the "stream already read" error or have to do buffer gymnastics that feel fragile.

3. Custom Server (Express/Node)

The nuclear option. You ditch the standard Next.js start command and wrap the whole thing in an Express server using http-proxy-middleware. The problem? You lose many of Next.js's optimizations, including Automatic Static Optimization (ASO) in some cases, and you can't deploy to Vercel or other serverless platforms as easily because you're shipping a long-running Node process. It essentially breaks the "serverless" dream.

This fragmented landscape is exactly what proxy.ts aims to unify.

Introducing proxy.ts: The New Paradigm

So, what exactly is this file?

Think of proxy.ts as a specialized interceptor that sits after Middleware but before your page rendering or API routes. It is specifically designed to handle the fetch interactions between your Next.js server and the outside world.

It lives in the root of your project (or inside src/), just like middleware.ts. But unlike Middleware, which is broad and routing-focused, proxy.ts provides a dedicated API for:

  1. Request Mutation: Modifying headers, bodies, and query params before they hit the upstream.
  2. Response Transformation: Intercepting the upstream response to normalize data, handle errors, or sanitize headers before sending it back to the client.
  3. Dynamic Routing: Changing the destination URL based on runtime logic, not just static patterns.

It’s the "Backend for Frontend" logic, extracted into a clean, standard file.

How proxy.ts Works: A Technical Deep Dive

Let’s get into the code. The syntax feels familiar if you've used Next.js API routes, but it's more aligned with the Web Streams API standard that Next.js 16 embraces fully.

The default export is a function that receives a ProxyRequest and returns a ProxyResponse (or a Promise resolving to one).

// src/proxy.ts
import { type ProxyRequest, NextResponse } from 'next/server';

export const config = {
  matcher: '/api/external/:path*', // Only run on these paths
};

export default async function proxy(req: ProxyRequest) {
  const url = req.nextUrl;
  
  // 1. Dynamic Upstream Resolution
  // Imagine routing based on a user's region or A/B test group
  const upstream = req.cookies.get('beta-mode') 
    ? 'https://beta-api.internal.com' 
    : 'https://api.internal.com';
    
  // Construct the new destination
  const path = url.pathname.replace('/api/external', '');
  const destinationUrl = new URL(path, upstream);
  
  // 2. Request Decoration
  // Securely inject API keys without exposing them to the client
  const headers = new Headers(req.headers);
  headers.set('Authorization', `Bearer ${process.env.UPSTREAM_API_KEY}`);
  headers.set('X-Forwarded-Host', 'my-next-app.com');

  try {
    // 3. The Fetch
    // Next.js 16 optimizes this fetch automatically for connection pooling
    const upstreamResponse = await fetch(destinationUrl, {
      method: req.method,
      headers: headers,
      body: req.body, // Streaming body transfer
      // duplex: 'half' is handled automatically now!
    });

    // 4. Response Interception
    // Handle global error states from upstream
    if (upstreamResponse.status === 401) {
       // Maybe refresh a token and retry? 
       // Or return a standardized error format
       return NextResponse.json(
         { error: "Upstream session expired", code: "AUTH_001" },
         { status: 401 }
       );
    }

    return upstreamResponse;
    
  } catch (err) {
    console.error("Proxy failure:", err);
    return new Response("Bad Gateway", { status: 502 });
  }
}

The Architecture Behind the Scenes

What makes this technically superior to a manual API route?

The Runtime Environment: proxy.ts runs on the Edge (by default) or Node.js, but it's optimized for I/O. In Next.js 16, the Vercel team improved the underlying fetch implementation to handle duplex streaming out of the box. This means if your user uploads a 5GB file, proxy.ts streams it chunk-by-chunk to the upstream server without buffering it into memory. Doing this manually in a Next.js 14 API Route required intricate knowledge of Node streams vs. Web streams. Now, it’s just req.body.

Connection Pooling: One of the hidden killers in serverless is opening a new TCP connection for every request. Next.js 16’s proxy layer implements smarter connection keep-alive logic for requests passing through proxy.ts, significantly reducing latency for high-throughput microservices.

Key Benefits: Enhanced Security, Performance, and DX

Why should you care? Why bother migrating?

1. Enhanced Security (The "Hidden Token" Pattern)

I can't count how many times I've seen API keys leaked in client-side bundles. With proxy.ts, you can truly embrace the BFF security pattern. Your frontend only talks to your Next.js server (e.g., /api/external/users). The Next.js server talks to your real backend (api.backend.com), injecting the sensitive Authorization headers or mTLS certificates. The browser never sees the upstream URL or the credentials.

2. Performance at the Edge

Because proxy.ts can run on the Edge runtime, it executes geographically closer to your user. If you are doing simple header injection or path rewriting, this happens in milliseconds. Furthermore, by utilizing the new caching primitives in Next.js 16, you can cache the response from the proxy layer using standard Cache-Control headers, effectively turning your Next.js app into a CDN for your API.
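As a sketch of that caching idea, you can re-wrap the upstream response with your own caching policy before returning it. The specific header values below are illustrative, not a recommendation — tune s-maxage to how stale your data can afford to be.

```typescript
// Sketch: re-wrap an upstream response with our own caching policy.
// The s-maxage / stale-while-revalidate values are illustrative only.
function withCacheControl(upstream: Response, sMaxAge: number): Response {
  const headers = new Headers(upstream.headers);
  headers.set('Cache-Control', `public, s-maxage=${sMaxAge}, stale-while-revalidate=60`);
  // Re-wrap the body stream without buffering it.
  return new Response(upstream.body, {
    status: upstream.status,
    statusText: upstream.statusText,
    headers,
  });
}
```

Returning `withCacheControl(upstreamResponse, 300)` from the proxy lets the CDN in front of your app absorb repeat traffic.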

3. Developer Experience (DX)

This is the fun part. Hot Reloading. Changing a rewrite in next.config.js takes 5-10 seconds to restart the server. Changing logic in proxy.ts is instant (Fast Refresh). When you're debugging a tricky webhook integration or tweaking header formats, this saves hours over the course of a week.

4. Type Safety

It's TypeScript. You get full type inference on the request and response objects. No more guessing if req.query is a string or an array of strings.

Practical Applications: Use Cases and Examples

Let’s look at some real-world scenarios where proxy.ts shines.

Use Case A: The Strangler Fig Pattern

You are migrating from a legacy Monolith (PHP/Java) to a new Microservices architecture. You want the frontend to feel unified.

In proxy.ts, you can route traffic based on the path:

// Logic inside proxy.ts
const path = req.nextUrl.pathname;

if (path.startsWith('/api/new-features')) {
  // Route to modern Node.js microservice
  return fetch(new URL(path, process.env.MICROSERVICE_URL), ...);
} else {
  // Fallback to legacy monolith
  return fetch(new URL(path, process.env.LEGACY_URL), ...);
}

This allows you to slowly "strangle" the legacy app, endpoint by endpoint, without the frontend code ever knowing the difference.
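The routing decision itself can be isolated into a small pure function, which keeps it testable. In the snippet above the URLs come from environment variables; here they're plain parameters, and the function name is my own, not a Next.js API.

```typescript
// Hypothetical helper extracting the strangler-fig routing decision.
// Paths under /api/new-features go to the microservice; everything
// else falls back to the legacy monolith.
function resolveUpstream(pathname: string, microserviceUrl: string, legacyUrl: string): URL {
  const base = pathname.startsWith('/api/new-features') ? microserviceUrl : legacyUrl;
  return new URL(pathname, base);
}
```

As you migrate endpoints, the prefix check grows into an allow-list; the frontend never changes.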

Use Case B: Third-Party API Masking

You're using a third-party service like a CMS or a payment provider that has strict CORS policies or rate limits.

You can use proxy.ts to:

  1. Cache aggressively: If the CMS data changes rarely, cache the response in Next.js so you don't hit the CMS rate limits.
  2. Sanitize data: Remove internal fields from the JSON response before sending it to the browser.
  3. Handle Retries: Implement exponential backoff logic inside the proxy if the third-party service is flaky.
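Here's a minimal sketch of that retry logic. The base delay, the cap, and the "retry only on 5xx" policy are my assumptions — adjust them to your upstream's behavior.

```typescript
// Sketch of exponential backoff for a flaky upstream.
// 100ms, 200ms, 400ms, ... capped at 2s; these numbers are assumptions.
function backoffDelayMs(attempt: number, baseMs = 100, capMs = 2000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

async function fetchWithRetry(url: string | URL, init: RequestInit = {}, maxAttempts = 3): Promise<Response> {
  let lastError: unknown = new Error('no attempts made');
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      const res = await fetch(url, init);
      if (res.status < 500) return res; // don't retry client errors
      lastError = new Error(`upstream returned ${res.status}`);
    } catch (err) {
      lastError = err; // network failure: worth retrying
    }
    await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt)));
  }
  throw lastError;
}
```

Keep maxAttempts low inside a proxy; the client is waiting on the other end of every retry.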

Use Case C: Multi-Tenant Architecture

If you are building a SaaS where tenant-a.myapp.com and tenant-b.myapp.com need to hit different database clusters or API namespaces, proxy.ts can parse the Host header and rewrite the upstream destination dynamically.

const host = req.headers.get('host') ?? ''; // e.g., tenant-a.myapp.com
const tenantId = host.split('.')[0];
const targetApi = `https://api-${tenantId}.internal-cluster.com`;

Try doing that cleanly in next.config.js. You can't.
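One caution worth building in: the Host header is client-controlled, so I'd validate the tenant against an allow-list before interpolating it into an internal hostname. The tenant names below are hypothetical, and the helper is my own sketch, not a Next.js API.

```typescript
// Hypothetical: resolve the upstream per tenant, with an allow-list so
// a spoofed Host header can't steer traffic to an arbitrary hostname.
const KNOWN_TENANTS = new Set(['tenant-a', 'tenant-b']);

function resolveTenantApi(host: string | null): string | null {
  if (!host) return null;
  const tenantId = host.split('.')[0];
  return KNOWN_TENANTS.has(tenantId)
    ? `https://api-${tenantId}.internal-cluster.com`
    : null;
}
```

A null result maps naturally to a 404 or 421 response from the proxy.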

Migration Guide: Updating Your Existing Next.js Projects

Moving to proxy.ts isn't mandatory, but if you have complex rewrites, it's highly recommended.

Step 1: Audit next.config.js

Identify your rewrites.

// Old way
rewrites() {
  return [{ source: '/api/:path*', destination: 'http://backend/:path*' }]
}

Step 2: Create src/proxy.ts

Create the file and implement the fetching logic.

Step 3: Remove the Config Rewrite

Delete the entry from next.config.js. If you leave both, the config rewrite might take precedence or cause conflicts depending on the specificity.

Step 4: Handle Cookies

Remember that fetch in proxy.ts doesn't automatically forward cookies unless you tell it to.

const cookieHeader = req.headers.get('cookie');
if (cookieHeader) {
  headers.set('cookie', cookieHeader);
}

Note: This is a common "gotcha." Next.js 16 might add a helper for this, but always verify your auth flow.
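One way I like to make the forwarding explicit is an allow-list helper, so it's obvious at a glance which request headers cross the boundary. The header list below is an assumption — extend it to match your auth flow.

```typescript
// Hypothetical allow-list of request headers to forward upstream.
// Everything not listed (internal headers, hop-by-hop headers) is dropped.
const FORWARDED_HEADERS = ['cookie', 'accept', 'accept-language', 'content-type'];

function pickForwardHeaders(incoming: Headers): Headers {
  const out = new Headers();
  for (const name of FORWARDED_HEADERS) {
    const value = incoming.get(name);
    if (value !== null) out.set(name, value);
  }
  return out;
}
```

An allow-list fails closed: a new internal header added elsewhere in your stack never leaks upstream by accident.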

Potential Challenges and Best Practices

It’s not all sunshine and rainbows. There are architectural trade-offs to consider here.

Challenge 1: The "Double Hop" Latency

Every request hitting proxy.ts adds a hop. Client -> Next.js Edge -> Upstream Service -> Next.js Edge -> Client. While Next.js is fast, this adds latency. Best Practice: Only proxy what you must. If a request can go directly to a service (e.g., a public read-only API with CORS enabled), let it go direct. Use proxy.ts for authenticated or complex requests.

Challenge 2: Cold Starts

If you deploy proxy.ts to a Serverless environment (like Vercel or AWS Lambda), complex logic or heavy imports can increase cold start times. Best Practice: Keep proxy.ts lean. Don't import your entire database ORM here. It should be a traffic cop, not a database admin.

Challenge 3: Streaming Complexity

While Next.js 16 handles streaming well, debugging a broken stream can be a nightmare. If the upstream server closes the connection abruptly, your proxy needs to handle that error gracefully so the client doesn't hang. Best Practice: Always wrap your fetches in try/catch blocks and implement timeouts. fetch defaults to no timeout in many environments; you don't want a request hanging for 2 minutes.
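A minimal sketch of that timeout pattern, using AbortSignal.timeout (available in Node 18+ and modern edge runtimes). Mapping every failure to a 504 is a simplification — a production proxy might distinguish timeouts from connection errors.

```typescript
// Sketch: never let an upstream call hang indefinitely.
async function fetchWithTimeout(url: string | URL, timeoutMs: number, init: RequestInit = {}): Promise<Response> {
  try {
    return await fetch(url, { ...init, signal: AbortSignal.timeout(timeoutMs) });
  } catch {
    // Covers both timeouts and connection failures; collapsing them to
    // 504 is a design choice, not the only reasonable one.
    return new Response('Gateway Timeout', { status: 504 });
  }
}
```

Pick the timeout per route: a CMS read might tolerate 10 seconds, a payment confirmation should fail much faster.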

The Future of Next.js with proxy.ts

This feature signals a maturing of the React ecosystem. We are moving away from "React is just a view layer" to "Next.js is a full-stack application framework."

I predict that in future minor versions (16.1, 16.2), we will see:

  • Middleware Integration: Tighter coupling between middleware.ts (auth) and proxy.ts (routing), perhaps passing context objects between them.
  • Plugin Ecosystem: NPM packages that export proxy.ts handlers. Imagine installing a package like @auth0/nextjs-proxy that automatically handles token rotation and proxying with zero boilerplate.
  • Observability: Built-in OpenTelemetry traces that show the span of the proxy request separate from the upstream duration.

Conclusion: Embracing the Next Generation of Web Development

Next.js 16 and proxy.ts represent a significant leap forward in developer ergonomics and architectural capability. It takes the messy, imperative glue code we used to write in custom servers and standardizes it into a declarative, type-safe, and performant primitive.

For the junior dev, it simplifies how to connect to an API. For the senior architect, it opens up powerful patterns for micro-frontends, legacy strangulation, and edge-computed security.

If you haven't tried Next.js 16 yet, spin it up. Delete your rewrites array. Create a proxy.ts. Once you feel the power of full programmatic control over your traffic, you won't go back to static config files.

The web is dynamic. Your proxy should be too.


Frequently Asked Questions

Is proxy.ts a replacement for Middleware? No. Middleware is best suited for routing decisions (redirects) and protecting routes (checking for auth cookies) before a request is processed. proxy.ts is specifically for handling the actual fetching and forwarding of requests to upstream services, allowing for body modification and response transformation.

Can I use proxy.ts with the App Router? Absolutely. proxy.ts is agnostic to the Pages or App router. It sits at the network entry level of the Next.js server, handling requests before they even reach your React components or Route Handlers.

Does this work on self-hosted Next.js? Yes. While Vercel optimizes the edge deployment, proxy.ts works perfectly in a Docker container or a standard Node.js server deployment. The streaming capabilities rely on the underlying Node.js or Web Standards implementation, which Next.js polyfills where necessary.

How does this impact SEO? For API routes, there is no direct SEO impact. However, if you use proxy.ts to aggregate data for server-side rendering, the performance gains (connection pooling) can improve your Core Web Vitals (specifically TTFB), which is a ranking factor.
