Deepak Kumar

Next.js · AI/ML · OpenAI · Vercel · Streaming · Edge Functions

Integrating AI into Next.js Apps: OpenAI, Streaming & Edge Functions

A practical guide to adding AI-powered features to Next.js applications using OpenAI API, Vercel AI SDK, streaming responses, and edge functions for low-latency inference.

9 min read

Adding AI to your Next.js app doesn't have to be complex. Here's my battle-tested approach from building AI features at India Today Group — including streaming responses, edge deployment, and error handling.

1. Setup: Vercel AI SDK

The Vercel AI SDK makes streaming AI responses trivial in Next.js:

npm install ai openai
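The SDK picks the key up from the environment, so put it in `.env.local`, which Next.js loads automatically and which should never be committed (the key value below is a placeholder):

```shell
# .env.local — loaded by Next.js, keep out of version control
OPENAI_API_KEY=sk-...
```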

2. API Route with Streaming

// app/api/chat/route.ts
import { OpenAI } from 'openai';
import { OpenAIStream, StreamingTextResponse } from 'ai';

export const runtime = 'edge'; // Deploy to edge for low latency

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function POST(req: Request) {
  const { messages } = await req.json();

  const response = await openai.chat.completions.create({
    model: 'gpt-4-turbo-preview',
    stream: true,
    messages,
  });

  const stream = OpenAIStream(response);
  return new StreamingTextResponse(stream);
}

3. Client-Side Hook

// components/Chat.tsx
'use client';
import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat();

  return (
    <div>
      {messages.map(m => (
        <div key={m.id} className={m.role === 'user' ? 'user-msg' : 'ai-msg'}>
          {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} placeholder="Ask anything..." />
        <button type="submit" disabled={isLoading}>Send</button>
      </form>
    </div>
  );
}

4. AI-Powered Article Summarization

One of our most popular features — editors paste a 2000-word article and get a 3-line summary instantly:

const summary = await openai.chat.completions.create({
  model: 'gpt-4-turbo-preview',
  messages: [{
    role: 'system',
    content: 'Summarize the following news article in exactly 3 concise sentences for an Indian audience.'
  }, {
    role: 'user',
    content: articleText
  }],
  max_tokens: 200,
  temperature: 0.3, // Low temperature for factual accuracy
});
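Because this call isn't streamed, the text comes back on the completion object itself. A small helper to read it safely might look like this (a sketch: `extractSummary` is an illustrative name, but `choices[0].message.content` is the actual openai v4 response shape):

```typescript
// Minimal shape of the fields we read from a non-streaming chat
// completion (mirrors the openai v4 SDK response).
interface ChatCompletionLike {
  choices: { message: { content: string | null } }[];
}

// Hypothetical helper (not from the article): pull the summary text out,
// falling back to an empty string if the response has no choices.
function extractSummary(completion: ChatCompletionLike): string {
  return completion.choices[0]?.message?.content?.trim() ?? '';
}
```

In the route you would then return `extractSummary(summary)` to the editor-facing UI.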

5. Edge Functions for Speed

By deploying AI routes to Vercel Edge Functions, we reduced time-to-first-token from ~800ms to ~200ms. A single `export const runtime = 'edge'` in the route file is all it takes.

6. Error Handling & Rate Limiting

In production, always handle three failure modes: API rate limits (retry with exponential backoff on 429s), context-length limits (truncate or summarize older messages before they exceed the model's window), and quota or outage errors (fall back from GPT-4 to GPT-3.5).
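The backoff part of that pattern can be sketched as a small generic wrapper (an illustrative sketch, not part of the Vercel AI SDK; `withBackoff` is a made-up name):

```typescript
// Hypothetical helper: retry an async call with exponential backoff,
// rethrowing the last error once the retries are exhausted.
async function withBackoff<T>(
  fn: () => Promise<T>,
  retries = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Wait 500ms, 1s, 2s, ... before the next attempt.
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

In the route from step 2 you would wrap the `openai.chat.completions.create` call in `withBackoff`, and in the final catch retry once more with `model: 'gpt-3.5-turbo'` as the fallback.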

Key Takeaway

AI integration in Next.js is now a first-class experience. With streaming, edge functions, and the Vercel AI SDK, you can ship AI features that feel instant — not like waiting for a loading spinner.

Deepak Kumar

Sr Software Engineer at India Today Group

MERN Stack · Generative AI · Next.js · AI/ML
