LLM Completions
JSON Lines
LLM completions can be streamed in the JSON Lines format, a convenient way to send a sequence of JSON objects over HTTP. On the backend, use generator functions to yield data as it becomes available. With the OpenAI API (and other completion APIs that return an async-iterable stream), you can delegate to that stream directly using the yield* syntax.
Learn more about the JSONLines Response.
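To illustrate the mechanism with a minimal sketch (the controller and route names here are hypothetical), any static generator method on a controller streams one JSON line per yielded value:
import { post, prefix } from 'vovk';

@prefix('hello')
export default class HelloController {
  @post('stream')
  static async *streamGreetings() {
    // Each yielded object is serialized and sent to the client as one JSON line
    for (const name of ['Alice', 'Bob']) {
      yield { greeting: `Hello, ${name}!` };
    }
  }
}
The same pattern applies to LLM streams: the controller below delegates an OpenAI completion stream to the client chunk by chunk.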
import { post, prefix, operation, type VovkRequest } from 'vovk';
import OpenAI from 'openai';
@prefix('openai')
export default class OpenAiController {
  @operation({
    summary: 'Create a chat completion',
  })
  @post('chat')
  static async *createChatCompletion(
    req: VovkRequest<{ messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[] }>
  ) {
    const { messages } = await req.json();
    const openai = new OpenAI();

    yield* await openai.chat.completions.create({
      messages,
      model: 'gpt-5-nano',
      stream: true,
    });
  }
}
On the client, you can consume the stream with a disposable async iterator:
// ...
using completion = await OpenAiRPC.createChatCompletion({
  body: { messages: [...messages, userMessage] },
});

for await (const part of completion) {
  // ...
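  // A sketch of what each part contains: because the server delegates the
  // OpenAI stream with yield*, `part` arrives as a raw
  // OpenAI.Chat.Completions.ChatCompletionChunk, so the text delta can be
  // read with `part.choices[0]?.delta?.content ?? ''`.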
}
See the live example for details.
Vercel AI SDK
The AI SDK is a TypeScript toolkit designed to help developers build AI-powered applications with React, Next.js, Vue, Svelte, Node.js, and more. Read more about the Vercel AI SDK.
Vovk.ts supports all built-in Next.js features and therefore works seamlessly with the Vercel AI SDK. Simply return the Response produced by the stream helper (for example, toUIMessageStreamResponse); no additional wiring is required.
import { post, prefix, operation, type VovkRequest } from 'vovk';
import { streamText, convertToModelMessages, type UIMessage } from 'ai';
import { openai } from '@ai-sdk/openai';
@prefix('ai-sdk')
export default class AiSdkController {
  @operation({
    summary: 'Vercel AI SDK',
  })
  @post('chat')
  static async chat(req: VovkRequest<{ messages: UIMessage[] }>) {
    const { messages } = await req.json();

    return streamText({
      model: openai('gpt-5-nano'),
      system: 'You are a helpful assistant.',
      messages: convertToModelMessages(messages),
    }).toUIMessageStreamResponse();
  }
}
On the client, use the @ai-sdk/react package to call the endpoint and build a chat interface following the SDK’s recommended patterns.
'use client';

import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';
import { useState } from 'react';

export default function Page() {
  const [input, setInput] = useState('');
  const { messages, sendMessage, error, status } = useChat({
    transport: new DefaultChatTransport({
      api: '/api/ai-sdk/chat',
    }),
  });

  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    if (input.trim()) {
      sendMessage({ text: input });
      setInput('');
    }
  };

  return (
    <form onSubmit={handleSubmit}>
      {messages.map((message) => (
        <div key={message.id}>
          {message.role === 'assistant' ? '🤖' : '👤'}{' '}
          {message.parts.map((part, partIndex) => (
            <span key={partIndex}>{part.type === 'text' ? part.text : ''}</span>
          ))}
        </div>
      ))}
      {error && <div>❌ {error.message}</div>}
      <div className="input-group">
        <input type="text" placeholder="Send a message..." value={input} onChange={(e) => setInput(e.target.value)} />
        <button>Send</button>
      </div>
    </form>
  );
}
See the live example for details.
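The status value returned by useChat is unused in this minimal example, but it can be used to gate the controls while a response is streaming. A small sketch, assuming the AI SDK's standard status values ('submitted', 'streaming', 'ready', 'error'):
{/* Sketch: keep the controls disabled until the previous response finishes */}
<div className="input-group">
  <input
    type="text"
    placeholder="Send a message..."
    value={input}
    onChange={(e) => setInput(e.target.value)}
    disabled={status !== 'ready'}
  />
  <button disabled={status !== 'ready'}>Send</button>
</div>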