
LLM Completions

JSON Lines

LLM completions can be streamed in the JSON Lines format, a convenient way to send a sequence of JSON objects over HTTP. On the backend, use generator functions to yield data as it becomes available. With the OpenAI API (and other completion APIs), you can delegate to the iterable using the `yield*` syntax.

Learn more about the JSONLines Response.
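To illustrate the format itself, here is a minimal sketch of how JSON Lines encodes a stream: each line is one standalone JSON object, so the client can parse chunks as soon as they arrive. The `Chunk` shape below is hypothetical, for illustration only, and is not Vovk's actual wire format.

```typescript
// Hypothetical chunk shape used only for this illustration.
type Chunk = { delta: string };

// Serialize a sequence of objects as JSON Lines: one JSON object per line.
function toJsonLines(chunks: Chunk[]): string {
  return chunks.map((c) => JSON.stringify(c)).join('\n') + '\n';
}

// Parse a JSON Lines payload back into objects, skipping empty lines.
function fromJsonLines(payload: string): Chunk[] {
  return payload
    .split('\n')
    .filter((line) => line.trim() !== '')
    .map((line) => JSON.parse(line) as Chunk);
}

const body = toJsonLines([{ delta: 'Hello' }, { delta: ' world' }]);
const parsed = fromJsonLines(body);
```

Because every line is independently valid JSON, a consumer can feed each received line straight into `JSON.parse` without buffering the whole response.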

src/modules/llm/LlmController.ts
```ts
import { post, prefix, operation, type VovkRequest } from 'vovk';
import OpenAI from 'openai';

@prefix('openai')
export default class OpenAiController {
  @operation({
    summary: 'Create a chat completion',
  })
  @post('chat')
  static async *createChatCompletion(
    req: VovkRequest<{ messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[] }>
  ) {
    const { messages } = await req.json();
    const openai = new OpenAI();
    yield* await openai.chat.completions.create({
      messages,
      model: 'gpt-5-nano',
      stream: true,
    });
  }
}
```

On the client, you can consume the stream with a disposable async iterator:

```ts
// ...
using completion = await OpenAiRPC.createChatCompletion({
  body: { messages: [...messages, userMessage] },
});

for await (const part of completion) {
  // ...
}
```
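Inside the loop, each part is an OpenAI streaming chunk whose text lives in `choices[0].delta.content`. A common pattern is to accumulate the deltas into the full message. The sketch below stands in a mock async generator for the RPC call, since the real one requires a running server:

```typescript
// Minimal shape of an OpenAI streaming chunk (only the fields used here).
type ChatChunk = { choices: { delta: { content?: string } }[] };

// Mock stream standing in for the OpenAiRPC.createChatCompletion call.
async function* mockCompletion(): AsyncGenerator<ChatChunk> {
  yield { choices: [{ delta: { content: 'Hel' } }] };
  yield { choices: [{ delta: { content: 'lo!' } }] };
  yield { choices: [{ delta: {} }] }; // final chunk may carry no content
}

// Accumulate streamed text the same way you would inside the loop above.
async function collect(): Promise<string> {
  let text = '';
  for await (const part of mockCompletion()) {
    text += part.choices[0]?.delta?.content ?? '';
  }
  return text;
}
```

The `?? ''` fallback matters: the final chunk of an OpenAI stream typically carries no `content`, so treating a missing delta as an empty string keeps the accumulated text intact.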

View live example on examples.vovk.dev »

Vercel AI SDK

The AI SDK is the TypeScript toolkit designed to help developers build AI-powered applications with React, Next.js, Vue, Svelte, Node.js, and more. Read more about the Vercel AI SDK.

Vovk.ts supports all built-in Next.js features and therefore works seamlessly with the Vercel AI SDK. Simply return the Response produced by the stream helper (for example, toUIMessageStreamResponse); no additional wiring required.

src/modules/ai-sdk/AiSdkController.ts
```ts
import { post, prefix, operation, type VovkRequest } from 'vovk';
import { streamText, convertToModelMessages, type UIMessage } from 'ai';
import { openai } from '@ai-sdk/openai';

@prefix('ai-sdk')
export default class AiSdkController {
  @operation({
    summary: 'Vercel AI SDK',
  })
  @post('chat')
  static async chat(req: VovkRequest<{ messages: UIMessage[] }>) {
    const { messages } = await req.json();
    return streamText({
      model: openai('gpt-5-nano'),
      system: 'You are a helpful assistant.',
      messages: convertToModelMessages(messages),
    }).toUIMessageStreamResponse();
  }
}
```

On the client, use the @ai-sdk/react package to call the endpoint and build a chat interface following the SDK's recommended patterns.

```tsx
'use client';

import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';
import { useState } from 'react';

export default function Page() {
  const [input, setInput] = useState('');
  const { messages, sendMessage, error, status } = useChat({
    transport: new DefaultChatTransport({
      api: '/api/ai-sdk/chat',
    }),
  });

  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    if (input.trim()) {
      sendMessage({ text: input });
      setInput('');
    }
  };

  return (
    <form onSubmit={handleSubmit}>
      {messages.map((message) => (
        <div key={message.id}>
          {message.role === 'assistant' ? '🤖' : '👤'}{' '}
          {message.parts.map((part, partIndex) => (
            <span key={partIndex}>{part.type === 'text' ? part.text : ''}</span>
          ))}
        </div>
      ))}
      {error && <div>❌ {error.message}</div>}
      <div className="input-group">
        <input
          type="text"
          placeholder="Send a message..."
          value={input}
          onChange={(e) => setInput(e.target.value)}
        />
        <button>Send</button>
      </div>
    </form>
  );
}
```

View live example on examples.vovk.dev »
