
LLM completions

JSON Lines

LLM completions can be streamed using the JSON Lines format, a convenient way to send a stream of JSON objects over HTTP. On the back-end, streaming is implemented with generator functions, which let you yield data as it becomes available. Read more about streaming with Vovk.ts.
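For illustration, each item in the stream is serialized as a single JSON object on its own line. A response body from the endpoint below might look roughly like this (the exact chunk shape depends on the model provider):

```
{"choices":[{"delta":{"content":"Hello"}}]}
{"choices":[{"delta":{"content":" world"}}]}
{"choices":[{"delta":{"content":"!"}}]}
```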

src/modules/llm/LlmController.ts
```ts
import { type VovkRequest, post, prefix, operation } from 'vovk';
import OpenAI from 'openai';

@prefix('openai')
export default class OpenAiController {
  @operation({
    summary: 'Create a chat completion',
    description: 'Create a chat completion using OpenAI and yield the response',
  })
  @post('chat')
  static async *createChatCompletion(
    req: VovkRequest<{ messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[] }>
  ) {
    const { messages } = await req.json();
    const openai = new OpenAI();

    yield* await openai.chat.completions.create({
      messages,
      model: 'gpt-5-nano',
      stream: true,
    });
  }
}
```
```ts
// ...
using completion = await OpenAiRPC.createChatCompletion({
  body: { messages: [...messages, userMessage] },
});
// ...
```
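Since the snippet above declares `completion` with `using`, the returned stream is disposable and is cleaned up when it goes out of scope. A minimal sketch of consuming it with `for await` (the iteration API is an assumption about the generated client; `messages` is the chat history collected by the UI):

```ts
// a minimal sketch, assuming `OpenAiRPC` is the generated RPC client
// and `messages` is an array of chat messages built by the UI
using completion = await OpenAiRPC.createChatCompletion({
  body: { messages },
});

let text = '';

for await (const chunk of completion) {
  // each JSON Lines item is an OpenAI streaming chunk; append its delta to the text
  text += chunk.choices[0]?.delta?.content ?? '';
}
```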

See the example for more details.

Vercel AI SDK

The AI SDK is the TypeScript toolkit designed to help developers build AI-powered applications with React, Next.js, Vue, Svelte, Node.js, and more.

Read more about the Vercel AI SDK.

Vovk.ts supports every built-in feature of Next.js, so it can be used with the Vercel AI SDK with no additional changes: the controller simply returns the Response object produced by the toUIMessageStreamResponse function.

src/modules/ai-sdk/AiSdkController.ts
```ts
import { post, prefix, operation, HttpException, HttpStatus, type VovkRequest } from 'vovk';
import { streamText, convertToModelMessages, type UIMessage } from 'ai';
import { openai } from '@ai-sdk/openai';

@prefix('ai-sdk')
export default class AiSdkController {
  @operation({
    summary: 'Vercel AI SDK',
    description:
      'Uses [@ai-sdk/openai](https://www.npmjs.com/package/@ai-sdk/openai) and ai packages to chat with an AI model',
  })
  @post('chat')
  static async chat(req: VovkRequest<{ messages: UIMessage[] }>) {
    const { messages } = await req.json();
    const LIMIT = 5;

    if (messages.filter(({ role }) => role === 'user').length > LIMIT) {
      throw new HttpException(HttpStatus.BAD_REQUEST, `You can only send ${LIMIT} messages at a time`);
    }

    return streamText({
      model: openai('gpt-5-nano'),
      system: 'You are a helpful assistant.',
      messages: convertToModelMessages(messages),
    }).toUIMessageStreamResponse();
  }
}
```

On the client side, you can use the @ai-sdk/react package to interact with the endpoint and build a chat interface.

```tsx
'use client';
import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';
import { useState } from 'react';

export default function Page() {
  const [input, setInput] = useState('');
  const { messages, sendMessage, error, status } = useChat({
    transport: new DefaultChatTransport({
      api: '/api/ai-sdk/chat',
    }),
  });

  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    if (input.trim()) {
      sendMessage({ text: input });
      setInput('');
    }
  };

  return (
    <form onSubmit={handleSubmit}>
      {messages.map((message) => (
        <div key={message.id}>
          {message.role === 'assistant' ? '🤖' : '👤'}{' '}
          {message.parts.map((part, partIndex) => (
            <span key={partIndex}>{part.type === 'text' ? part.text : ''}</span>
          ))}
        </div>
      ))}
      {error && <div>❌ {error.message}</div>}
      <div className="input-group">
        <input
          type="text"
          placeholder="Send a message..."
          value={input}
          onChange={(e) => setInput(e.target.value)}
        />
        <button>Send</button>
      </div>
    </form>
  );
}
```
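The status value destructured from useChat is not used in the example above. One possible refinement (an assumption, not part of the original example) is to disable the submit button while a response is in flight:

```tsx
// `status` is 'submitted' or 'streaming' while a response is being generated
<button disabled={status !== 'ready'}>Send</button>
```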