Create an LLM chat
JSONLines
LLM completions can be streamed using the JSON Lines format, a convenient way to send a stream of JSON objects over HTTP. On the back end, streaming is implemented with Generator functions, which allow data to be yielded as it becomes available.
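For illustration, here is a minimal sketch of such a streaming endpoint, assuming a controller method declared as an async generator has its yields serialized as JSON Lines; the module path, StreamController class, and hard-coded token list are hypothetical:
src/modules/stream/StreamController.ts
import { post, prefix, type VovkRequest } from 'vovk';

@prefix('stream')
export default class StreamController {
  // Each yielded object becomes one line of the JSON Lines response,
  // flushed to the client as soon as it is produced.
  @post('tokens')
  static async *streamTokens(req: VovkRequest<{ interval: number }>) {
    const { interval } = await req.json();

    for (const token of ['Hello', ',', ' world', '!']) {
      yield { token };
      await new Promise((resolve) => setTimeout(resolve, interval));
    }
  }
}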
Vercel AI SDK
The AI SDK is the TypeScript toolkit designed to help developers build AI-powered applications with React, Next.js, Vue, Svelte, Node.js, and more.
Read more about the Vercel AI SDK.
Vovk.ts supports every built-in feature of Next.js, so it can be used with the Vercel AI SDK by returning the Response object produced by the toDataStreamResponse function, with no additional changes.
src/modules/ai-sdk/AiSdkController.ts
import { post, prefix, type VovkRequest } from 'vovk';
import { streamText, type CoreMessage } from 'ai';
import { openai } from '@ai-sdk/openai';

@prefix('ai-sdk')
export default class AiSdkController {
  @post('chat')
  static async chat(req: VovkRequest<{ messages: CoreMessage[] }>) {
    const { messages } = await req.json();

    return streamText({
      model: openai('gpt-4o-mini'),
      system: 'You are a helpful assistant.',
      messages,
    }).toDataStreamResponse();
  }
}
On the client side, you can use the ai/react package to interact with the endpoint and build a chat interface.
'use client';
import { useChat } from 'ai/react';

export default function Page() {
  const { messages, input, handleSubmit, handleInputChange, isLoading, error } = useChat({
    // assuming that the "entryPoint" config option is "api" and the controller is used at the root segment
    api: '/api/ai-sdk/chat',
  });

  return (
    <form onSubmit={handleSubmit}>
      {messages.map((message, index) => (
        <div key={index}>
          {message.role === 'assistant' ? '🤖' : '👤'} {(message.content as string) || '...'}
        </div>
      ))}
      {error && <div>❌ {error.message}</div>}
      <div className="input-group">
        <input type="text" placeholder="Send a message..." value={input} onChange={handleInputChange} />
        <button disabled={isLoading}>Send</button>
      </div>
    </form>
  );
}