Text Chat AI Interface
In the previous articles, we set up the backend and frontend to automatically synchronize component state with backend data, independent of the data-fetching method, while the user interacts with the app via UI elements. On this page we’re going to make the UI AI-powered, allowing users to interact with the application using natural language.
We’re going to set up a text AI chat with the AI SDK, adding function-calling capabilities and deriving AI tools from the backend controllers via the deriveTools function.
Backend Setup
For the backend setup, we need to create a procedure powered by the AI SDK, adding tools and stopWhen options to the streamText function.
Because the procedures already follow the rules of locally called procedures—their handlers use only the vovk property of the request, for example async ({ vovk }) => UserService.createUser(await vovk.body()) (see the API Endpoints page)—we can use the deriveTools function to create AI tools from the controllers and call them in the current backend context without performing HTTP requests.
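For reference, a procedure that satisfies this constraint could look like the sketch below. The route path, body type, and UserService are illustrative stand-ins; the real controllers from the previous articles also attach validation schemas, which is presumably where the tool parameters destructured further down come from.
import { post, prefix, type VovkRequest } from "vovk";
import UserService from "./UserService"; // hypothetical service module

@prefix("users")
export default class UserController {
  @post("create-user")
  // The handler reads its input exclusively through the `vovk` property,
  // so deriveTools can invoke it in-process as well as over HTTP.
  static async createUser({ vovk }: VovkRequest<{ fullName: string; email: string }>) {
    return UserService.createUser(await vovk.body());
  }
}
With that constraint satisfied, the AI procedure itself looks like this: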
import { deriveTools, post, prefix, operation, type VovkRequest } from "vovk";
import {
convertToModelMessages,
jsonSchema,
stepCountIs,
streamText,
tool,
type UIMessage,
} from "ai";
import { openai } from "@ai-sdk/openai";
import { sessionGuard } from "@/decorators/sessionGuard";
import UserController from "../user/UserController";
import TaskController from "../task/TaskController";
@prefix("ai-sdk")
export default class AiSdkController {
@operation({
summary: "Function Calling",
description:
"Uses [@ai-sdk/openai](https://www.npmjs.com/package/@ai-sdk/openai) and ai packages to call UserController and TaskController functions based on the provided messages.",
})
@post("function-calling")
@sessionGuard()
static async functionCalling(req: VovkRequest<{ messages: UIMessage[] }>) {
const { messages } = await req.json();
const { tools } = deriveTools({
modules: {
UserController,
TaskController,
},
});
return streamText({
model: openai("gpt-5"),
system: "You execute functions sequentially, one by one.",
messages: await convertToModelMessages(messages),
tools: Object.fromEntries(
tools.map(({ name, execute, description, parameters }) => [
name,
tool({
execute,
description,
inputSchema: jsonSchema(parameters),
}),
]),
),
stopWhen: stepCountIs(16),
onError: (e) => console.error("streamText error", e),
onFinish: ({ finishReason, toolCalls }) => {
if (finishReason === "tool-calls") {
console.log("Tool calls finished", toolCalls);
}
},
}).toUIMessageStreamResponse();
}
}
The code above is fetched from the GitHub repository.
The resulting endpoint is served at /api/ai-sdk/function-calling.
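For a quick manual check, you can call the endpoint directly. The request body matches the { messages: UIMessage[] } shape the handler expects; the message content below is just an example, and because of the sessionGuard decorator the request must carry a valid session (for instance, when run from the browser console of the logged-in app).
// Manual smoke test of the streaming endpoint.
const response = await fetch("/api/ai-sdk/function-calling", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    messages: [
      {
        id: "msg-1",
        role: "user",
        parts: [{ type: "text", text: "Create a task named Demo" }],
      },
    ],
  }),
});
// The response is a UI message stream; in the app it is consumed by useChat rather than read manually.
console.log(response.ok, response.headers.get("content-type"));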
Frontend Setup
On the frontend we’re going to use the AI SDK, represented by the ai and @ai-sdk/react packages, as well as the AI Elements library. AI Elements provides pre-built React components for AI-powered user interfaces and is built on top of shadcn/ui.
'use client';
// ...
import { useChat } from '@ai-sdk/react';
import { useState } from 'react';
import { DefaultChatTransport } from 'ai';
import { AiSdkRPC } from 'vovk-client';
import { Conversation, ConversationContent, ConversationEmptyState } from '@/components/ai-elements/conversation';
import { useRegistry } from '@/hooks/useRegistry';
import useParseSDKToolCallOutputs from '@/hooks/useParseSDKToolCallOutputs';
export function ExpandableChatDemo() {
const [input, setInput] = useState('');
const { messages, sendMessage, status } = useChat({
transport: new DefaultChatTransport({
api: AiSdkRPC.functionCalling.getURL(), // or "/api/ai-sdk/function-calling",
}),
onToolCall: (toolCall) => {
console.log('Tool call initiated:', toolCall);
},
});
const handleSubmit = (e: React.FormEvent) => {
// ...
};
useParseSDKToolCallOutputs(messages);
return (
// ...
<Conversation>
<ConversationContent>{/* ... */}</ConversationContent>
</Conversation>
// ...
);
}
Check the full code for the component here.
The key part of the code is the useParseSDKToolCallOutputs hook, which extracts tool call outputs from assistant messages and passes them to the registry’s parse method. The registry processes the results and triggers UI updates accordingly. The hook also ensures that each tool call output is parsed only once by keeping track of parsed tool call IDs in a Set.
import useRegistry from "@/hooks/useRegistry";
import { ToolUIPart, UIMessage } from "ai";
import { useEffect, useRef } from "react";
export default function useParseSDKToolCallOutputs(messages: UIMessage[]) {
const parsedToolCallIdsSetRef = useRef<Set<string>>(new Set());
useEffect(() => {
const partsToParse = messages.flatMap((msg) =>
msg.parts.filter((part) => {
return (
msg.role === "assistant" &&
part.type.startsWith("tool-") &&
(part as ToolUIPart).state === "output-available" &&
"toolCallId" in part &&
!parsedToolCallIdsSetRef.current.has(part.toolCallId)
);
}),
) as ToolUIPart[];
partsToParse.forEach((part) =>
parsedToolCallIdsSetRef.current.add(part.toolCallId),
);
if (partsToParse.length) {
useRegistry.getState().parse(partsToParse.map((part) => part.output));
}
}, [messages]);
}
The code above is fetched from the GitHub repository.
Without optimizations, the code can be reduced to this small snippet:
// ...
useEffect(() => {
useRegistry.getState().parse(messages);
}, [messages]);
// ...
That’s it: now you have a fully functional AI text chat interface that can call your backend functions and update the UI based on the results. The procedures return updated data that includes id and entityType fields, as well as the __isDeleted field for soft deletions.
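For context, a tool call output that the registry’s parse method consumes could look roughly like this (a hypothetical shape; besides id, entityType, and __isDeleted, the fields depend entirely on what your procedures return):
// Hypothetical output of a derived tool call after a soft deletion.
// Only id, entityType, and __isDeleted are assumed to matter to the registry;
// the remaining fields are whatever the procedure returns.
const exampleToolOutput = {
  id: "task_42",
  entityType: "task",
  title: "Demo",
  __isDeleted: true, // soft deletion flag, so the UI can drop the entity
};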