Turn Your Back-End into an AI Agent
Vovk.ts provides LLM function calling capabilities, turning route handlers into functions callable by AI. This enables more interactive user experiences, such as a text chat interface, a real-time voice interface, or MCP.
While many AI libraries and APIs focus on how to use function calling, Vovk.ts focuses on what should be callable.
This feature is implemented by a zero-dependency function, createLLMTools, imported from "vovk". It produces a tools array that can be mapped to function-calling inputs for any LLM library. Each tool, converted from a controller or RPC handler, implements the VovkLLMTool interface with the following properties:
- type: the string literal "function".
- name: the function name in the form ${moduleName}_${handlerName} (module name + underscore + method name).
- description: a description derived from OpenAPI metadata, concatenating the summary and description fields, or overridden by a custom x-tool-description. If neither is provided, createLLMTools ignores the function. In other words, using the @operation decorator on controller methods is required for this feature.
- parameters: a JSON Schema object describing body, query, and params, generated automatically by the validation library.
- execute: a JavaScript function to be called when the LLM invokes the tool.
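For reference, the shape of each tool can be sketched as a TypeScript interface. This is an illustrative sketch only; the parameters and execute types are assumptions based on the description above and the usage later in this guide, and the authoritative VovkLLMTool type is exported from "vovk":
// Illustrative sketch; see the VovkLLMTool type exported from "vovk" for the real definition.
interface VovkLLMToolSketch {
  type: 'function';
  // `${moduleName}_${handlerName}`, e.g. "UserController_getUser"
  name: string;
  // summary + description from @operation, or x-tool-description if provided
  description: string;
  // JSON Schema describing body, query, and params
  parameters: Record<string, unknown>;
  // called when the LLM invokes the tool
  execute: (args: unknown, options: { toolCallId?: string }) => Promise<unknown>;
}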
createLLMTools accepts an options object with:
- modules: a record of modules (explained below).
- caller: optional. A function that defines how an RPC method or a controller's callable method should run to produce execute. Intended for advanced scenarios.
- onExecute: optional. Called when execute completes successfully.
- onError: optional. Called when an execute call fails.
- meta: metadata passed to each controller/RPC method. Can include any data you want available on the back end.
- resultFormatter: optional. Formats the result returned to the LLM. By default, returns the raw result. Use "mcp" to format according to MCP.
Each modules entry is either an RPC module or a controller. Controller methods are functions with pre-populated properties, such as schema, which implements VovkHandlerSchema and includes the operationObject populated by the @operation decorator.
createLLMTools distinguishes RPC module methods from controller methods by the isRPC property. If isRPC: true is present, the method is treated as an RPC method. Otherwise, it is treated as a controller method that exposes fn (see callable controller methods), mirroring the RPC function signature to allow direct invocation without HTTP.
import { createLLMTools } from 'vovk';
import { PostRPC } from 'vovk-client';
import UserController from '../user/UserController';
const { tools } = createLLMTools({
modules: {
PostRPC,
UserController,
},
onExecute: (tool, result) => {
console.log(`Tool ${tool.name} executed successfully with result:`, result);
},
onError: (tool, error) => {
console.error(`Tool ${tool.name} execution failed with error:`, error);
},
});
In this example, PostRPC is an RPC module whose methods issue HTTP requests, while UserController is a controller exposing callable methods that can be used like regular functions outside of an HTTP request. This enables function calling on the server (for direct DB access) and on the client (via HTTP). The “client” can be a browser or any JavaScript environment that supports fetch, such as React Native, Node.js, or Edge Runtime.
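The remaining options can be combined with the same modules. Below is a minimal sketch that passes metadata to the handlers and formats results for MCP; the meta value here is an arbitrary example:
import { createLLMTools } from 'vovk';
import { PostRPC } from 'vovk-client';
import UserController from '../user/UserController';
const { tools } = createLLMTools({
  modules: {
    PostRPC,
    UserController,
  },
  // arbitrary data made available to each controller/RPC method on the back end
  meta: { source: 'ai-agent' },
  // format results according to the Model Context Protocol instead of returning them raw
  resultFormatter: 'mcp',
});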
Selecting Specific Methods
To include only certain methods from a module, use the pick/omit pattern from lodash or a similar utility.
import { createLLMTools } from 'vovk';
import { PostRPC } from 'vovk-client';
import { pick, omit } from 'lodash';
import UserController from '../user/UserController';
const { tools } = createLLMTools({
modules: {
PostRPC: pick(PostRPC, ['createPost', 'getPost']),
UserController: omit(UserController, ['deleteUser']),
},
});
The resulting tools include createPost and getPost from PostRPC, and all methods from UserController except deleteUser.
Authorizing RPC Calls
To add authorization, pass a module as a tuple: the module itself and an options object with an init extending RequestInit. This lets you add headers such as auth tokens.
import { createLLMTools } from 'vovk';
import { PostRPC } from 'vovk-client';
const { tools } = createLLMTools({
modules: {
PostRPC: [
PostRPC,
{
init: {
headers: {
Authorization: `Bearer ${process.env.AUTH_TOKEN}`,
},
},
},
],
},
});
You can combine pick/omit with the init tuple to authorize and select methods at the same time:
import { createLLMTools } from 'vovk';
import { PostRPC } from 'vovk-client';
import { pick } from 'lodash';
const { tools } = createLLMTools({
modules: {
PostRPCAuthorized: [
pick(PostRPC, ['createPost']),
{
init: {
headers: {
Authorization: `Bearer ${process.env.AUTH_TOKEN}`,
},
},
},
],
PostRPCUnauthorized: pick(PostRPC, ['getPost']),
},
});
Custom Operation Attributes
You can add custom attributes with the x-tool-* syntax to the OpenAPI operation in the @operation decorator, enabling AI-specific tooling metadata.
import { prefix, get, operation } from 'vovk';
@prefix('user')
export default class UserController {
@operation({
summary: 'Get user by ID',
description: 'Retrieves a user by their unique ID.',
'x-tool-disable': false,
'x-tool-description': 'Retrieves a user by their unique ID, including name and email. Also includes user roles and permissions that define what actions the user can perform within the system.',
'x-tool-successMessage': 'User retrieved successfully.',
'x-tool-errorMessage': 'Failed to retrieve user.',
'x-tool-includeResponse': true,
})
@get(':id')
static getUser() {
// ...
}
}
x-tool-disable
Force-disables the function for LLM function calling, even if summary, description, or x-tool-description is present.
x-tool-description
Overrides the generated function description (normally derived from summary + description).
x-tool-successMessage
MCP-specific attribute sent to the LLM when the function executes successfully.
x-tool-errorMessage
MCP-specific attribute sent to the LLM when the function execution fails.
x-tool-includeResponse
MCP-specific attribute indicating whether the function response should be included in the message sent back to the LLM after successful execution. Defaults to true.
Third-Party APIs
Vovk.ts can generate RPC modules from third-party OpenAPI specs, letting you combine your back-end with external APIs in a single agent. See the Codegen guide.
import { createLLMTools } from 'vovk';
import { GithubIssuesRPC, TaskRPC } from 'vovk-client';
const { tools } = createLLMTools({
modules: {
GithubIssuesRPC, // 3rd-party API
TaskRPC, // your own back-end API
},
});
Because the Vovk.ts CLI can serve as a standalone code generator (e.g., for NestJS), you can still use the generated RPC modules with createLLMTools.
Function Calling Example
You can implement function calling against raw AI APIs using JSON Lines, but in most cases, including this example, the Vercel AI SDK provides a simpler way to build a text chat with function calling.
Create LLM Endpoint
Create a controller and pass your RPC modules or controllers to createLLMTools as shown above.
npx vovk new controller aiSdk --empty
Paste the following into src/modules/ai-sdk/AiSdkController.ts, adjusting imports as needed:
import { createLLMTools, HttpException, HttpStatus, post, prefix, operation, type KnownAny, type VovkRequest } from 'vovk';
import { jsonSchema, streamText, tool, convertToModelMessages, type ModelMessage, type UIMessage } from 'ai';
import { openai } from '@ai-sdk/openai';
import { PostRPC } from 'vovk-client';
import UserController from '@/modules/user/UserController';
@prefix('ai-sdk')
export default class AiSdkController {
@operation({
summary: 'Vercel AI SDK with Function Calling',
description:
'Uses [@ai-sdk/openai](https://www.npmjs.com/package/@ai-sdk/openai) and ai packages to call a function',
})
@post('function-calling')
static async functionCalling(req: VovkRequest<{ messages: UIMessage[] }>) {
const { messages } = await req.json();
const { tools: llmTools } = createLLMTools({
modules: { UserController, PostRPC },
onExecute: (d) => console.log('Success', d),
onError: (e) => console.error('Error', e),
});
const tools = Object.fromEntries(
llmTools.map(({ name, execute, description, parameters }) => [
name,
tool<KnownAny, KnownAny>({
execute: async (args, { toolCallId }) => {
return execute(args, { toolCallId });
},
description,
inputSchema: jsonSchema(parameters as KnownAny),
}),
])
);
return streamText({
model: openai('gpt-5-nano'),
system:
'You are a helpful assistant. Always provide a clear confirmation message after executing any function. Explain what was done and what the results were after the user request is executed.',
messages: convertToModelMessages(messages),
tools,
onError: (e) => console.error('streamText error', e),
onFinish: ({ finishReason }) => {
if (finishReason === 'tool-calls') {
console.log('Tool calls finished');
}
},
}).toUIMessageStreamResponse();
}
}
Here, the tools are mapped to Vercel AI SDK tool instances using jsonSchema. For other Node.js libraries, you can map them differently.
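As an illustration, the same tools array could be adapted to the OpenAI Node.js SDK's chat-completions tool format. This is a hedged sketch, not part of Vovk.ts: the model name and prompt are placeholders, and the parameters cast assumes the JSON Schema produced by createLLMTools is accepted as-is:
import OpenAI from 'openai';
import { createLLMTools } from 'vovk';
import { PostRPC } from 'vovk-client';
const { tools } = createLLMTools({ modules: { PostRPC } });
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  model: 'gpt-4o-mini', // placeholder model name
  messages: [{ role: 'user', content: 'Create a post titled "Hello"' }],
  // VovkLLMTool properties map directly onto the OpenAI tool definition
  tools: tools.map(({ name, description, parameters }) => ({
    type: 'function' as const,
    function: { name, description, parameters: parameters as Record<string, unknown> },
  })),
});
// Look up each requested tool by name and run its execute function
for (const call of completion.choices[0].message.tool_calls ?? []) {
  if (call.type !== 'function') continue;
  const tool = tools.find((t) => t.name === call.function.name);
  await tool?.execute(JSON.parse(call.function.arguments), { toolCallId: call.id });
}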
Create a Front-End Component
On the front end, create a component using the useChat hook:
'use client';
import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';
import { useState } from 'react';
export default function Page() {
const [input, setInput] = useState('');
const { messages, sendMessage, error, status } = useChat({
transport: new DefaultChatTransport({
api: '/api/ai-sdk/function-calling',
}),
});
const handleSubmit = (e: React.FormEvent) => {
e.preventDefault();
if (input.trim()) {
sendMessage({ text: input });
setInput('');
}
};
return (
<form onSubmit={handleSubmit}>
{messages.map((message) => (
<div key={message.id}>
{message.role === 'assistant' ? '🤖' : '👤'}{' '}
{message.parts.map((part, partIndex) => (
<span key={partIndex}>{part.type === 'text' ? part.text : ''}</span>
))}
</div>
))}
{error && <div>❌ {error.message}</div>}
<div className="input-group">
<input type="text" placeholder="Send a message..." value={input} onChange={(e) => setInput(e.target.value)} />
<button>Send</button>
</div>
</form>
);
}
See the Vercel AI SDK docs and the LLM guide.
Roadmap
- ✨ Add a router option to createLLMTools to support hundreds of functions without hitting LLM function-calling limits. Routing can be implemented using vector search or other approaches.