Real-time polling (experimental)
In the previous article, we covered how to use the OpenAI Realtime API to deliver instant UI updates while the user interacts with the app. The LLM functions created with createLLMTools ran in the browser, where the AI performed HTTP requests to the server using RPC modules.
On this page, we explain how to use database polling to receive updates triggered by other users or third-party services (such as MCP), keeping the UI in sync with the database. To ensure the app can be deployed to any hosting provider, we avoid non-HTTP protocols such as WebSockets and instead use HTTP polling powered by JSONLines.
Redis DB as event bus
While we could poll the main Postgres database for changes, that approach is inefficient. Instead, we use Redis as an event bus. Whenever the main database changes, we write a small event to Redis. Our polling service reads these events every second and relays them to clients with the app open.
Because we use Prisma as our ORM, we rely on Prisma Extensions to hook into database operations and write events to Redis. This is where the DatabaseService mentioned in the previous article comes into play.
- The getClient method calls DatabaseEventsService.beginEmitting() to start emitting events. beginEmitting runs a setInterval that connects to Redis and periodically checks for new events. When a new event is found, it emits it via mitt.
- prisma.$extends hooks into all Prisma model operations, determines whether an operation modifies data, and if so calls await DatabaseEventsService.createChanges([change]) to persist a change entry in Redis. The change captures creates, updates, and deletions:
export type DBChange = {
  id: string;
  entityType: EntityType;
  date: string;
  type: "create" | "update" | "delete";
};

The date field indicates when the change occurred:
- For create and update operations, it uses the updatedAt DB field (Important: all write operations must set this field).
- For delete operations, it uses the current time.
The delete operation also adds an __isDeleted property. The front end checks this property to hide the deleted entity by setting enumerable: false on the entity registry item (see the previous article).
Operations like find... and count do not trigger changes and are passed through as-is.
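The exact extension lives in DatabaseService; a simplified sketch of the idea, covering only single-record operations, might look like this (the id/updatedAt resolution and the write-operation list below are illustrative assumptions):

// Simplified sketch (not the actual DatabaseService): persist a DBChange for every
// single-record write operation; batch operations (createMany etc.) are omitted here.
const WRITE_OPERATIONS = new Set(["create", "update", "upsert", "delete"]);

const extendedPrisma = prisma.$extends({
  query: {
    $allModels: {
      async $allOperations({ model, operation, args, query }) {
        // Run the original operation first
        const result = await query(args);

        if (WRITE_OPERATIONS.has(operation)) {
          // upsert is treated as an update for simplicity
          const type =
            operation === "delete" ? "delete" : operation === "create" ? "create" : "update";

          await DatabaseEventsService.createChanges([
            {
              id: (result as { id: string }).id, // assumes every model has a string id
              entityType: model as EntityType,
              // updatedAt for creates/updates, current time for deletes
              date:
                type === "delete"
                  ? new Date().toISOString()
                  : (result as { updatedAt: Date }).updatedAt.toISOString(),
              type,
            },
          ]);
        }

        return result;
      },
    },
  },
});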
In addition to beginEmitting and createChanges, DatabaseEventsService provides a connect method and an emitter (a mitt instance). These are used by the polling service (DatabasePollService, discussed next) to be notified about new events.
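Its overall shape might look like the following sketch; the Redis data layout (a simple list drained once per second) is an assumption, and the real service is more careful about connection handling:

import mitt from "mitt";
import { createClient } from "redis";
// DBChange is the type shown earlier on this page

// Simplified sketch of DatabaseEventsService: a Redis connection, a mitt emitter,
// and a setInterval loop that relays newly written changes to subscribers.
export class DatabaseEventsService {
  static emitter = mitt<{ change: DBChange }>();
  static #redis = createClient({ url: process.env.REDIS_URL });
  static #interval: NodeJS.Timeout | null = null;

  static async connect() {
    if (!this.#redis.isOpen) await this.#redis.connect();
    return this.#redis;
  }

  static async createChanges(changes: DBChange[]) {
    if (!changes.length) return;
    const redis = await this.connect();
    // Queue the change entries (the real key schema may differ)
    await redis.rPush("db-changes", changes.map((c) => JSON.stringify(c)));
  }

  static async beginEmitting() {
    await this.connect();
    this.#interval ??= setInterval(async () => {
      // Drain pending changes once per second and relay them via mitt
      let raw: string | null;
      while ((raw = await this.#redis.lPop("db-changes"))) {
        this.emitter.emit("change", JSON.parse(raw) as DBChange);
      }
    }, 1000);
  }
}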
Polling controller and service
With Redis change entries and the change emitter in place, we can implement a polling endpoint that streams updates to clients in real time. The DatabasePollController exposes a single JSONLines endpoint, and DatabasePollService uses a JSONLinesResponse instance (received from the controller) to send data to clients. The service closes the connection safely after 30 seconds, so clients should reconnect.
- When a delete DB change is emitted via DatabaseEventsService.emitter, the service sends an event with id, entityType, and __isDeleted: true. The front end uses __isDeleted to hide the entity by making it non-enumerable in the registry.
- When an update or create change is emitted, the service fetches the full entity from Postgres (since Redis stores only metadata) and sends it to clients (see the sketch below).
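A minimal sketch of the service side of this flow follows; the response object here is reduced to the two capabilities we rely on (sending objects and closing the stream), and fetchEntity stands in for whatever Prisma query loads the full row:

// Simplified sketch of DatabasePollService: subscribe to the change emitter, relay
// events to the client, and close the stream after 30 seconds.
type JSONLinesLike = { send: (data: unknown) => void; close: () => void };

// Hypothetical helper: loads the full entity from Postgres by type and id
declare function fetchEntity(entityType: EntityType, id: string): Promise<object | null>;

export class DatabasePollService {
  static async streamChanges(response: JSONLinesLike) {
    const handler = async (change: DBChange) => {
      if (change.type === "delete") {
        // Deletions: metadata plus the __isDeleted flag is enough for the front end
        response.send({ id: change.id, entityType: change.entityType, __isDeleted: true });
      } else {
        // Creates/updates: Redis stores only metadata, so load the full row from Postgres
        const entity = await fetchEntity(change.entityType, change.id);
        if (entity) response.send(entity);
      }
    };

    DatabaseEventsService.emitter.on("change", handler);
    await DatabaseEventsService.beginEmitting();

    // Close after 30 seconds; the client reconnects in its retry loop (see below)
    setTimeout(() => {
      DatabaseEventsService.emitter.off("change", handler);
      response.close();
    }, 30_000);
  }
}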
Client-side logic
On the client side (for example, in a React component), call DatabasePollRPC.poll() to receive a stream of database events. As with any JSONLines RPC method, it returns an async iterable that you can consume in a for await loop. Because the server may close the connection or a network error may occur, wrap the logic in a retry loop and avoid reconnecting on AbortError. Since the fetcher is already configured, the loop body can be empty—you do not need to handle data manually.
Besides the polling logic, the front-end code also includes an on/off toggle whose state is saved to localStorage, so the user can enable or disable polling as needed:
const [isPollingEnabled, setIsPollingEnabled] = useState(false);
const pollingAbortControllerRef = useRef<AbortController | null>(null);

// Restore the persisted toggle state on mount
useEffect(() => {
  const isEnabled = localStorage.getItem("isPollingEnabled");
  setIsPollingEnabled(isEnabled === "true");
}, []);

useEffect(() => {
  localStorage.setItem("isPollingEnabled", isPollingEnabled.toString());

  async function poll(retries = 0) {
    if (!isPollingEnabled) {
      pollingAbortControllerRef.current?.abort();
      return;
    }
    try {
      // Reconnect in a loop; the server closes each stream after 30 seconds
      while (true) {
        console.log("START POLLING");
        const iterable = await DatabasePollRPC.poll();
        pollingAbortControllerRef.current = iterable.abortController;
        for await (const iteration of iterable) {
          console.log("New DB update:", iteration);
        }
      }
    } catch (error) {
      // Retry on network errors, but not when polling was intentionally aborted
      if (
        retries < 5 &&
        (error as Error & { cause?: Error }).cause?.name !== "AbortError"
      ) {
        console.error("Polling failed, retrying...", error);
        await new Promise((resolve) => setTimeout(resolve, 2000));
        return poll(retries + 1);
      }
    }
  }

  void poll();

  return () => {
    pollingAbortControllerRef.current?.abort();
  };
}, [isPollingEnabled]);

Bonus: Telegram bot (unstable)
As an example of a third-party source of database changes, see the TelegramService and the accompanying TelegramController. It accepts text or voice messages, performs voice-to-text transcription when needed using the OpenAI Whisper API (a sketch of this step appears at the end of this section), and uses the same createLLMTools function on the server:
const { tools } = createLLMTools({
  modules: {
    UserController,
    TaskController,
  },
});

The Telegram API library is implemented with OpenAPI Mixins and used as a TelegramRPC module to call Telegram API methods.
// @ts-check
/** @type {import('vovk').VovkConfig} */
const config = {
  // ...
  outputConfig: {
    // ...
    segments: {
      telegram: {
        openAPIMixin: {
          source: {
            url: "https://raw.githubusercontent.com/sys-001/telegram-bot-api-versions/refs/heads/main/files/openapi/yaml/v183.yaml",
            fallback: ".openapi-cache/telegram.yaml",
          },
          getModuleName: "TelegramRPC",
          getMethodName: ({ path }) => path.replace(/^\//, ""),
          errorMessageKey: "description",
        },
      },
    },
  },
};

export default config;

The TelegramService class handles interaction with the Telegram API and generates AI responses using the Vercel AI SDK.
import OpenAI from "openai";
import { generateText, jsonSchema, stepCountIs, tool, type ModelMessage } from "ai";
import { openai as vercelOpenAI } from "@ai-sdk/openai";
import { TelegramRPC } from "vovk-client";
// Imports of createLLMTools, KnownAny, and the UserController/TaskController modules are omitted here

const openai = new OpenAI();
export default class TelegramService {
  static get apiRoot() {
    const TELEGRAM_BOT_TOKEN = process.env.TELEGRAM_BOT_TOKEN;
    if (!TELEGRAM_BOT_TOKEN) {
      throw new Error("Missing TELEGRAM_BOT_TOKEN environment variable");
    }
    return `https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}`;
  }

  // ...

  private static async generateAIResponse(
    chatId: number,
    userMessage: string,
    systemPrompt: string,
  ): Promise<{ botResponse: string; messages: ModelMessage[] }> {
    // Get chat history
    const history = await this.getChatHistory(chatId);
    const messages = [
      ...this.formatHistoryForVercelAI(history),
      { role: "user", content: userMessage } as const,
    ];

    const { tools } = createLLMTools({
      modules: {
        UserController,
        TaskController,
      },
    });

    // Generate a response using Vercel AI SDK
    const { text } = await generateText({
      model: vercelOpenAI("gpt-5"),
      system: systemPrompt,
      messages,
      stopWhen: stepCountIs(16),
      tools: {
        ...Object.fromEntries(
          tools.map(({ name, execute, description, parameters }) => [
            name,
            tool<KnownAny, KnownAny>({
              execute,
              description,
              inputSchema: jsonSchema(parameters as KnownAny),
            }),
          ]),
        ),
      },
    });

    const botResponse = text || "I couldn't generate a response.";

    // Add user message to history
    await this.addToHistory(chatId, "user", userMessage);
    // Add assistant response to history
    await this.addToHistory(chatId, "assistant", botResponse);

    messages.push({
      role: "assistant",
      content: botResponse,
    });

    return { botResponse, messages };
  }

  private static async sendTextMessage(
    chatId: number,
    text: string,
  ): Promise<void> {
    await TelegramRPC.sendMessage({
      body: {
        chat_id: chatId,
        text: text,
        parse_mode: "html",
      },
      apiRoot: this.apiRoot,
    });
  }

  private static async sendVoiceMessage(
    chatId: number,
    text: string,
  ): Promise<void> {
    try {
      // Generate speech from text using OpenAI TTS
      const speechResponse = await openai.audio.speech.create({
        model: "tts-1",
        voice: "alloy",
        input: text,
        response_format: "opus",
      });

      // Convert the response to a Buffer
      const voiceBuffer = Buffer.from(await speechResponse.arrayBuffer());

      const formData = new FormData();
      formData.append("chat_id", String(chatId));
      formData.append(
        "voice",
        new Blob([voiceBuffer], { type: "audio/ogg" }),
        "voice.ogg",
      );

      // Send the voice message
      await TelegramRPC.sendVoice({
        body: formData,
        apiRoot: this.apiRoot,
      });
    } catch (error) {
      console.error("Error generating voice message:", error);
      // Fallback to text message if voice generation fails
      await this.sendTextMessage(chatId, text);
    }
  }

  // ...
}
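The transcription step mentioned at the start of this Telegram section is not shown above. Under the assumption that the generated TelegramRPC module exposes a getFile method (the Telegram Bot API's /getFile) and that the standard whisper-1 model is used, a hypothetical helper could look like this; in the actual service this logic would live as another private method:

// Hypothetical helper for the voice-to-text step; not the actual implementation.
import OpenAI, { toFile } from "openai";
import { TelegramRPC } from "vovk-client";

const openai = new OpenAI();

async function transcribeVoice(fileId: string, apiRoot: string): Promise<string> {
  // Resolve the voice file path via the Bot API (response shape is an assumption)
  const fileInfo = (await TelegramRPC.getFile({
    body: { file_id: fileId },
    apiRoot,
  })) as { result: { file_path: string } };

  // Download the OGG/Opus audio from Telegram's file endpoint
  const audio = await fetch(
    `https://api.telegram.org/file/bot${process.env.TELEGRAM_BOT_TOKEN}/${fileInfo.result.file_path}`,
  ).then((res) => res.arrayBuffer());

  // Transcribe with the OpenAI Whisper API
  const transcription = await openai.audio.transcriptions.create({
    model: "whisper-1",
    file: await toFile(Buffer.from(audio), "voice.ogg"),
  });

  return transcription.text;
}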