Live Demo #4: Time Series Databases in the Age of Coding Agents
AI coding agents are changing how developers build and deploy data infrastructure, and fast. In this session, we'll explore what that means for time series workloads in financial markets. Live on stage, we'll use Claude to deploy a full QuestDB integration in under two minutes: connecting to a live market data feed, ingesting into QuestDB, and spinning up a Grafana dashboard, all from a prompt. We'll then dive into the key concepts behind high-performance time series storage and walk through queries on live data, showing what becomes possible when your database is as fast to deploy as it is to query.
Summary
QuestDB: Time Series Databases in the Age of Coding Agents
Speaker: Javier Ramirez, Developer Relations Lead, QuestDB
Date: March 12, 2026
Event: Paris — Market Data x AI (Finteda / FactSet)
QuestDB Overview
QuestDB is an open-source time series database built specifically for finance. It also has users in energy and aerospace, but finance and market data form its largest user base. The open-source version has identical performance to the enterprise offering.
Use cases: Market data, pre- and post-trade analysis, payments, and many more.
Key features:
- SQL with extensions — QuestDB speaks standard SQL with specialized extensions such as trading calendars (e.g., query data "only for yesterday in the New York Stock Exchange").
- Multi-tier storage — Data can be stored locally in a native columnar format or in Parquet. Older partitions can be stored as Parquet while recent partitions use native storage, and both are queryable seamlessly. This integrates well with lakehouse architectures via object storage or NFS.
- Specialized functions — Candles, OHLC charts, VWAP, down-sampling via `SAMPLE BY`.
- Horizon join — A specialized join for markout analysis (post-trade). For each trade, you can join with order book snapshots at intervals over a future time window (e.g., next 10 minutes at 30-second intervals, yielding 60 markout intervals per trade). This eliminates the need to export data to Python for calculation.
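The `SAMPLE BY` down-sampling mentioned above can be illustrated in plain Python. This is a reference sketch of what a 15-minute OHLC candle query computes, not QuestDB internals; the table and column names in the SQL comment are hypothetical.

```python
from datetime import datetime, timedelta, timezone

# Pure-Python illustration of what a QuestDB SAMPLE BY query computes.
# The equivalent SQL (hypothetical table/column names) would look like:
#   SELECT timestamp, first(price) AS open, max(price) AS high,
#          min(price) AS low, last(price) AS close
#   FROM trades WHERE symbol = 'BTC-USD' SAMPLE BY 15m;

def ohlc_candles(ticks, bucket=timedelta(minutes=15)):
    """Group (timestamp, price) ticks into OHLC candles per time bucket."""
    candles = {}
    for ts, price in ticks:
        # Truncate the timestamp down to the start of its bucket.
        bucket_start = datetime.fromtimestamp(
            (ts.timestamp() // bucket.total_seconds()) * bucket.total_seconds(),
            tz=timezone.utc,
        )
        c = candles.get(bucket_start)
        if c is None:
            candles[bucket_start] = [price, price, price, price]  # O, H, L, C
        else:
            c[1] = max(c[1], price)  # high
            c[2] = min(c[2], price)  # low
            c[3] = price             # close (last tick seen wins)
    return {k: tuple(v) for k, v in sorted(candles.items())}
```

In the database this aggregation runs column-wise over billions of rows, which is why the live demo below returns a full day of candles in milliseconds.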
Live Demo: Performance
- Dataset: Order book snapshots table with ~4.5 billion rows (~5.5 days of data, ~800–830 million rows per day). Trades table: ~23 million rows per day.
- Down-sampling demo: 15-minute candles for a full day in ~300 milliseconds.
- Markout analysis demo: Horizon join of 23 million trades against 830 million order book rows — completed in ~4 seconds, without moving data out of the database.
Live Demo: Coding Agents + QuestDB
Starting from a completely empty folder, Javier gave Claude Code a single prompt: start QuestDB and Grafana with Docker, use the open-source CryptoFeed library to discover available symbols on the OKX exchange, ingest the data into QuestDB, and display OHLC bars, Bollinger bands, and VWAP on a Grafana dashboard.
Result: Claude autonomously installed dependencies, created the schema, ingested data, connected Grafana to QuestDB, wrote the necessary SQL queries, and produced a working live dashboard — all in about 3 minutes. Adding an RSI chart afterwards was a simple follow-up prompt.
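The ingestion step Claude automated can be sketched by hand. QuestDB accepts writes over the InfluxDB Line Protocol (ILP); the function below only formats protocol lines (the `trades` table and its columns are hypothetical) and does not open a network connection.

```python
def ilp_line(table, symbols, columns, ts_ns):
    """Build one InfluxDB Line Protocol row, the wire format QuestDB ingests.
    `symbols` become indexed tags, `columns` are numeric fields, and the
    designated timestamp is in nanoseconds since the epoch."""
    sym = ",".join(f"{k}={v}" for k, v in symbols.items())
    cols = ",".join(f"{k}={v}" for k, v in columns.items())
    return f"{table},{sym} {cols} {ts_ns}"

line = ilp_line(
    "trades",                                  # hypothetical table name
    {"symbol": "BTC-USD", "side": "buy"},      # tag columns
    {"price": 67250.5, "amount": 0.02},        # value columns
    1765584000000000000,                       # nanoseconds since epoch
)
# -> trades,symbol=BTC-USD,side=buy price=67250.5,amount=0.02 1765584000000000000
```

In practice the official QuestDB client libraries build and send these lines for you; the point is that the format is open and text-based, so an agent can produce it without any proprietary SDK.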
Key point: A competent human who already knows all the tools would need a few hours. Someone new to QuestDB, Grafana, and CryptoFeed could take weeks.
Making Databases Agent-Friendly
Why can Claude work effectively with QuestDB despite it being a niche tool?
1. Standard SQL — Agents already know SQL; QuestDB's extensions build on top of it.
2. Open formats and protocols — RESTful API for creating tables and querying data, PostgreSQL wire protocol, Parquet support. No special MCP required for basic operations.
3. Agent-optimized documentation:
   - The website serves an index page in a lighter format (fewer tokens, faster, cheaper for agents).
   - Every documentation page is available in Markdown (no navigation menus, no HTML noise).
   - Bots naturally try to fetch Markdown versions by convention.
   - QuestDB is also rewriting the structure of documentation pages themselves — replacing the old "choose this path or that path" format with a more agent-readable layout that makes syntax and usage immediately clear, reducing agent confusion.
4. Claude Code skill — QuestDB provides a skill that loads the most relevant documentation locally, with a fallback to fetch from the web if needed.
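To make point 2 concrete, here is a minimal sketch of how an agent (or curl) can query QuestDB over plain HTTP via the REST `/exec` endpoint, assuming the default port 9000. Only the URL is built here; no request is sent.

```python
from urllib.parse import urlencode

def quest_exec_url(sql, host="http://localhost:9000"):
    """Build a URL for QuestDB's REST query endpoint; /exec returns JSON."""
    return f"{host}/exec?{urlencode({'query': sql})}"

url = quest_exec_url("SELECT count() FROM trades")
# A GET on this URL returns a JSON document with (roughly) the shape:
#   {"query": "...", "columns": [...], "dataset": [[...]], "count": 1}
# so an agent needs nothing beyond an HTTP client and a JSON parser.
```

Because the transport is ordinary HTTP plus SQL, no MCP server or custom driver is required for an agent to create tables and read data.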
Shift in Developer Relations
Javier described a fundamental shift: his community used to be humans (tens of questions per day on Slack and in forums). After adding AI assistants to both the website and Slack, there are now thousands of questions per day — but from agents, not humans. His job has changed from helping humans directly to helping agents help humans. This has completely changed how QuestDB writes documentation and designs APIs.
Q&A:
Q: How do you ensure agents parse the Markdown and not the HTML?
A (Javier): Agents are lazy — they try to find Markdown by convention, so they naturally tend to look for it. Additionally, QuestDB's skill pre-loads the most relevant documentation locally so the agent doesn't need to go to the internet at all; if it can't find something locally, it falls back to the website.
A (Moderator, adding): LLMs also tend to extract textual content and override structural noise from HTML, so that acts as a natural fallback.
Review Notes (changes made)
1. Horizon join: Added the "60 markout intervals per trade" detail, which is explicitly stated in the transcript.
2. Documentation restructuring: Added a fourth bullet under "Agent-optimized documentation" describing that QuestDB is rewriting the content structure of its documentation pages (replacing choose-your-path formats with agent-readable layouts). This was a distinct and substantive point in the transcript that the original summary omitted.
3. AI assistants on Slack: The original summary said "AI assistants on the website." The transcript makes clear that assistants were deployed on both Slack and the website. Corrected accordingly.
4. Q&A attribution: The original summary merged the moderator's observation ("LLMs naturally try to extract text and override structural noise") into Javier's answer. In the transcript this is clearly the moderator speaking. The Q&A section now correctly attributes each part to the right speaker.
Full Transcript
Javier Ramirez: So you probably cannot see anything. Oh yeah, it's not too bad. Cool. So I want to speak about time series databases in the age of coding agents. I work at QuestDB, it's a time series database. I don't know if any of you is familiar with QuestDB. It's fine not to be. Cool.
So since most of you have not, I want to dedicate hopefully just three minutes to give you a super high overview of what's QuestDB and see it in action, and then I'm going to go to the AI side of things. I don't have slides, all I have is URLs and demos and lots of opportunities to fail, which is always fine.
So QuestDB is an open-source database. You can use it totally for free. If you want to pay, we have an enterprise offering, but you can really use it for free. And performance is exactly the same in open source. We are built specifically for finance. We also have some users in energy and aerospace, but finance and market data is the largest user base we have.
We are a time series database. So basically you can use QuestDB for market data, pre- and post-trade analysis, payments, a lot of different use cases. So we think we are fast. Kind of fast. To show you if we are or not, I have some data behind the scenes. I actually have some data from other days, and also I'm ingesting data into an instance. Right now I have an order book snapshots table which is not super large — it's 4.5 billion rows. It's about five and a half days of data, something like that. I have some much smaller tables, just a few hundred million rows, but hopefully good enough for a demo. I know when we want to use 20 terabytes we can run on that, but this is running on AWS and we're a startup so I didn't want to have a huge disk. But hopefully — one day of data in this dataset is about 800 million rows, 830 million rows, which is not too bad.
And as you can see, something you can notice — let me make this bigger — we speak SQL. So if you want to query the data, you probably already know SQL, or if you don't, the coding agents know SQL, which is more important to me. So we speak SQL but with some extensions. We have things like "in yesterday," or if you want to work with the trading hours of some markets, we have calendars. So I can say I want to get data for this symbol but only for yesterday in the New York Stock Exchange. We have those kind of nice things to help you work with that.
We are a multi-tier database. The data can be stored locally in a native columnar format or also in Parquet. So in this table for example, my older partitions are stored as Parquet, the most recent partitions are stored in native storage, but I can still query data across both the older and the new partitions. That plays into what we've been talking today about the lakehouse and so on. You can integrate QuestDB into your lakehouse. We just read from object storage or NFS and it just works.
Two more things and then I go to the AI side of things. We have extensions — candles, OHLC charts. You can use SAMPLE BY. So this is down-sampling all the data for today into 15-minute candles in 300 milliseconds, which is not too bad.
And then we have specialized things like markout analysis for post-trade. If you are not familiar with that, basically you want to compare each trade with different points in time, maybe before and after the trade. Typically if you want to do that kind of thing, you need to get the data out of the database and into Python, and just moving the data is a hassle, and then you need to calculate — it can take hours to do something like that. We wanted to implement that in the database, and we believe it's a bit faster than hours.
So this is a specialized type of join we have. We call it a horizon join. What I'm doing here: for each trade in my table, I want to be joining with the snapshots, with the order book, for the next 10 minutes after the trade happened, at intervals of 30 seconds. So each trade is going to have markout for 60 intervals in time. And it should be faster than a few hours, hopefully, because otherwise the demo wouldn't be that nice.
Come on, don't fail now. Four seconds. Not too bad. That's for a day of data, for yesterday. I'm cheating here because I told you a day of data for yesterday was 800 million rows, but actually I'm just checking the trades, which is only 23 million rows. This is real data, by the way. So 23 million, and then I'm joining with the other table which is 830 million per day. But yeah, four seconds without having to move data out of the database is not too bad.
So that's QuestDB. And that's what you do for a human. But humans don't really do things anymore — I mean, we just tell what to do. We have the expectation and then we have the agents, our minions, doing things for us. And I wanted to show you that, because that's how I see people working these days.
So what I wanted to show you — of course you are familiar with Claude. This is a completely empty folder. Let me make this bigger. I don't have anything here. Completely empty folder. I start Claude here and I have a prompt. I code very badly, especially when people are looking at me, so I want to copy and paste.
Audience Member: Okay, but that's also hard—
Javier Ramirez: —without the mouse. It's not as easy as it sounds.
Okay, so what I'm doing here: I'm telling Claude I want to start QuestDB and Grafana. Grafana, if you're not familiar with that, is a dashboarding tool like this one — this is a dashboard I have on the right with data. Grafana allows you to do charts like this. So I told Claude: I want you to start Grafana and QuestDB with Docker. I want you to use an open-source library called CryptoFeed that has data from different exchanges. I want to use CryptoFeed to find out how many symbols we have available on the OKX exchange. And I want you to ingest the data into QuestDB. I want to display some OHLC bars, Bollinger bands, and VWAP on a chart on Grafana.
I could do this myself, and until November I was doing this myself, and it was not fun. If you are a human, you have to do things like — first, how do I start? First you need to know things exist, and then how do I start QuestDB and Grafana? You need to know the ports and what not. Then you start running and you have this open-source library and you need to figure out the API, how to know which symbols you need, how to get all the symbols, how QuestDB works to create a table and ingest data. Then you have to connect to that, ingest the data. Once you have everything up and running, then you connect Grafana with QuestDB with the right host, user, password, or whatever. You figure out which queries you need to do — OHLC candles and VWAP and Bollinger bands. And then and only then you can start displaying the charts in Grafana, if you know how to do that.
If you knew how to do that — and this would be being super generous — a few hours of a competent person who can do all these things. If it's the first time you're using QuestDB and Grafana and the CryptoFeed library, it might be weeks, or you might just say, you know what, I'm not doing that.
So Claude has been working already for more than two minutes, which is a bit slower than I was thinking, but it already started QuestDB, started Grafana, has been creating and installing my dependencies here in a virtual environment for the libraries. And hopefully in a few moments it's running. The schema is set up first — and the schema is ready. Let me just show you here, the local database. So yeah, I have some tables here. Let me see if there is some data actually. No data yet. It says it found an error and it's fixing that. I can sympathize.
Three minutes already. Three minutes. It's like three million-odd tokens. Cool. We have something. It opens something. Admin, admin, super.
Oh, and now — I didn't do anything. Of course I'm telling my boss one week, and it's like yeah, living the life. But that's kind of the thing. Now I have here a lot of symbols — I don't know how many symbols — all the symbols I have on the OKX exchange. Ethereum and USD. And I have here a chart in which I have the VWAP and my candles. Let me go with maybe BTC, which should be funnier. Yeah. So I have here some data flowing in and some... yeah, depending on the symbol you get different things.
And once you have this, you can say things like: hey, could you please — oops, I told you I cannot type — could you please add RSI, the Relative Strength Index, chart. And yeah, I hope it can do it. So I'll show you in a second why it can do this, because this is an empty folder — I'm not using any magic here.
So yeah, it's going to add a chart with RSI. If I had to do it myself — the Relative Strength Index is kind of easy. You only have to get the data at regular intervals, calculate the gains and the losses, then compare with the previous intervals using the exponential moving average, and that's it. Doing that in SQL is not super hard, but it's not trivial. But I don't have to do it because you're already paying for Claude.
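The RSI recipe described here (per-interval gains and losses, smoothed with an exponential average) maps to a short Python sketch. This uses Wilder's standard 14-period smoothing as a reference implementation; it is not the SQL Claude generated.

```python
def rsi(closes, period=14):
    """Relative Strength Index via Wilder's smoothing: split each interval's
    change into a gain and a loss, average them, then RSI = 100 - 100/(1+RS)."""
    if len(closes) <= period:
        raise ValueError("need more closes than the period")
    gains, losses = [], []
    for prev, cur in zip(closes, closes[1:]):
        change = cur - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    # Seed with simple averages, then apply Wilder's exponential smoothing.
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0  # no losses in the window: maximally overbought
    return 100.0 - 100.0 / (1.0 + avg_gain / avg_loss)
```

A steadily rising series pins the index at 100, a steadily falling one at 0, and anything in between lands inside that range, which is the bounded oscillator the Grafana panel plots.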
And that's basically it. It adds this new panel which is the RSI. And that's kind of the thing. The idea is: you need to have infrastructure like a database or whatever you're using that has the features — speed, ingestion, queries, features, whatever. But it's also important that that tool can integrate with the new way of doing things. And this is kind of the new way of doing things.
But why is Claude able to work with QuestDB? When I asked you, "do you know QuestDB," most of you don't, because we are a very niche tool. We are not Postgres. So Claude doesn't really know that much about QuestDB. But the thing is, when you develop a database, what you have to do is make things easy for the agents. And luckily we had some decisions that were already easy for the agents.
We speak SQL, which is very well known, so we have extensions but the base of SQL is there. We use open formats. We have a RESTful API that you can use to create tables and query data and so on. You don't need to have a special MCP — you can have an MCP if you want to do other things, but you don't need it. You can just speak to the API and get the data. We have open formats, we speak the Postgres wire protocol, we use Parquet. So the agents already know a lot of things.
But for the specialized things — if you're a human and you go to the QuestDB documentation, you see this. It's technical documentation, nothing too special. When you are an agent, you actually see this. So you see this: an agent comes to the website and sees everything in a nicer format. Nicer means fewer tokens. It's going to be cheaper, it's going to be faster. Here is basically the index of everything on the website. And then you want to go to any page — the markout query. I showed you earlier this markout analysis.
As a human, this is nice. It has colors, it has bold, it has titles, whatever, because I'm a human, I like the colors. But if I look at this page as a bot, this is what I get — by default you get a lot of HTML. You have a menu on the left and the right. I don't need these things. So something we also do: we accept the bots asking for Markdown directly. The whole website the bot can discover via the main sitemap, and absolutely every page is available in Markdown. It doesn't have any colors, it doesn't have the navigation menus or anything, because the bot is not navigating — it's just going directly to what it needs. So it's very fast for the agents to parse the information and get what they need.
And something we are doing — we are now changing how we write documentation. My job is to bridge the gap between the engineering team and the community. Until last year, my community was people. I'll be honest — you put me on Slack, I will spend hours on Slack every day and in the forum. And then last year we decided to put an AI assistant on the Slack and one on the site. And now we have thousands — literally thousands — of questions per day. In the past I had tens of questions per day; now I have thousands, but I don't see them. I mean, I see the analytics, I can see the conversations, but I don't interact with them.
So now my users are the agents. And what I have to do is be very nice with the agents so they can help the humans that are asking the agents. But the humans are not coming to me anymore. The human has the agent, the agent has the documentation. I have to help the agent to help the human. That's kind of the thing. And it changed completely how I work.
So documentation — in the past, most of the pages in QuestDB, we have this index which is very nice. You probably have seen this in other databases — you can go this path or that other path. That's a format that's not that easy for agents to understand. For any new pages and where we are reviewing — let me show you something here quickly. Yeah, horizon join. So now, for example, we are replacing all of the old format with this. As a human you can also read it, but for an agent, it's like: if you want to use this thing, this is the syntax, this is the other syntax. They were getting confused before. So what we are doing is not only changing how we integrate via APIs and so on, but really making the life of the agent super easy so they can know where to go, how to find things. And without any special context, without any MCP, without any prerequisites, they just go to the documentation, they can find the information, they know how to help humans, so I don't have to do it myself.
And this is basically what I showed you, and how I think databases will be moving these days to make AI more productive. Thank you.
Q&A
Moderator: Thanks a lot. Very engaging. And actually it's a very relevant one to the panel, just from another aspect, because we will be talking about how market data products and platforms need to adapt to the agent world. So it's not only — but a very vivid demo. I really loved that. Thank you a lot.
So we can still give Javier one question if somebody has it. Yeah, he has a question. Are you sure that the agents are going to parse the Markdown and not the HTML?
Javier Ramirez: Yeah, they try. Agents are lazy like I am, so they always try — now with LLMs it's a convention, so they try to go there and they always tend to find how to do things. Trying to find Markdown is something they try to do by default. So basically they're going to try that.
Moderator: But the other thing is, they try to extract information and they will override any noise from the structure with textual information. So it's like also a fallback.
Javier Ramirez: Something I can tell you — we actually have a skill which basically takes the most relevant documentation and puts it locally, so we don't do anything else. But with that you can have a lot of context without having to go to the internet back and forth. And the skill says: if you don't find this locally, then go to the website and find this information. That's pretty much it.
Moderator: Thanks a lot.