Panel Discussion 19:25 - 20:10 (45 min)


Thursday, September 25, 2025
Guidepoint Global, 675 6th Ave 2nd Floor, New York, NY 10011

Future of Quant Development in the Age of AI & AI Agents

Moderator & Panelists

Didier Rodrigues Lopes

Founder & CEO @ OpenBB

Jason Strimpel

Founder @ PyQuant News

Stefan Jansen

Founder & CEO @ Applied AI

Michael Watson

Founder @ Hedgineer

Christos Koutsoyannis (Moderator)

Chief Investment Officer @ Atlas Ridge Capital

Kirk McKeown

Co-Founder & CEO @ Carbon Arc

Summary

TL;DR

Finance is shifting from UI-driven analytics to API-first, agentic workflows. Competitive advantage moves from model choice to data structure, interoperability, and robust evaluation. Licensing is evolving from annual contracts to pay-per-query insights. The scarce asset is domain expertise paired with AI fluency; the product is increasingly the evaluation framework that guarantees reliable outcomes.

AI is now part of the new social literacy. As Jason Strimpel puts it: "Sex, drugs, and AI"—a phrase that highlights how central AI has become to society and the importance of educating everyone about its cultural, ethical, and practical impact.


1) Adoption Shift: From Interfaces to APIs

- What changed: Firms now want direct, programmatic access to derived insights (API/Snowflake/Databricks/SQL) instead of vendor UIs.
- Why it matters: Analysts can embed intelligence directly in workflows; latency and “waiting on the interface” disappear.
- Actions:
  - Vendors: Ship API-first products, data shares, docs, SDKs, and reference pipelines (see the sketch below).
  - Buyers: Prioritize vendors with stable schemas, SLAs, lineage, and observability hooks.
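A minimal sketch of what embedding a derived insight directly in a workflow can look like: one programmatic call instead of a UI session. The endpoint URL, token, parameters, and response shape are hypothetical placeholders, not any real vendor's API.

```python
# Minimal sketch: pulling a derived insight straight into a workflow
# instead of reading it off a vendor UI. The endpoint, token, params,
# and response shape are hypothetical placeholders, not a real vendor API.
import pandas as pd
import requests

API_URL = "https://vendor.example.com/v1/insights"  # hypothetical endpoint
TOKEN = "YOUR_API_TOKEN"  # issued by the vendor

resp = requests.get(
    API_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"entity": "AAPL", "metric": "card_spend_yoy", "window": "90d"},
    timeout=30,
)
resp.raise_for_status()

# Land the derived insight directly in the research pipeline, no UI hop.
df = pd.DataFrame(resp.json()["observations"])
print(df.head())
```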

2) Agentic Enterprise & MCPs (Model Context Protocol)

- Trend: Movement from dashboards to agentic workflows backed by MCP servers connecting multiple data sources.
- Key insight: The evaluation layer is the product: the tests and ground truth that make outputs trustworthy.
- Actions:
  - Build domain-specific eval suites that mirror real user tasks (inputs → outputs → acceptance criteria); see the sketch below.
  - Treat agents as decision systems (tool access, guardrails, audit trails), not just query wrappers.
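As a concrete illustration of the inputs → outputs → acceptance criteria pattern above, here is a minimal eval-suite sketch. The `run_agent` callable and the sample case are hypothetical stand-ins for whatever agent or MCP pipeline is under test.

```python
# Minimal eval-suite sketch: each case pins an input that mirrors a real
# user task, the expected output, and an acceptance criterion. `run_agent`
# is a hypothetical stand-in for the agent/MCP pipeline under test.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class EvalCase:
    name: str
    prompt: str                          # input mirroring a real user task
    expected: Any                        # ground truth
    accept: Callable[[Any, Any], bool]   # acceptance criterion

CASES = [
    EvalCase(
        name="fund2_q3_redemptions",
        prompt="Total redemptions for Investor A in Fund 2, Q3 2024?",
        expected=12_500_000,  # illustrative golden value
        accept=lambda got, want: abs(got - want) / want < 1e-3,
    ),
]

def run_suite(run_agent: Callable[[str], Any]) -> None:
    """Run every case and report pass/fail before anything is deployed."""
    for case in CASES:
        got = run_agent(case.prompt)
        status = "PASS" if case.accept(got, case.expected) else "FAIL"
        print(f"{status}: {case.name} (got={got!r})")
```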

3) Frictions to Scale: People, Process, Platforms

- Human: Compliance, legal, and fear of job change slow adoption.
- Process: Unclear specs plus rushed timelines create chaos and failed pilots.
- Platform: Persistent data silos; underinvestment in data structure and context.
- Actions:
  - Establish cross-functional AI councils (Risk/Legal/IT/Data/Business).
  - Fund data modeling & governance (ontology, contracts, lineage) before app-layer bets.

4) Incentives & Data Monetization

- Shift: From 500 large buyers of “alt data” to millions of insight consumers when insight (not raw data) is tradable.
- Model: Standardized compliance, attribution, and pay-per-insight pricing.
- Compute: Charging for compute misaligns incentives; pricing will gravitate to context/insight value.
- Actions:
  - Vendors: Offer granular, event-based pricing (entity × event × window); see the sketch below.
  - Buyers: Build knowledge graphs to join vendor insights with internal context.
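A toy sketch of what entity × event × window pricing could look like as a data structure. All field names and rates are illustrative assumptions, not any vendor's actual rate card.

```python
# Toy sketch of entity x event x window pricing. Field names and rates
# are illustrative assumptions, not any vendor's actual rate card.
from dataclasses import dataclass

@dataclass(frozen=True)
class InsightQuery:
    entity: str   # e.g. a ticker or company id
    event: str    # e.g. "earnings", "guidance_cut"
    window: str   # e.g. "30d_pre", "5d_post"

RATE_CARD = {  # hypothetical per-query prices in USD
    ("earnings", "5d_post"): 2.50,
    ("guidance_cut", "30d_pre"): 4.00,
}

def price(q: InsightQuery, n_queries: int = 1) -> float:
    """Pay-per-insight pricing instead of an annual license."""
    return RATE_CARD.get((q.event, q.window), 1.00) * n_queries

print(price(InsightQuery("WMT", "earnings", "5d_post"), n_queries=10))  # 25.0
```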

5) MLOps → LLMOps: Reliability Is a Discipline

- Reality: LLMs add stochasticity, prompt safety, and context versioning to already-hard MLOps.
- Economics: Frontier models commoditize quickly; eval rigor becomes the durable moat.
- Actions:
  - Implement continuous evaluation (goldens, regression tests, drift monitors).
  - Version prompts, context, and tools like code; treat evals as unit tests for decisions (see the sketch below).
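One possible shape for versioning prompts like code and gating changes on golden regression tests. The prompt text, goldens, and `call_model` client are assumptions for illustration, not a specific stack's API.

```python
# Sketch: a prompt versioned like code, with goldens as regression tests.
# The prompt text, goldens, and `call_model` client are illustrative
# assumptions; swap in whatever LLM client the stack actually uses.
import hashlib
import json

PROMPT = "You are a risk analyst. Answer with a single number in USD."
PROMPT_VERSION = hashlib.sha256(PROMPT.encode()).hexdigest()[:12]

GOLDENS = [  # frozen input/output pairs
    {"input": "Net exposure of +1,000 AAPL at $200?", "expected": "200000"},
]

def regression_pass(call_model) -> bool:
    """Gate a prompt/model change on the golden suite; log any drift."""
    failures = []
    for g in GOLDENS:
        out = call_model(PROMPT, g["input"]).strip()
        if out != g["expected"]:
            failures.append({"prompt_version": PROMPT_VERSION, **g, "got": out})
    if failures:
        print(json.dumps(failures, indent=2))
    return not failures
```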


6) Stack Positioning & Incentive Conflicts

- Conflict: Data vendors with profitable UIs resist true interoperability (open MCP clients).
- Prediction: The ecosystem bifurcates into data-first vs. workflow-first firms.
- Actions:
  - Pick a side: data business (neutral pipes) or workflow business (bring-your-own-data).
  - Publish interop roadmaps (MCP client support, adapter kits, ingestion contracts).

7) Licensing Disruption: From Annual to Pay-Per-Query

- Example: Single-query access to credit-card spend; query/insight pricing instead of annual licenses.
- Value migration: From software licenses to expertise/ontology definition.
- Actions:
  - Rebalance the P&L: Monetize expert systems & ontologies; expect SaaS ARPU compression.
  - For buyers: Negotiate elasticity (burst usage caps, per-insight tiers).

8) Skills & Human Capital

- Edge: Domain expertise × AI fluency beats either alone.
- Guidance for talent: ~300 hours of hands-on agentic tooling to internalize capabilities and limits.
- Actions:
  - Firms: Run AI fluency bootcamps; promote workflow design, context authoring, and tool orchestration.
  - Individuals: Practice end-to-end workflow articulation (inputs, tools, validations, outputs).

9) Culture & Ethics

- Observation: Maintain human spaces amid always-on agents and devices.
- Message: Treat AI as a new social literacy (“sex, drugs, and AI”).
- Actions:
  - Adopt privacy-by-design patterns; allow AI-free zones in collaboration spaces.
  - Add ethics checklists to solution reviews (recording, suggestions, data retention).


10) Legacy Modernization via LLMs

- Opportunity: Rapidly map undocumented estates (DBs, ETL, reports) into a knowledge graph (tables, columns, procedures, flows).
- Outcome: Weeks, not months, to get lineage, dependencies, and modernization plans.
- Actions:
  - Automate schema scans, stored-procedure parsing, and report inventory (Power BI/Tableau/SSRS); see the sketch below.
  - Index meeting transcripts and SME notes into the graph; keep it continuously refreshed.
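A minimal sketch of the schema-scan step using networkx, with SQLite standing in for a real estate's SQL Server instances; the queries and driver would differ per database engine.

```python
# Minimal sketch of the schema scan: tables and columns become graph
# nodes, containment becomes edges. SQLite stands in for a real estate's
# SQL Server instances; the queries would differ per database engine.
import sqlite3
import networkx as nx

conn = sqlite3.connect("legacy.db")  # placeholder database
g = nx.DiGraph()

for (table,) in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"
):
    g.add_node(table, kind="table")
    # PRAGMA table_info rows: (cid, name, type, notnull, default, pk)
    for row in conn.execute(f"PRAGMA table_info({table})"):
        col = f"{table}.{row[1]}"
        g.add_node(col, kind="column")
        g.add_edge(table, col, rel="has_column")

print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
```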


11) Strategy: What Winners Will Do

1. Standardize data access and contracts (schemas, SLAs, lineage).
2. Operationalize evaluation as a first-class artifact (owned by the business, enforced by platform).
3. Align incentives (pricing on insights; open interop where it creates value).
4. Invest in people (hybrid skill paths, measurable fluency, workflow portfolios).

Closing Line

We’re in the zero inning. Models and compute will commoditize; data structure, evaluation rigor, and domain-aware workflows will define who captures the upside.

Full Transcript

Quant x AI Panel Discussion Transcript


Opening: Client Adoption & Agentic AI

Christos Koutsoyannis (Moderator): How do you see the client base, the buy side and the sell side, making that jump? How do people use agentic AI or other tools at the moment?

Didier Rodrigues Lopes: Yeah, so I think I have quite a lot of insight here, because I can see this firsthand. When I go to these clients, we talk about what solutions they are using today, and (I wrote a really big blog post about this) recently I started seeing a shift. Two years ago, firms would pay these typical AI startups basically to access their chat interface. The chat interface would have some sort of workflows, working on top of SEC filings or on top of data from, say, an S&P. But today there has been a very big shift from the chat interface to the API. If you go to the website of most of these AI financial companies, pretty much all of them now have an API tab. That is the buy-side firms asking: hold on a second, I don't really care about going to your interface and waiting for the workflow to happen. I just want this derived data directly in my feeds, because my analysts want to use it and they don't want to wait. So it's less about the intelligence of the interface per se, and more about the derived insights these startups can produce versus others.

Christos Koutsoyannis: So Michael, you have this unique background of having worked for a large hedge fund and having interacted with smaller ones. You used that phrase once: the evaluation is the product. Can you explain it? And how do you see larger versus smaller firms using technology today?

Michael Watson: Sure, I'll take the first one and then the second one. We only work with discretionary managers, so I know this is more of a quant focus, but all of our clients are exclusively discretionary. One reason is that you don't have to write code or software to be a discretionary manager; learning to write a SQL query was never part of that pedagogical path. So you can get pretty far as a discretionary manager without ever having to write code, or hire a CTO, or hire a developer internally. But there are these opportunities now where, if you can do a little bit of that, you can get pretty far. A lot of our clients might have a FactSet dashboard, a Visible Alpha dashboard, or an S&P dashboard, and for the first time they're asking: hey, can I actually get access to the API? Can I get the Snowflake data share? Can I get this on Databricks? Because being able to write a SQL query, and plug that into an MCP server or into a tool that can help construct that query and make sense of it, is suddenly not as high a barrier as it was previously, when you had to hire that data scientist or that data engineer. So the way Didier explained that they're now asking for these APIs: the people asking for them are changing. It used to be either a discretionary manager with a large technology workforce that could actually integrate with an API, or inherently a quant fund. But you're seeing a lot of what would historically be non-technical firms starting to ask for APIs or Snowflake data shares or Delta Lake shares, because the barrier to entry for technology is getting closer and closer to zero. So you see people asking for data in a more sophisticated way. And building on top of that: whenever we go in and start building out an agentic workflow, or build out a custom MCP within a client's enterprise connected to their different data sources, anyone that's gone through trials realizes: OK, this data looks good, but how does it actually work? You'll see a client that's non-technical digging for the first thing they see wrong, and you lose that trust almost immediately. A lot of these initiatives don't get off the ground, or aren't as successful as they could be, because of the psychology behind it. And a lot of that, we've found, is the inability to create testing frameworks, or evaluations, before you put it in front of a user; evaluations that actually replicate the realistic workflows they would otherwise be doing. For example, if you have an MCP server doing risk decomposition on a portfolio (a risk API, a portfolio analytics API, telling you what your market, industry, and style exposures are behind an MCP server): have you written all of the tests up front, so that you know, hey, if I have this much Apple, this much Nvidia, this much Google, it's going to give me, let's say, $10 million in market exposure? And do I have dozens and dozens of those evaluations built into the system before I even deploy it? What we're starting to realize is that the actual tests you define suddenly become the most valuable piece, because the MCP server is just a lightweight wrapper around a SQL query and context. The actual APIs that you're using, they're not actually adding that much value.
But if you can define what your tests and use cases are, and have those actually represent the real use cases you would run in the firm, the actual questions the CFO would ask about the redemptions or the subscriptions for a specific investor in Fund 2 in Q3 2024, then that question isn't being run for the first time when he types it in. You've already done the work ahead of time: the tests are defined, the actual answers are in there. That's the product. That is the product. The SQL query that's been wrapped in an MCP server? Any high schooler could write that MCP server and create that same experience. The expertise in defining what that response actually needs to contain, that is the actual value. And that's hard work. I've come to the realization that the evaluation is the product in many of these use cases.
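A minimal sketch of the kind of pre-deployment evaluation Watson describes: pin a toy portfolio and assert the exposure the tool must report before it ever reaches a user. The `decompose_risk` function and the position values are hypothetical stand-ins for the risk tool behind the MCP server.

```python
# Sketch of the pre-deployment eval described above: pin a toy portfolio
# and assert the exposure the tool must report. `decompose_risk` and the
# numbers are hypothetical stand-ins for the tool behind the MCP server.
import pytest  # assumes a pytest-style test harness

PORTFOLIO = {"AAPL": 4_000_000, "NVDA": 3_500_000, "GOOGL": 2_500_000}

def decompose_risk(portfolio: dict) -> dict:
    # Placeholder: in practice this calls the portfolio-analytics API.
    # Here, market exposure is simply the sum of position notionals.
    return {"market_exposure": sum(portfolio.values())}

def test_market_exposure_golden():
    # Golden: this exact book must come back as $10M market exposure,
    # and dozens of cases like it should pass before users see the tool.
    result = decompose_risk(PORTFOLIO)
    assert result["market_exposure"] == pytest.approx(10_000_000)
```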

Christos Koutsoyannis: Amazing. There's such a convergence of quant and fundamental: quantitative research being able to do so much more in terms of depth, and fundamental research becoming more and more systematizable, with data and infrastructure teams forecasting KPIs at Citadel and all those places. And what you said resonates with me at the individual level: people who didn't code before are dipping their toes in and trying it out, and I think that's amazing. Stefan, Kirk, do you want to take this question as well? Where are we at the cutting edge of AI?


The State of AI: "Zero Inning"

Kirk McKeown: We're in the first inning, maybe the zero inning. From my perspective, what's really interesting is watching the ecosystem form on a real-time basis. I think there are a couple of things going on, and the thing we're trying to stay focused on every day is: where is our competitive advantage? We are in a bit of a mania. It might not be the Internet bubble; the Internet bubble actually had two real tails to it. One was the breakup of AT&T, and the second was the rise of the Internet, and those two things drove massive overspend from a capex perspective in the late 90s. I remember it, which is why I talk about it; I was working at that time, so I have that perspective. This one's pretty bad. And one of the things that gets lost in the mania is competitive advantage. What is your competitive advantage? What is your moat, and how do you retain it, how do you grow it, how do you expand it? When we look at the sell side and the buy side, what we're seeing is still the emergence of real frictions that are largely human driven: in legal and compliance and, frankly, in fear. There's a lot of fear in the system. There are a lot of people concerned about what their jobs are going to look like in three to five years, if not 18 months. So we see real human friction in the builds. And then data silos are very meaningful, even in some of the most rigorous and technical funds; getting all their data into the right place, talking to each other, with context, is a thing. That's one of the reasons I say it's the zero inning. The app layer is moving really fast, compute is plentiful and coming online, and data structure, which is where we're playing, is still wildly nascent and frankly has been an afterthought until pretty recently.

Stefan Jansen: Yeah, I can certainly emphasize the gap between what is possible and what early adopters are aware of, versus the reality outside those companies. We work with a bunch of different enterprises, in different sectors, not just trading and finance, and most are really at a loss to leverage the capabilities of the technology, because of the evaluation piece. People are unable to define what they actually want to get out and how to measure it. They're unable to get the specification right, on both the input and the output. You need to be clear that how you measure what comes out of the model is really what you want, so you can design it accordingly. The other piece is how you make sure you're actually asking the right question. People don't have that frame of mind yet, and they mostly get pressure from above to get this implemented really quickly. So I see a lot of chaos in a lot of companies, where the C-suite has very optimistic expectations about what can be done in a short time, everybody in management is scrambling to get it done, and nobody has the skills to really implement it systematically.
So that is why we then get the surveys saying the enterprise, the industry, as soon as you go beyond the executive level, is struggling to see the value of AI. Well, precisely because of that. We always overestimate in the short run and underestimate in the long run, right? This adoption hurdle will fall eventually. So it's a very good time, if you can insert yourself, to learn the skills and then help folks get over that hurdle, because they need it. But they're not ready. There's a lot to learn, a lot to know, beyond using ChatGPT to summarize text. The capabilities are there, but using them is another thing.

Christos Koutsoyannis: The capabilities are growing so quickly, day by day. It's amazing to live through this period and see this new functionality emerge. And it's amazing how interdisciplinary it is: the projects we do in defense and the projects we do in the hedge fund are basically the same toolset. When was this true before in life? I'd like to dive into the ecosystem a little bit and talk about incentives. We talked a little about common ontology and how the ecosystem might be evolving; there are various efforts, and I've been thinking about the centralized versus decentralized environment. There is this effort at the hedge fund level, at the Bloomberg and FactSet level, at the fintech level, as we saw today. Where do you all see this evolving? And who has an incentive to share their data or open up their infrastructure the most?

Kirk McKeown: That's what we're going after.

Christos Koutsoyannis: Right.

Kirk McKeown: At the end of the day, we see the ecosystem in three buckets: models, chips, and data structure. We're attacking the data structure problem. When I worked on Wall Street, I was one of the bigger buyers of what people call exhaust data, alternative data. On the Street, I was buying $20 to $40 million a year of data: everything from trade claims to healthcare claims to credit card to clickstream to satellite, you name it. What I was buying was expensive and hard to work with; you needed engineers and scientists to get it to the right place. That meant the addressable market for these sellers was about 500 qualified buyers in the world, spending millions of dollars to buy this data and turn it into actionable signal, and probably 70% of those buyers were on Wall Street. That was really good for the guys I worked for and the guys Watson worked for, but it was really bad for everybody else. There's no liquidity; it's traded like equities in the 1930s. The view of Carbon Arc is that if you take a markets-based approach to data monetization, smash down the cost of the insight, and move the insight instead of the asset, you can grow the qualified buyers from 500 players in the world to 5 million. You increase the density and velocity of consumption, drive A/B testing, drive research workflows, and lower the center of gravity on the asset. That will expand the number of assets that want to come to market, because they see an opportunity to turn an intangible asset on their balance sheet into a cash-flowing asset, like a mortgage, like electricity, like oil. And then you build the infrastructure, the piping and the tooling, to get that to market.
We go to market like we're Goldman Sachs; we're the counterparty. We standardize compliance across all the assets, we build a monetization layer, we manage it in graph, we push spot buying, we do program trading, and then we sell block trades the old-fashioned way. In 1985, 60% of New York Stock Exchange volume was block trades; today it's 8%. We think the same thing is going to happen over the next five to ten years that happened on Wall Street from 1985 to 2025. Black-Scholes came out in '73, in '83 Rentech was formed, and since then every friction that has been in stock trading has been removed: decimalization, paper went electronic, you saw funds move servers closer to exchanges. The same thing is going to happen to the world of exhaust data. And it needs to be a centralized space, because the ontological framework and the modular framework to get the data down to the insight have to be scalable and low cost; there needs to be a level of efficiency. The last thing I'll say is that a lot of people get paid on compute in this space right now. They're going to have a real problem. You can't get paid on compute in this new environment, especially on analytics, because you're misaligned on moving data.

Christos Koutsoyannis: Jason, what are your thoughts?

Jason Strimpel: I don't have strong opinions.

Christos Koutsoyannis: Strong opinions are good things; be controversial. What are your thoughts on MLOps, LLMOps? One of the biggest pain points. How do you react to that?

Jason Strimpel: Yeah, just to tag along with what my colleague said: it's moving data from one state to another state, and what we basically saw was a DataOps platform in all sorts of different forms. What we're seeing is the infrastructure layer being taken away to lower the barriers to entry, so more people can get after the same type of activity. MLOps has been around for a very long time. The basic concept of MLOps is: how do you get a trained model into production and manage the drift in the quality of the model? The quality of the model will decrease over time, whether it's a factor or feature model in your trading algorithms or you're predicting something on the Street; eventually that model will erode. In finance we call it alpha. Alpha will erode, and you have to manage that degradation: you retrain your models on new data. The better the quality of the data that goes into the models, the more insight will come out, or alpha, to use the language of the room here. MLOps is basically centered on that process. And it is a pain in the ass, let me tell you, to do it well at scale in an enterprise setting. It's behavioral, it's technology, it's tooling, it's skills, it's all the things, and it's very, very difficult because it's not normal SDLC. We've been putting software into production for 60 years; we've been putting ML into production for far less. Very few firms do this well, maybe 500 on earth, and probably a handful that do it very well. LLMOps is an entirely new breed, forked from MLOps. So now we're talking about whole new words and categories and classes and skills. How do you guard them, right? Prompts coming in from external users; how do you make sure the outputs coming back are adequate, safe, etc.? And then evals: evals are like unit tests for these models. The economics of these models; I'm going to dovetail here for a second.
It's so interesting, because the frontier models are 8, 10 bucks per million tokens, and the version one behind the frontier is pennies. The collapse in economics from the frontier model is so drastic that there's very little need to stay on the frontier model. What this necessitates, then, is a very strong and rigorous eval framework, so that you basically have the output you're looking for and can adjust the prompts depending on the model you're deploying. This is hard stuff, and the tooling is just now starting to evolve; a lot of it is local open source software that you plug into your virtual environment or your Docker container running on a MacBook. Try getting that into a production environment with 40,000 people: very, very, very difficult. And because these models are probabilistic, stochastic, non-deterministic, that adds an entirely new dimension of difficulty. So, no kidding, very hard stuff, very cutting-edge stuff, very new stuff.

Michael Watson: The tooling.

Jason Strimpel: The tooling is just starting to come online. Even two years ago I was working at AWS in their generative AI services. My goodness, was it an awful experience trying to get some basic things running on Bedrock that you could probably do now more easily. Even AWS was struggling with this two years ago. But it's critically important, critically important.
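A sketch of the eval gate this implies: only fall back from the frontier model to the cheaper trailing version if it still clears the golden suite. Model names, the golden pair, and the `complete` client signature are placeholders, not any provider's API.

```python
# Sketch of an eval gate for model swaps: only fall back to the cheaper
# trailing model if it still clears the golden suite. Model names, the
# golden pair, and the `complete` client signature are placeholders.
GOLDENS = [
    ("Summarize: revenue up 12% YoY, margins flat.",
     "revenue +12% yoy; margins flat"),
]

def passes_evals(complete, model: str, threshold: float = 1.0) -> bool:
    """`complete(model=..., prompt=...)` is assumed to return a string."""
    hits = sum(
        1 for prompt, want in GOLDENS
        if want in complete(model=model, prompt=prompt).lower()
    )
    return hits / len(GOLDENS) >= threshold

def choose_model(complete) -> str:
    # Prefer the cheap model only when it clears the regression suite.
    cheap, frontier = "small-model-v1", "frontier-model-v2"  # hypothetical
    return cheap if passes_evals(complete, cheap) else frontier
```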

Christos Koutsoyannis: With the emergence of MCP servers, how do you see the data access and data ownership environment, the incentive structure? Controversial or not controversial?

Didier Rodrigues Lopes: Very controversial. I think big companies like FactSet or Cap IQ are going to have a problem, fundamentally, because they are playing both games. On one side they are a data business, so the MCP server is clear: you want data consumption. But on the other side they make a lot of money at the interface level, and you don't have the incentives to allow external data onto the workspace when you have the data yourself. So it's going to be interesting, in the AI space at least, because I feel you're going to have to pick a side. In the AI world, either you are a data business, and there's a ton of value in the data, maybe even the biggest value is in the data, or you are the MCP client, which means you are the application layer, the infrastructure and data interface where people actually get workflows done. But then you need to abstract yourself away from being a data vendor, because if you are a data vendor, you don't have the right incentives to let people connect other data to your product; you have the data. So I think that's going to be a very interesting one. I know these big firms are releasing MCP servers; I've not seen them releasing MCP clients in the workspace. That's why earlier I said I think we are actually one of the first financial workspaces that allows MCP clients. I want to see if they ever do, because I feel it's going to be interesting. They might do it, but with guardrails around which MCP servers are allowed, maybe starting from: OK, your own customer data we will allow, just to de-risk it from a customer perspective. But I don't see them allowing, for instance, fundamentals data any time soon.

Christos Koutsoyannis: Makes sense. The world of incentives is changing, and it's changing in a lot of ways. Michael, how do you think about incentives in the sense of the licensing model? Data licensing agreements are built for fundamental research and one-year terms. How do you see that changing in the world of LLMs and monetization?

Michael Watson: Yeah, a couple of remarks on that. One, just on the data licensing side: what was demoed earlier by Kirk (I don't know if you were here) was an example of completely disrupting a data licensing model, where you're able to get access to that credit card spend data (I'm not sure which credit card), pay per query, pay per insight. That is a completely different model from having to sign a one-year license for that data. Being able to just pay per consumption, I think that's really badass. Didier and I were even talking about the text-to-insight piece. I don't know if people noticed how that was designed, but it was actually using intelligence on the MCP server, not just a simple wrapper around a SQL or REST API. But I think it's not just licensing for data that has issues; it's going to be licensing of anything.
As the cost of creating software falls closer and closer to zero, more of that value accrual goes to experts, people who can very clearly articulate what good looks like, and the cost of scaling that expert, scaling that intelligence, is lower than it has ever been. So I don't think it's just the data licensing models that are going to get disrupted; I think it's general SaaS as a whole, and more of that value accrual goes either to the energy producer, the model provider, or the person who has true expertise, true domain knowledge and understanding. Didier talked about this a little on our podcast: for example, the Scale AI founder, with the value Meta paid for Scale AI, billions of dollars or whatever it was. I think that's going to reverberate across industries. Whoever is an expert in insurance, or in defining ontologies, or in understanding high-frequency trading, can develop that expertise over ten years at an employer, then very quickly articulate: hey, this is the actual best solution as it exists right now, and not have to raise capital to hire people, because those people are no longer a prerequisite to reimplement that high-frequency system. They can do it themselves and scale it horizontally across the organization. But it's going to be difficult to license out that software, because that same person is easily disruptible. So you find business models where it looks like a service: you get the software almost for free, and you pay money for the expertise. The software you leave behind, the code, is what we think right now is expensive; that's what you currently have to hire a bunch of people, hands on keyboards, to write. But as that cost goes down, companies' willingness to pay for that on an ongoing license model goes down with it. What they're willing to pay for the expert to come in and say what good actually looks like within your organization, I think that's going to skyrocket. So the value accrual shifts away from licensing. Data is still going to be there; the way Kirk is approaching it, data providers are still going to make a bunch of money, but in a lot more smaller checks rather than a few big ones. If you have quality data, that's always going to be something people have to pay for, until that moat erodes. But it's on the software licensing side that I see pricing getting closer and closer to zero, while the valuation of expert services continues to increase.

Christos Koutsoyannis: Domain expertise matters so much. Sorry, yeah, go ahead.

Kirk McKeown: Just to build on that: 18 months ago, Silicon Valley moved from a software business model to a hedge fund business model. Even Meta: they're buying teams from each other for hundreds of millions of dollars, which we've been seeing from Millennium and Citadel for the last 15 to 20 years. They're locking people up. And at the end of the day, the thing that also matters in that frame is that scale does matter to a point while capital is flowing, but it's jungle economics, right?
It is literally eat what you kill. Supply, demand, differentiation, and competitive advantage matter more than anything else. And P&L is how people get paid on Wall Street; Silicon Valley doesn't usually get paid on P&L. So there's a culture change happening right now, and they can't see it because they've raised so much capital, but they're also burning a shit ton of money. What's really interesting is you see how furiously Wall Street competes for talent, data, and edge. What I think is going to get really interesting is that you're going to have agentic workflows collecting data and then managing a portfolio of inputs. Because we're also talking about a couple of different data asset classes, right? The stuff we live in is transaction data: a measure of time or money, or how companies interact with one another. Most LLM frameworks, and where a lot of the big guys spend 80% of their time, is in text. And text is a different problem. One of the things we believe is that as insight infrastructure gets more intricate, GPU consumption on a per-capita basis is actually going to level off more meaningfully, because the chips aren't going to have to work as hard. Boiling a tea kettle is different from boiling the ocean, and so far everyone has been boiling oceans. But if you boil a tea kettle around inventory management systems, supply chain lead-time management, all the different logistics management agents managing all that stuff, they're going to need real-time data coming in, and they're going to do probabilistic decisioning, just like what happens in high-frequency trading. So I actually think that world looks just like Wall Street.

Christos Koutsoyannis: It's also amazing how much LLMs and gen AI, and whatever comes next, enable domain experts. You see data science failing in health tech, or at producing hedge funds: the big tech firms never became successful hedge funds. You see it in every cross-domain attempt. The value of domain expertise and experts, I agree, will grow exponentially. Stefan, what are your thoughts on fully programmatic access to data, where the incentive system is going, and modular licensing?

Stefan Jansen: I was thinking about this issue of domain expertise: how do you acquire the skills to actually be good in this world? Because there's this weird paradox, where AI enables you to do all these things, but you already have to know a lot to make good use of it. What are the skills, right? You've got to figure out what the right questions to ask are, what the problems to solve are, what kind of context you need to provide. You have to already understand the problem to a good extent in order to be able to solve it through AI. So how do you learn that?
You have to go through a whole path of learning and experience, solving actual problems, to acquire the skills; that is ultimately what we call domain expertise. We've been solving problems in some domain for a decade or two, so now we know these problems, what they look like, and what their solutions are. In some of the conversations I have with young graduates, they really struggle to bridge between being really fast, using ChatGPT and friends to produce things in their work, and the feeling that they're really staying dumb. How do you get from the side where you're learning to the side where you're actually capable and competent to use it? I think this is really hard for young folks, and it clearly puts a huge premium on already being on the other side, where you understand how to interact with the model, you understand context and all that. So yeah, I think that's a really interesting question, and it will be interesting to see how it evolves.

Jason Strimpel: I'll take the business-guy angle on that.

Christos Koutsoyannis: Yeah.

Jason Strimpel: So I have two small kids, and I was up late drinking a lot of alcohol with a friend of mine, and we got to talking, old-guy stuff. Funny to put those two things in the same sentence; fully responsible, right? His daughter wants to be a pilot, and she asked him: Dad, will a pilot still be a thing when I'm old enough to be a pilot? And this guy is very cerebral, kind of hippie-dippie, sits in the river and all this, and he really got impacted by that. So I call this "sex, drugs, and AI." Follow me here.

Christos Koutsoyannis: This is going well.

Jason Strimpel: You wanted controversial, right? No, it's the things you have to talk to your kids about. We were talking about this earlier: you've got to talk to your kids about sex, you've got to talk to your kids about drugs, and now you've got to talk to your kids about AI. It's another angle. I was mentoring a guy who's a junior in undergrad, about to graduate with a computer science degree, and he said: I just did my entire thesis on Claude in an afternoon. I was studying for this thing all quarter, and I went online and did this thing and it just wrote the whole thing. Why am I going to be a software developer when Claude can write all the software? Paralegals, radiologists: there are these huge swaths of careers now that young people are not going to engage in. Or you get this false sense of mastery: you ask Claude, you get your answer, and you feel smart, when in fact you're not going through the pain of learning something the hard way. How many of you are programmers? Anybody a self-taught programmer? You know how bloody hard that is, teaching yourself. How many hours of your life have you spent sitting there staring at bugs? And now you almost never have bugs, right, because you recognize them so quickly. If you learn to code with Claude, are you ever going to have that experience? And I'm just talking about coding. Imagine trading, finance, some of the most complicated professions on earth, where you've got to go figure it out and compete with people who have actually gone and lost money trading.

Christos Koutsoyannis: You're all experts, but what does that mean for graduates?
I see this with my students at Columbia: what are everyone's thoughts on the skills the next generation needs?

Stefan Jansen: I think it's a difficult transition period, right? There are always these rapid changes. Ten, fifteen years ago you got phones, you got social media, and then there's a period where people really struggle to find a new way of dealing with it. I actually think the kids, the five-, six-year-olds, are growing up with this in a whole different way. They will find a much more natural way to leverage the skills and the tools they have; they will find their own way to solve problems, through the path they have to go through, and maybe develop entirely different skills, because they know what they have at their disposal. The hard part is for the folks who get hit by this at a certain time in their life: they're 15, they're 20, early career, and they're struggling with the tools, and companies are not set up yet to train people; they don't really know what to give them. But folks still have to learn something, right? So there is a lot of opportunity on the one hand: you get to do much more than before. But can you really use Claude Code, if you are not a good programmer, and actually produce stuff that is really sophisticated, of value? I'm not sure. I use it all the time, eight hours a day, and I wonder: if you don't know what the big picture should look like at the end, whether from a book or from experience, I think you can get heavily lost. So I wonder if these folks are actually able to. That's what I mean by the paradox: it's supposed to help exactly the folks who don't have the experience, but unless you have already been through it, you're not going to contribute.

Michael Watson: If I could be a little more optimistic for people who are a bit younger: being good with AI is a skill, and it's a muscle you have to flex. I tell our new hires that I don't think you really understand how to use AI until you've spent 300 hours with Claude Code, realizing it's not just a tool for writing code; it's an environment where an AI can have full access to your computer and interact with tools the same way you can, whether that's Linux and Bash or Windows PowerShell. And it takes time to truly internalize what that means; it requires somebody to be hands-on with that computer, using it. So there's a huge gap right now, an arbitrage opportunity, between the people who haven't hit that 300-hour threshold needed to be truly aware of what's possible and the people who have. Even though you don't have domain expertise, you have time, and you have an arbitrage opportunity that is probably better than any skill-based arbitrage opportunity that's ever existed. So it's actually advantageous to be going into the workforce right now. Yes, you have to approach the problem differently from what you thought you were originally going to be exposed to in terms of career path. But the gap between what's needed and what the workforce can actually provide, in terms of understanding how to use AI effectively, is huge.
It really takes six months of hands-on-keyboard upskilling, with Claude Code or Codex, working with ChatGPT, working with OpenAI, working with some of the frameworks that sit on top of them, to realize that all of these MCP servers that are wowing everyone right now, you can literally learn how to write one in an afternoon.

Jason Strimpel: And to understand the limitations.

Michael Watson: Yeah, exactly. And I think there's also an element of false expertise among people in their mid-40s or mid-30s who think: I've been doing this for years, I actually am an expert. Because if you were an expert, you would be able to clearly articulate, without anything involving AI, a workflow process within the enterprise: how data comes from one place, people do certain tasks, the output of that task is handed off to somebody else, and they take it and produce some work product. That's how we handle reconciliations, that's how we handle our accounting workflow, that's how we handle our portfolio construction and risk processes. How many people can clearly articulate that right now? Pretty much no one. So there's this false sense of expertise where people who are a little bit older think: how are these younger kids going to understand the nuances of what's going on? But we never grew up in an environment with an actual need for clear articulation of a process, so even though we can do specific tasks, we've never trained ourselves to clearly articulate the work we do such that AI can effectively use it. So yes, we as experts have domain knowledge that is special, but we've never learned how to articulate it in a way that AI can take and run with. If you're younger, you can learn that now and meet us in the middle, because it's something we have to learn now too, with the experience we have: the importance of clear articulation of workflows and processes, of analysis inputs and outputs, which doesn't require writing code but does require understanding a system. And I see that very rarely among people who need to be experts.

Christos Koutsoyannis: I strongly agree with the optimism. My general view is that, the same way we moved from hunter-gatherers to agriculture to manufacturing to services, for most of my career you would see most people spending most of their time doing manual stuff, copy-pasting, wrestling with something that could be automated. And similarly with any domain expertise: the fact that we can now focus on the value-added stuff, I think, speaks volumes.

Didier Rodrigues Lopes: Yeah, I'm going to double down on that. I think AI is the best thing that has happened to the human race, honestly, because it's going to push humans to be better at everything. If you are average at what you do, and AI can come for it, that's probably a good thing: you need to raise the bar, you need to be better, you need to double down. When Excel was invented, people had to start learning how to use Excel and VBA to get better at their jobs. So if your job, or whatever your highest skill set is today, can be raised with AI, is that a bad thing?
It pushes you to go harder into a specific vertical and figure out: how can I use AI to get better? This transition is, of course, happening faster than anything else in the past, but ultimately I think it's going to push the human race forward faster.

Christos Koutsoyannis: So, on that note.

Stefan Jansen: Obviously, if you're well prepared, AI is indeed the best thing that can happen, and the folks in this room are probably pretty well prepared. When you look around, though, there are some people who are prepared, and then there's the whole rest. There's a large swath of the population that is really not having a good time with this. They will, eventually, over several years; I think people will ease into this. The technology is great; I'm just saying what's true. I was struck, talking to young folks who come out of good schools and go into J.P. Morgan, by how much they struggle with how to position themselves: as somebody who is just ChatGPT-ing everything, as opposed to really trying to learn something profoundly by being slower. That was something I would not have expected, so that was interesting.

Christos Koutsoyannis: This is fantastic. We're almost running out of time, and I want to leave space for a few questions, so maybe we can do a very quick lightning round on words of caution. Maybe it's the transition, maybe it's something about reliability or expectations, or anything else: what keeps you up at night? A quick lightning round of caution.

Kirk McKeown: Thirty seconds. I spoke at an off-site for somebody about a week ago, and I said very clearly: if you watched the Emmys two Sundays ago, there was a Google Gemini commercial about AI, and it was a guy talking to a plant. He asked Gemini why his plant was mad at him. And I looked at my wife and said: if that's the use case for AI right now, that's when Carbon Arc goes out of business. We're talking about the future of humankind, and I think we're still in the very early innings of this journey. We just have to be patient and strap in, because it's going to be a long transition.

Christos Koutsoyannis: Stefan, your words of caution?

Stefan Jansen: Yeah, it's this whole elastic band, right? People are running ahead dramatically; the pace of change over the last two, three years, I don't think I've ever seen anything like it. It's insane. But the folks at the edge of the experience, you use Claude Code yourself eight hours a day; you're probably in the 0.1% of users in the bigger scheme of things. You've got to realize that there are a lot of people who are left behind. In practice, when we build generative AI solutions in companies, we always try to get generative AI out of the solution, because of the lack of reliability, the stochasticity, the whole thing, when you can do it without it, and often you can. You can use it in the design of the solution; you can use it
to have the subject matter expert design exactly what has to happen, great. But as soon as we have that, we often go all the way to algorithms.

Michael Watson: Falling behind: I think anyone trying to compete in this space right now has a natural sense of FOMO, that somebody else is solving something they're not, or that they're falling behind. And I think being comfortable with that, knowing that I'm out there working hard and doing everything I can to take advantage of this opportunity in front of me, and that whatever the finish line is, as long as I'm trying my darndest right now, it's going to work out in the end. Be comfortable with that fear, and know that fear is driving you to be better. So for anyone out there who feels the same way: we're all in it to win it, and I'm sure everyone here is going to be wildly successful in ten years. There's a reason it's 9 o'clock at night and you're at an AI fintech conference. If anything, that's a signal we're all going to be wildly successful when we have drinks 20 years from now, if this is the type of investment we make in our free time at 9 o'clock on a Thursday night.

Jason Strimpel: Second on that. I've got two things. I tend to take things to the limit to see what the tangent looks like. So imagine a case where every human is hyperproductive. The US economy can grow at 5%; what if we could do it at 20%? What does that look like? That's the kind of thing I think about, but more acutely...

Michael Watson: That was good, man.

Didier Rodrigues Lopes: So I'm super bullish on AI, but there's one thing I cannot wrap my head around, and it's the AI pendant. I think they are inevitable; they're going to happen. But man, I enjoy not having them around right now. I enjoy being able to be with my wife or friends, having a beer, and just having a conversation where we're really open, and there isn't this constant device recording and making suggestions. So I think it's such a good time to be alive, because you're at the perfect early stage to be part of this, but at the same time you can still have that human touch. And I hope that doesn't go away. That's the only concern I have.

Jason Strimpel: Yeah, I'm taking my other 20 seconds.

Michael Watson: I like that: appreciate it now. It's like the time before cell phones. I went to high school without a cell phone; we weren't all on social media all the time. You look back at pictures from the 2000s and think, oh, we were so innocent back then, it was so special. To your point, it's probably the exact same thing right now. Appreciate it.

Jason Strimpel: So it's the opportunity cost of not working on the right thing. For all the startup people, the kind of Y Combinator rule of thumb is ten years, billion-dollar valuation; that's the rough heuristic. And a smart person used to ask his students: what's the most important thing you could be doing right now, and why aren't you working on that thing?
So it's my job, for myself personally, to figure out what that most important thing is, because I can get obsessed about whatever I want; I just have to pick the thing I want to get obsessed about. I think we're at this crazy inflection point, where we've got to make sure we're focusing on the right thing and getting obsessed about the right thing, so that in ten years we don't look back and realize we missed the Internet again, or missed Facebook, or missed AI.

Christos Koutsoyannis: Questions from the audience? We'll save time for a few questions. There's a hand at the back.

Audience member: Really learned a lot. Quick question: how do you see the impact of LLMs and other tools on legacy stacks, helping firms address technical debt? Remember when COVID first hit, they needed all these COBOL programmers to come back and help with the load. What's your perspective?

Michael Watson: I can touch on that, because that's the world I live in every day. We'll go into a fund that has 20, 30 years of legacy infrastructure. They'll have two different fund admins, maybe an accounting system that no one understands, a pricing engine, seven different SQL Server instances with literally thousands of tables and thousands of stored procedures, all undocumented. And there are dozens of firms just like this. Just being able to use Claude Code and describe: hey, you're going to scan all the tables in the information schema, pull all the columns, look at the top five values of each table, and store that in a knowledge graph, with a node for each table and edges to the columns, and the columns themselves as nodes. Then do the same thing with all the stored procedures defined in the databases, and use an LLM to say: all right, what are all the tables this reads from, and what are all the tables it writes to? Identify the columns too, and make those edges between the stored-procedure node and the tables it reads from and writes to. Now scan all the upstream systems, what exists, and what those APIs are, so we start connecting the dots to where the source of that data is: Bloomberg, an EMS, Wall Street Office, Geneva, Arcesium; there are all these different vendors in the space. You can scan all of those with a very simple prompt with Claude Code and identify them as upstream sources of these systems. Now find out where the actual reports are; a lot of those are going to be in email, and you pull those from Outlook. They might have a Tableau, a Power BI, or an SSRS instance for reporting; you can scan all of those the same way. So the ability to build an entire knowledge graph of the flow of information, using AI effectively on a legacy stack, is easier than it's ever been. And you can do all of that with open source tooling and off-the-shelf state-of-the-art models, in probably weeks' worth of time. Once you have that, the knowledge graph becomes the single best source of truth for what's actually going on within an organization and its legacy systems. You can then keep enriching it by parsing whatever documentation they have. Take a Fireflies or an Otter, connect it to all of your meetings; when you're talking to a subject matter expert, make sure that transcript gets parsed.
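A crude sketch of the stored-procedure pass Watson just described: a regex scan over procedure bodies draws read/write edges into the graph. In practice an LLM would label the ambiguous cases; the names and the sample body are illustrative.

```python
# Crude sketch of the stored-procedure pass: a regex scan over procedure
# bodies draws read/write edges into the graph. In practice an LLM labels
# the ambiguous cases; names and the sample body here are illustrative.
import re
import networkx as nx

g = nx.DiGraph()

def link_procedure(g: nx.DiGraph, proc_name: str, body: str) -> None:
    g.add_node(proc_name, kind="stored_procedure")
    for table in re.findall(r"(?:FROM|JOIN)\s+(\w+)", body, re.I):
        g.add_edge(table, proc_name, rel="read_by")
    for table in re.findall(r"(?:INSERT\s+INTO|UPDATE)\s+(\w+)", body, re.I):
        g.add_edge(proc_name, table, rel="writes_to")

link_procedure(
    g, "usp_price_book",
    "SELECT * FROM positions JOIN prices ON ... INSERT INTO pnl ...",
)
print(list(g.edges(data=True)))
```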
And when you're asking them about a specific table or a process, you index that same knowledge, so the engineer in Bangalore, or the engineer who's been working on the system for the last two years, can help fill in the gaps in that knowledge graph. You build up a single source of truth that understands that legacy system better than any person in the organization, faster and more effectively than we've ever been able to do before. We do that every day; if anyone has that problem, that's like half of what Hedgineer does. And you keep scanning it, right? You have those same processes continuously indexing and scanning.

Kirk McKeown: Yeah.

Christos Koutsoyannis: More questions? Thank you.

Audience member: This is for Jason, but maybe everybody else too: what would you tell a junior studying today?

Jason Strimpel: I mean, paradoxically: code. I think being able to write code is one of the most leverageable skills you can have anywhere in your technical toolkit. Now, do you want to make a career as a software engineer?

Didier Rodrigues Lopes: I don't know.

Jason Strimpel: I think, as Michael was saying, if you're in your early or mid-20s, honestly, I tell people: take as much risk as you can. You literally have nothing to lose; you have no responsibilities. Obviously there are some, but you don't have a kid or a mortgage. Go take as much risk as you can, learn as much about the world as you can, start a business. There has never been an easier time to start a business; the first billion-dollar one-person business is coming. Scale AI was about 15 engineers or something like that.

Didier Rodrigues Lopes: $15 billion.

Jason Strimpel: So that's my best answer. I'm about 80% comfortable with that; I think there's some refinement that needs to happen.

Didier Rodrigues Lopes: I have a question for Kirk. Building on what Michael and I were talking about earlier: the demo from Carbon Arc was awesome. It showed that you could get credit card data for, say, Walmart. And Michael asked a really good question: OK, what if you ask it to compare that with Amazon? How do you price that? I was at the back there, and it was just one single MCP tool call, and it just gave the answer. Basically, that means you built the intelligence behind it to get both answers. So my question is: how do you think about pricing that? You did extra compute to produce that intelligence, so how do you put a value on it? Because that's super valuable for the person asking.

Kirk McKeown: Yeah, it's a good question, because we're still hacking through that, to be completely frank. We're in beta now, and one of the reasons we went into beta was, A, to get feedback and, B, to figure out pricing. The way we're thinking about it, because of the context layer, is that it actually becomes more of an ARR tool than a consumption-based tool, where you have different levels, then query structure, and then the way you pay your providers, which is actually the bigger problem to solve. It's not so much the compute or the context; it's paying people back.
We're probably going to run a T-cost framework and a pool of capital, with an attribution layer back to our suppliers, because we have revenue agreements with all of our suppliers. Basically a T-cost framework. And the reason it's fast and clean is that we've modularized the decision matrix in the stack; everything's Legos in our stack, and we're building out more Lego frameworks. Because if you think about it, an entity plus an event creates an expected and an actual impact. And in terms of event structure: if you work at a hedge fund, if you trade at SAC, there are 16 to 20 alpha events per year per ticker: three-day weekends, management decisions, earnings calls. Over time, as more data comes into the market, there will be thousands of events, and it'll look like an event management system for this infrastructure, and you'll just be living in those Ts. As the Ts get smaller, the impacts will get much easier to predict, and everything becomes binary. But fundamentally we're looking at an individual license, an academic license; it'll be based on user counts and then query counts, at different levels and tiers, and we'll cash people out or roll their balances on a month-to-month basis, and we'll see how that works. It's kind of the Wild West.

Michael Watson: The pricing model of that is such an interesting problem; it's the pricing model of agents. That wasn't an MCP server, it was an agent with intelligence in it, doing arbitrary decision-making and sending back the results. Whatever you come up with for the pricing model of that is probably a massive company in and of itself. It was really cool.

Kirk McKeown: That's where I'll give you kudos; the way we think about costing is cool. We're metering everything in the back, and there was actually no metering solution, so we built our own power meter. We're metering the cost structure of the entire stack on a per-megabyte basis, because you have to get to marginal cost: data monetization is going from an oligopoly to a commodity. We think it's a branded commodity, because people will pay for Paychex data rather than Gusto data; there will be a discrepancy based on size, scope, and shape. But fundamentally we are squeezing everything down to marginal cost, because that is the lever point in a commodity industry. And then we spread the compute across that marginal cost, because we acquire the supply once and push it out to, right now, 70 enterprise customers. Once that gets to 2 million agents hitting the stack, buying a couple of dollars' worth of data, we'll keep pushing cost down, because nobody else will be able to compete.

Christos Koutsoyannis: So I want to do one last thing, which is to say: the average asset manager out there is so far behind, but all of you on this panel have an amazing opportunity to revolutionize how investment is done today. So thank you all for being here tonight and showing up.