Demo 18:45 - 19:00 (15 min)

Live Demo #1: FactSet MCP and Content Tools

Thursday, March 12, 2026
Paris, France

In this session, Mark will demo the FactSet MCP server in action using a Claude Enterprise model and tools. The FactSet MCP server provides tools to access core FactSet content services such as fundamentals, estimates, company, people, and pricing. More info: https://developer.factset.com/mcp

Speaker

Mark McGillion

SVP, Senior Director Engineering @ FactSet

Summary

FactSet: MCP Service & AI-Ready Data Infrastructure

Speaker: Mark McGillion, SVP, Senior Director Engineering, FactSet
Date: March 12, 2026
Event: Paris — Market Data x AI (Finteda / FactSet)


Speaker Background

Mark has been at FactSet since 2017. Previously at ICE Interactive Data and Goldman Sachs. Engineering background in web development; PhD in speech signal processing with MLP neural networks (25 years ago, before Google's transformer paper in 2017). His team is responsible for FactSet workstation web applications (Vue.js, Angular), web components, custom client integrations, and MCP servers.

AI and Agentic Solutions at FactSet

FactSet has significant investment across AI, agentic solutions, and MCP:

- FactSet MCP Service — Released December 2025, currently 17 tools and growing
- 70+ MCP servers and clients across development, staging, and production
- 480+ tools across all services (~6 tools per server)
- 300+ Git repos managing the infrastructure (mostly Python)

MCP Use Cases at FactSet

1. Content distribution — The FactSet MCP Service (client-facing, currently live)
2. Screening and auditing — Coming soon. Screening enables clients to search over content and documents (including client-contributed documents). Auditing provides transparency, content verification, explanation, and source linking to combat hallucination.
3. Client content distribution — Services for clients who provide portfolios to FactSet (asset management, wealth management) to receive that content back via MCP
4. Product engineering (Mark's team) — Tools across the software development lifecycle: development, testing, architectural blueprinting, entity resolution
5. Operational AI tooling — Cloud cost billing analysis, cost attribution to services
6. Client interaction — MCP tools and Claude skills supporting client meeting preparations

Engineering Challenges

- Understanding how MCP, tools, and skills impact development and client workflows
- Token management and cost control
- Sharing configurations (Claude skills, plugins) — e.g., developing skills for migrating legacy infrastructure (old Angular, PHP) to newer stacks
- Making content AI-ready — Organizing data into a well-described taxonomy, providing structure where possible, and vectorizing everything else
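The "AI-ready content" pattern described in the last challenge can be sketched in a few lines: place each record under a taxonomy path, keep structured fields structured, and vectorize the remaining free text. This is a toy illustration, not FactSet's actual pipeline; the hashing vectorizer below is a stdlib-only stand-in for a real embedding model, and all field names are invented.

```python
import hashlib
import math

def hash_vector(text: str, dims: int = 8) -> list[float]:
    """Toy bag-of-words hashing vectorizer (stand-in for real embeddings)."""
    vec = [0.0] * dims
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dims] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]  # unit-normalized

def make_ai_ready(record: dict) -> dict:
    """Attach a taxonomy path and a vector for the unstructured text."""
    return {
        # well-described hierarchy: where this record lives in the taxonomy
        "taxonomy": ["content", record["source"], record["type"]],
        # keep structure where structure already exists
        "structured": {k: v for k, v in record.items() if k != "text"},
        # vectorize everything else
        "vector": hash_vector(record.get("text", "")),
    }

doc = {"source": "estimates", "type": "consensus", "ticker": "AAPL-US",
       "text": "Consensus EPS estimates revised upward"}
ready = make_ai_ready(doc)
```

The point of the sketch is the separation of concerns: taxonomy and structured fields stay queryable as-is, and only the residual free text pays the cost of vector search.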

FactSet MCP Server: Architecture

The FactSet MCP Service is an implementation of MCP with streamable HTTP transport, serving as middleware (broker) between agents and FactSet's suite of APIs. It is permissioned on a per-client basis.

Key design principles:

- Content-first approach — One tool per data source (not per function). Data-driven routing is handled server-side, simplifying client integrations.
- Dynamic self-describing tools — Descriptions are fetched at runtime and provisioned on a per-client basis. The full tool descriptions reach over 3,000 lines of Markdown.
- Version control via service path for seamless tool upgrades without client-side dependency changes
- Rich validation and improved performance — Single requests handle complex data needs server-side, reducing client round-trips; simpler tool schema reduces verbosity and eases maintenance
- Build vs. buy — The content-first design meets clients halfway, reducing the overhead of integration for those on the borderline between building their own solution and buying one
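At the wire level, an MCP client over the streamable HTTP transport speaks JSON-RPC 2.0: it initializes a session, lists tools (this is where the dynamically provisioned, per-client tool descriptions arrive), and then calls a tool. A minimal sketch of those three message bodies, assuming the current MCP specification; the endpoint URL, tool name, and arguments below are placeholders, not FactSet's actual service details:

```python
import json

MCP_ENDPOINT = "https://example.invalid/factset-mcp"  # placeholder URL

def jsonrpc(method: str, params: dict, req_id: int) -> str:
    """Build one JSON-RPC 2.0 request body to POST to the endpoint."""
    return json.dumps({"jsonrpc": "2.0", "id": req_id,
                       "method": method, "params": params})

# 1. Handshake: declare protocol version and client info.
init_req = jsonrpc("initialize", {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": {"name": "demo-client", "version": "0.1"},
}, req_id=1)

# 2. Discover tools. Descriptions are fetched at runtime, so the server
#    can upgrade tools without any client-side dependency change.
list_req = jsonrpc("tools/list", {}, req_id=2)

# 3. Invoke a tool (name and arguments are hypothetical).
call_req = jsonrpc("tools/call", {
    "name": "factset_estimates_consensus",
    "arguments": {"ids": ["AAPL-US"], "periods": "annual"},
}, req_id=3)
```

Note how the "content-first" design shows up here: the client only ever names a data source and its inputs; routing to the right backend API happens server-side.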

Live Demo: Claude Enterprise + FactSet MCP

Demo 1: Mag 7 Estimates Analysis

Prompt: Overview of the Mag 7 over the past three years in terms of estimate changes (restricted to Claude's training knowledge + FactSet MCP only, no web search).

How it works:

- Claude already knows the Mag 7 tickers from training data
- Suffixes tickers with `-US` per FactSet symbology (described in the 3,000-line MD file)
- Calls the FactSet Estimates Consensus Tool for each ticker
- Constructs a rich response with estimates consensus data broken down by company, plus a summary of key themes
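The symbology step above is simple enough to show directly: the model takes tickers it already knows and appends the exchange-specific suffix (`-US` for US listings) before calling the FactSet tools. The helper below is a hypothetical illustration of that convention, not a FactSet API:

```python
# Mag 7 tickers Claude already knows from training data
MAG7 = ["AAPL", "MSFT", "GOOGL", "AMZN", "NVDA", "META", "TSLA"]

def to_factset_symbol(ticker: str, region: str = "US") -> str:
    """Append the exchange-specific suffix per FactSet symbology."""
    return f"{ticker}-{region}"

# Symbols as they would be passed to the Estimates Consensus tool
symbols = [to_factset_symbol(t) for t in MAG7]
```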

Demo 2: Discounted Cash Flow Analysis for Walmart

A more sophisticated financial analysis using multiple FactSet MCP tools:

1. First calls the Metrics tool (vector search) — the 3,000-line MD file instructs Claude (or any LLM) to use the Metrics tool first, as it provides the FactSet codes and symbology (including FQL language and grammar) required by the other tools
2. Then calls Estimates and Fundamentals tools with those codes
3. Claude's built-in skills generate a self-contained web application with the full DCF report
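The Metrics-first ordering above can be sketched as a small pipeline. The three functions are stubs standing in for MCP tool calls, and the metric codes are invented for illustration (not real FQL); the point is the dependency: the other tools cannot run until the Metrics lookup has resolved codes.

```python
def metrics_lookup(query: str) -> list[str]:
    """Step 1: vector search resolving natural language to FactSet codes (stubbed)."""
    fake_index = {"free cash flow": ["FCF_CODE"], "capex": ["CAPEX_CODE"]}
    return fake_index.get(query, [])

def fundamentals(symbol: str, codes: list[str]) -> dict:
    """Step 2a: historical fundamentals for the resolved codes (stubbed)."""
    return {c: [1.0, 1.1, 1.2] for c in codes}

def estimates(symbol: str, codes: list[str]) -> dict:
    """Step 2b: forward estimates for the resolved codes (stubbed)."""
    return {c: [1.3, 1.4] for c in codes}

def dcf_inputs(symbol: str) -> dict:
    # The Metrics tool must run first: the other tools require its codes.
    codes = metrics_lookup("free cash flow")
    return {"history": fundamentals(symbol, codes),
            "forecast": estimates(symbol, codes)}

inputs = dcf_inputs("WMT-US")
```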

Note: This took ~5 minutes to generate — highlighting the current performance challenge. The practical use case is event-driven or scheduled agentic workflows that prepare reports overnight, ready for the analyst at their desk in the morning.
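The scheduled-workflow idea above can be sketched with the stdlib scheduler: queue the slow report build to run off-hours so the output is waiting in the morning. This is a toy, assuming nothing about FactSet's actual orchestration; the delay here is zero seconds so the sketch completes instantly, where a real job would target e.g. 02:00 or react to a data event.

```python
import sched
import time

def generate_dcf_report(symbol: str, out: list) -> None:
    """Stand-in for the ~5-minute agentic report build."""
    out.append(f"DCF report ready for {symbol}")

reports: list[str] = []
scheduler = sched.scheduler(time.time, time.sleep)
# delay=0 for the sketch; a real overnight job would compute the delay
# until the target run time
scheduler.enter(0, 1, generate_dcf_report, argument=("WMT-US", reports))
scheduler.run()  # blocks until all queued jobs have run
```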

Client Usage

Clients use the MCP service with various models (Claude Sonnet 4.6 being popular but expensive and relatively slow). Some use Claude directly, others build their own agents with their own models — including using the MCP service purely to access and store data in a database. FactSet provides the MCP service; clients choose the model and build their own workflows.
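The "MCP purely as a data pipe" pattern mentioned above, fetching content via the service and persisting it in the client's own database, is straightforward to sketch. Everything below is illustrative: the tool name and payload are fabricated placeholders, and the in-memory SQLite database stands in for whatever store a client actually uses.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the client's own database
conn.execute("""CREATE TABLE mcp_responses (
    tool TEXT, symbol TEXT, payload TEXT)""")

def store_response(tool: str, symbol: str, payload: dict) -> None:
    """Persist one MCP tool response as JSON for later querying."""
    conn.execute("INSERT INTO mcp_responses VALUES (?, ?, ?)",
                 (tool, symbol, json.dumps(payload)))
    conn.commit()

# A fabricated response, as if returned by an estimates tool call
store_response("estimates_consensus", "AAPL-US", {"eps_fy1": 7.42})
row = conn.execute("SELECT payload FROM mcp_responses WHERE symbol = ?",
                   ("AAPL-US",)).fetchone()
```

In this pattern no LLM is in the loop at all: the MCP service is just a uniform, permissioned access layer over FactSet content, and the client's own workflows run against the stored data.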


Q&A:

Q: Are clients using Claude or other models?

A: Both. Some use Claude, some build their own agents with their own models. The MCP service is model-agnostic — FactSet provides the data layer, clients choose whatever model they want.


Changes from original summary

- Added that the MCP server is permissioned on a per-client basis (from architecture slide section of transcript)
- Added simpler tool schema / easier maintenance to the architecture key principles (mentioned explicitly in transcript)
- Added build vs. buy point about the content-first design meeting clients halfway (transcript: "Clients who are on the borderline of build versus buy, it's kind of meeting them a bit halfway")
- Demo 2: Added that the 3,000-line MD file explicitly instructs the LLM to call the Metrics tool first, and added mention of FQL language and grammar as part of FactSet symbology
- Q&A / Client Usage: Added "relatively slow" to description of Sonnet 4.6 (transcript: "It's quite an expensive model and relatively slow")
- Q&A / Client Usage: Added the use pattern of clients using MCP purely to access data and store it in a database (from transcript: "They may just be using it to access the data and then storing it in a database")

Full Transcript

Live Demo #1: FactSet MCP and Content Tools
Speaker: Mark McGillion, SVP, Senior Director Engineering, FactSet
Event: Paris — Market Data x AI (Finteda / FactSet), March 12, 2026


MARK McGILLION: And across FactSet. I'll also talk about or introduce the FactSet MCP service, and then I'll give a brief introduction.

For me, I've been at FactSet since 2017. Before that I was at ICE Interactive Data and Goldman Sachs. I've been an engineer forever and my background is web development. In the far past I even obtained a PhD in speech signal processing with MLP neural networks. That was 25 years ago, before Google came out with the transformer paper in 2017.

My team is responsible for web application development, digital web application development at FactSet. What that means is that we are responsible for the FactSet workstation — some of the applications within there, not all of them, but some of them. Vue.js, Angular-based web apps, rich applications, web components and services that support all of that functionality. Also custom integration — so we are responsible for integrating FactSet solutions into client workflows. And we also create MCP servers for a variety of use cases, which I'll describe briefly.

Okay, so we have a lot going on at FactSet with regards to AI, agentic solutions, and MCP. We're using MCP for many different use cases. We've got the FactSet MCP service, which was released in December of 2025. It's over 17 tools — or 17 tools right now, but increasing. We've got over 70 MCP servers and clients currently within our infrastructure across all stages: development, staging, production. We have over 480 tools across all of those services, so about six tools per server, and over 300 Git repos to manage all of that infrastructure. Mostly Python, but others as well.

And they cover a variety of use cases, from content distribution — so that's the FactSet MCP service specifically, although there are others in the pipeline — screening and auditing. Those are new MCP tools that will be coming very soon. Screening, for example, will be enabling clients to search over content and documents, including client-contributed documents. And auditing — auditing is something extremely important in the world of AI. Auditability of content and of responses to ensure that what you're getting as a response from your LLM is not hallucination. So we are working on tools to provide transparency, content verification, explanation, and source linking.

We also are working on MCP services for client content distribution. So for example, clients who provide their portfolios to FactSet for asset management, wealth management, and other use cases — there will be services that deliver that content back to the clients in the form of an MCP solution.

And product engineering — that's in blue, that's my team — we are really involved with using tools across our software development lifecycle. So across development, testing, architectural blueprinting, entity resolution, and so on. A variety of use cases. We use tools for operational AI tooling, for operational management. So for example, cloud cost billing analysis, attributions of cloud costs to relevant services, and understanding the relationship between those services and the usage on the engineering side. And also client interaction — so for example, we have MCP tools and Claude skills that are supporting client meeting preparations.

But it sounds great, but there's a lot of work involved in leveraging those things in an effective way. So we have challenges, probably like any other organization does. Our challenge is to figure out how MCP and tools and skills are impacting our development workflows. We're looking for efficiency, but we're also looking for stability and performance, and also obviously how that impacts our client workflows.

We're obviously working on best practices within the engineering world — that's ensuring that we don't blow the budget. So token management and cost control, obviously sharing configurations. So Claude skills and other configuration that will be used to provide workflows in our engineering world. As an example, developing skills and plugins for migrating legacy infrastructure or legacy code — so legacy versions of Angular, legacy versions of PHP in our web application world, migrating those to newer versions of those stacks or even to completely different stacks. We're using Claude, for example, in that use case.

One of the biggest challenges, I suppose, across FactSet but also for our clients, is making content AI-ready. That's a very big term, isn't it? Very big phrase. What it really means is organizing our data into a well-organized, well-described hierarchy — so taxonomy of the data, providing structure where structure doesn't exist if you can, and vectorizing everything else. But across the whole of FactSet and across our content, that's a challenge.

Okay, the FactSet MCP server. So as I said, that was released in December 2025. It's an implementation of MCP with streamable HTTP transport, enabling clients to integrate directly over HTTP. It serves as a middleware between agents and FactSet's suite of APIs. So it's a broker, effectively. If you know MCP, you'll know what it is, but it's a broker to that API layer, exposing tools — which is all the content. In this case, at the moment it's only content, but there will be other tools like screening and auditing coming quite soon.

And the tools themselves are dynamically self-describing, so the descriptions are fetched at runtime. So if you're implementing an MCP client, those descriptions are fetched at runtime, and they are provisioned on a per-client basis. The full descriptions of those tools reach over 3,000 lines of Markdown. Actually I can show it to you, but I'll have to switch screens.

This enables — but having a dynamic description of that tooling enables tools to be updated easily and quickly. So the clients don't need to upgrade on their side any library dependencies, for example. And obviously version control is important in that situation. So those tools are versioned using the service path, which gives us an option to upgrade those tools as time moves on.

And the tools are implemented in a content-first approach, which is not necessarily intuitive. As an engineer, you don't think about content as an endpoint like a foo function. You think of functionality as an endpoint. But the design of the FactSet MCP tools is that there's one tool per data source. It's a data-driven routing implementation so that the routing is handled on the FactSet MCP server, therefore simplifying clients and client integrations. Those tools provide rich validation. It also provides improved performance so that client implementations don't need to make several requests to obtain the data that they need — so a single request and then the response is handled on the FactSet side — and easier maintenance, so simpler tool schema which reduces the verbosity and repetitive nature of the tools themselves.

Okay, last slide and then I'll move on to the demo. So the MCP server itself, the FactSet MCP server, has 17 tools at the moment and growing. It's leveraging the content API, leverages robust functionality, it's permissioned on a per-client basis. I've mentioned that it's a different design than an API in terms of its content — content-first rather than functionality. So that enables us to streamline the integration for clients, as I mentioned. Clients who are on the borderline of build versus buy, it's kind of meeting them a bit halfway in that it reduces the overhead for them to integrate that MCP service.

Okay, right. So I'm going to move on to the demo. I can't see that screen on here, but I'm going to have to zoom out a little bit. We'll zoom in a bit. So this is the FactSet MCP service.

Okay, so here's a prompt that I tested earlier, and if I've got a reasonable Wi-Fi connection, that still works. So this is a prompt that Claude helped me to write. I think there's quite a few quant-related folks in the audience. I wanted to create a prompt that was maybe relevant for you. And I'd be happy to take requests — if anyone wants to think of one, I'll give you a minute's warning for that. You can think about a prompt that you want me to execute.

So here we are. I'm looking for an overview of the Mag 7 over the past three years in terms of estimate changes. I'm restricting Claude here to only use its training knowledge and the FactSet MCP server. It has a tendency to override my request sometimes, so I have to be very strict with it. And I'm telling it not to use web search and don't create any artifacts.

When you use Claude Enterprise — this is Claude Enterprise — when you use this, it has some fantastic skills. It can create PDFs, it can create web applications, it can create Excel sheets and Word documents and things, but it's quite slow. So for this demo, I don't want to take that much time.

So let me repeat that request now. Hopefully I've got a Wi-Fi connection. There's no microphone over there. Let me zoom in and see if I can read that.

Okay, so the FactSet MCP server — I'll just, while that's running, I just wanted to describe — I skipped over one thing. I just wanted to show you that the connections are configured. I won't click into it. So I've disabled web search here, but I've also had to tell it not to use web search because it sometimes doesn't listen to me. The FactSet MCP server is connected here, so the only knowledge that it should be using now is its own training knowledge, which is obviously cut off at a given point in time. This is Sonnet 4.6, so relatively recent, but it's not going out to the web. It's only using FactSet MCP for obtaining that response.

Okay, so let me see what it's done. Okay, so over the past three years, we're seeing for the Mag 7 — so there's an endpoint in the MCP, or sorry, a tool in the MCP which is called the FactSet Metrics tool, which enables MCP clients to do lookups of some metrics that are used in FactSet APIs. It's a vector search tool.

However, Claude is pretty clever. If I tell it to look up the Mag 7 from its training data, it already knows who the Mag 7 are and also it already knows their tickers. So it'll use that information and suffix those tickers with any additional exchange-specific extensions. And in FactSet's case, we use `-US` for US-listed exchanges, and it'll apply that into any calls into the FactSet tooling.

So in here you can see — there. So it calls the FactSet Estimates Consensus tool to get the estimates over the past three years for each of the Mag 7. As I said, it's suffixed the `-US` on there because our MCP tools come with a 3,000-line MD file that describes how the FactSet symbology works. Then it constructs the answer from the rich content that's returned.

So if I scroll down there, you can see a lot of response. This is estimates consensus data for each of the Mag 7 over the last three years. I've told it not to create anything pretty, so it's just put it out to the web application here, and it's broken it down by each of those Mag 7 companies. And then it summarizes — it's clever enough to come to the summary of that at the bottom, which is pretty nice looking.

I couldn't tell you — I'm an engineer — I couldn't tell you how accurate all of that is. But it's all FactSet content, so in terms of accuracy it's as accurate as FactSet content.

Okay, so there are — I've lost my mouse. There it is. And then it summarizes the key themes at the bottom there.

Okay, so I have other examples that I thought were worth maybe showing — how flexible the tools are, or the MCP tools. This is a discounted cash flow analysis, which is a pretty sophisticated financial tool for estimating future cash flows for an organization. So I've asked it to do a discounted cash flow analysis for Walmart, but I also told it — let me see.

Alright, this one here, it would take a few minutes to run that. So I don't want you to sit and wait for it, but it's created a self-contained web application. So if I just click on that — this was previously created by Claude, but it would take a good five minutes to do that.

The tools on Claude, or the built-in skills — this is a skill used to create this web application — are pretty sophisticated but also not very performant. And I think that's a challenge for us and a challenge for our customers, to really decide how you want to visualize this content. It's a nice idea to give power users the access to this kind of capability. Obviously waiting five minutes for this kind of report to be created is not what people want to do, probably. But this is where you would integrate this kind of workflow into an agentic workflow. Maybe you would have a schedule or event-driven analysis that would be driven through an agent, so that when you land at your desk at 9:00 in the morning — or 6:00 in the morning — that this report would already be ready.

So yeah, it's a pretty rich report, gives a lot of detail on future cash flows. Let me warm it up again. That's using all FactSet content there.

So I think this one's an interesting example because it's using much more of the FactSet tools, or the MCP server tools, than the first example. And you can see that over here. So when I've asked it to give me the discounted cash flow analysis, the first thing it does is it goes to the Metrics endpoint. In our 3,000-line MD file that describes all of our tools, it instructs Claude — or any other LLM — to pull the Metrics tool first. The Metrics tool is a tool that provides a vector search to obtain codes that are required for the other tools. So there's tools in the MCP server such as the Estimates and Fundamentals tools that require codes. We call them in FactSet — and FactSet clients apply — FactSet symbology and our FQL language and grammar. So if you want to provide those codes to these tools, then the Metrics tool is the one that you would use for that.

Okay, I'm being given the sign I need to get off. I was going to ask for a prompt, but am I too late for that? Yeah, okay, I'm too late for that. Okay, thank you very much.


Q&A

MODERATOR: Thanks a lot, Mark? Yeah? One question — maybe, does anybody have a question for Mark?

AUDIENCE MEMBER (Hanan): How are the clients using the service? Are they using Claude or other models?

MARK McGILLION: Some are using Claude and some are building their own agents and they use their own models. So the models are on the client side. We're just providing the MCP service, so they can use whatever models that they want. It depends — I mean, Claude's obviously very popular. Sonnet 4.6 is a very popular model. It's quite an expensive model and relatively slow. So clients are using it in different ways. They may just be using it to access the data and then storing it in a database. But I'm expecting that all of them are using some kind of model, and our clients are building their own agents as well. So whatever creative things that they want to do, they can do that with the MCP service.

MODERATOR: Okay, thanks so much.