Welcome back to Year 2049, your guide to understanding AI’s potential and problems. Subscribe for free to get my latest explainers, guides, and experiments directly in your inbox.
Hey friends,
MCP is one of the key AI trends to look out for this year. If you’ve been hearing about it but haven’t had the time to dive in, this is for you.
– Fawzi
A more detailed explanation
What’s MCP and why is it useful?
In the near future, we might look back at MCP as one of the most significant AI milestones.
MCP, or Model Context Protocol, is a standard for connecting AI assistants to different data sources to give them more context, make them more personal and relevant to specific needs, and improve the output quality. Anthropic introduced MCP in November 2024, and it’s being progressively adopted by software companies this year.
While AI models are improving and becoming more capable, it’s still challenging to connect an LLM to your own data or documents without manually uploading and maintaining the knowledge base. It becomes even more difficult when you want to connect multiple tools together to give your AI assistant access to a variety of data and tools.
It’s like trying to connect different devices to a single system, but each device needs a hyper-specific cable to be compatible. It creates additional complexity, cost, and maintenance for whoever is building it. Any Apple customer who still carries around both Lightning and USB-C cables knows the struggle.
MCP is like a universal USB-C port that connects AI models to the applications where we store our data and documents, like Google Drive, SharePoint, Notion, GitHub, Slack, and more.
How does MCP work?
There are three parts to MCP: the host, clients, and servers.
MCP Host: This is the application layer or interface where you type in your prompts, like Claude or ChatGPT.
MCP Clients: These are connectors that live inside the host. The host spins up one client per server it connects to, and that client handles all the back-and-forth with its server. The LLM you use (like Claude Sonnet 4 or GPT-4o) sits behind the host and works with the context the clients bring in.
MCP Servers: These are the different apps and data sources you can plug into to be “served” with the data you need, like Google Drive and SharePoint. In some cases, servers also let you complete actions on the original application (like modifying a document), which makes them agentic. Each server can provide LLMs a combination of the following “objects”:
Resources: Data and documents like files, databases, source code, images, videos, and more.
Prompts: Pre-built prompt templates to execute specific tasks. For example, a GitHub server may have a pre-built prompt that can help you analyze code. That way, you wouldn’t have to prompt engineer it yourself.
Tools: Actions that you allow the AI assistant to take across your external apps. For example, a Google Drive server might allow you to prompt your assistant to create or modify a document in a specific folder.
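To make those three object types concrete, here’s a toy Python sketch of what a server might expose. This is purely illustrative: the class and names (SketchMCPServer, the drive URI, create_doc) are made up for this explainer, not the real MCP SDK.

```python
# Toy illustration of an MCP server's three object types.
# All names here are hypothetical, not real MCP SDK APIs.

class SketchMCPServer:
    """A stand-in for an MCP server exposing Resources, Prompts, and Tools."""

    def __init__(self, name):
        self.name = name
        self.resources = {}  # read-only data the LLM can pull in as context
        self.prompts = {}    # pre-built prompt templates
        self.tools = {}      # actions the LLM is allowed to invoke

    def add_resource(self, uri, content):
        self.resources[uri] = content

    def add_prompt(self, name, template):
        self.prompts[name] = template

    def add_tool(self, name, fn):
        self.tools[name] = fn

    def call_tool(self, name, **kwargs):
        # A real server would validate arguments and report errors back;
        # here we just invoke the registered function.
        return self.tools[name](**kwargs)


server = SketchMCPServer("drive-server")
server.add_resource("drive://projects/roadmap.md", "Q3 roadmap ...")
server.add_prompt("summarize-doc", "Summarize the document at {uri} in 3 bullets.")
server.add_tool("create_doc", lambda title: f"Created '{title}' in Drive")

print(server.call_tool("create_doc", title="Onboarding notes"))
```

A real MCP server exposes these same three categories to clients over JSON-RPC messages, but the official SDKs handle that plumbing for you.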
For example, let’s say you wanted to build a custom AI assistant to onboard new employees. You would select an LLM and hook it up to an interface, which becomes your MCP host. Then, you might connect it to different MCP servers like:
File storage systems, like Drive or Box, to find documentation about previous projects (Resources)
IT Helpdesk/Support software, like ServiceNow, to create tickets and log IT issues (Prompts and Tools)
Internal course and training platform, like Workday, to find internal courses and training (Resources)
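Here’s a hedged sketch of how that onboarding assistant could route a question across several connected servers. Everything below is hypothetical scaffolding: the server names and the naive keyword search stand in for real MCP connections and real search tools.

```python
# Hypothetical sketch of a host gathering context from multiple servers.
# Names and the keyword "search" are illustrative, not real MCP APIs.

class FakeServer:
    def __init__(self, name, resources):
        self.name = name
        self.resources = resources  # uri -> text

    def search(self, query):
        # Naive keyword match standing in for a real server's search.
        return {uri: text for uri, text in self.resources.items()
                if query.lower() in text.lower()}


class OnboardingHost:
    def __init__(self):
        self.servers = []

    def connect(self, server):
        # In real MCP, the host creates one client per server connection.
        self.servers.append(server)

    def gather_context(self, query):
        # Pool matching resources from every connected server so the LLM
        # can answer with context from all of them at once.
        hits = {}
        for server in self.servers:
            hits.update(server.search(query))
        return hits


host = OnboardingHost()
host.connect(FakeServer("drive", {"drive://handbook.md": "Expense policy: ..."}))
host.connect(FakeServer("workday", {"workday://course-101": "Security training course"}))

print(host.gather_context("training"))
```

The point of the sketch: once servers share a common interface, the host can treat Drive, ServiceNow, and Workday the same way, which is exactly the duplication MCP removes.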
Who’s adopting MCP?
The major AI model providers have progressively embraced and adopted MCP:
Google announced the MCP Toolbox for Databases at Google Cloud Next in April
Microsoft revealed it’s supporting MCP within Copilot Studio at Microsoft Build in May
OpenAI started letting you connect to MCP servers a few weeks ago
The list of MCP servers is growing, and we can expect most companies to follow. Some notable ones:
Box
Slack
GitHub
Elasticsearch
HubSpot
Notion
Zapier
Perplexity
Stripe
The quick adoption of MCP shouldn’t be surprising. The next wave of AI apps will connect different tools together into a centralized knowledge system or agentic system, and nobody wants to be left behind.
The capability overhang
At Microsoft Build in Seattle last month, Microsoft CTO Kevin Scott mentioned that we’re currently witnessing a capability overhang: AI models are getting more capable, but we haven’t tapped into their full potential on the application side yet.
MCP will close the gap between AI progress and AI products. So far, many AI implementations have created new data silos and faced integration challenges that demand ongoing maintenance.
The potential of MCP is exciting. It breaks down the barriers between systems and gives people the ability to build custom, modular AI tools for their needs. It also creates a more competitive landscape where you can easily switch between model providers and data sources without being locked into specific tools. Maybe it’ll give each of us the personal “Jarvis” we’ve been waiting for.
Further reading
Introducing the Model Context Protocol (Anthropic)
Share this with someone
If a friend sent this to you, subscribe for free to receive practical insights, case studies, and resources to help you understand and embrace AI in your life and work.
⏮️ What you may have missed
If you’re new here, here’s what else I published recently:
You can also check out the Year 2049 archive to browse all previous case studies, insights, and tutorials.