MCP 101 - Turning My AI Into a Real-World Action Machine - Part 1



This content originally appeared on DEV Community and was authored by NARESH


We live in a world where “impossible” is often just code for “hasn’t been reimagined yet.” The ideas that change industries rarely come from following the manual; they’re born in the messy space between curiosity and courage.

Recently, I attended a Mega Workshop by NxtWave, and that’s where I stumbled into something that instantly became part of my journey: MCP (Model Context Protocol). I learned how it works, why it’s powerful, and, most importantly, how it can turn an AI from just “talking smart” into actually doing things in the real world.

If you already know about MCP, you’ll nod in recognition. If not, you’re in for something that could be a game-changer for your future. But yes, you’ll still need to nail the basics first.

Because MCP is such a deep, fascinating rabbit hole, I’m splitting this into two parts:

  • Part 1 (this post): A deep dive into how MCP works, why LLMs need it, and its core architecture.
  • Part 2: The fun stuff: the projects I built, activities I tried, and my personal insights after putting MCP to work.

So, let’s start from the top. What exactly is MCP, and why should you care? Before we dive in, we first need to understand a bit about LLMs and function calling. Don’t worry, it’ll be a piece of cake.

Introduction to Large Language Models (LLMs)

Large Language Models (LLMs) are advanced AI systems trained on massive datasets to understand and generate human-like text. Think GPT, LLaMA, Grok, and many others.

Limitations of LLMs

  • Hallucinations — Sometimes make up facts with total confidence.
  • Lack of Real Understanding — They sound smart but don’t actually “know” things.
  • Data Bias — Reflect biases from their training data.
  • Resource Intensive — Heavy on compute and energy.
  • No Real-Time Awareness — Without special integrations, they can’t fetch or verify live information.

And here’s the big one: LLMs can only answer questions based on the data they were trained on.

So if you ask your AI, “What’s the weather in Chennai right now?”, it’ll either guess or admit it can’t answer because it has no live connection to the real world.

This is exactly where function calling changes the game. Let’s see how.

🚀 Function Calling: The AI Butler With a Phonebook

Imagine you’re a billionaire (congrats!) with a butler named AI. You don’t tell the butler how to do something; you just say:

“Book me a flight to Paris.”

The butler then opens his phonebook of skills (a.k.a. functions) and rings the right contact, your travel agent, without bugging you for every detail.

That’s function calling in AI:

  1. You describe what you need in plain language.
  2. The AI decides which “function” to use.
  3. The function executes, and voilà, you get the result without touching the messy backend stuff.

💡 Why it’s powerful:

It turns AI from a chat partner into an action hero. It doesn’t just talk; it does.

Example:

{
  "name": "getWeather",
  "parameters": {
    "location": "Chennai",
    "unit": "celsius"
  }
}

You say: “What’s the weather in Chennai?”

AI calls the getWeather function behind the scenes, gets the data, and hands it back like a magician pulling a rabbit out of a hat 🎩🐇.
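The three steps above can be sketched in a few lines of Python. This is a minimal, hypothetical dispatcher, not any specific vendor’s API: in a real app, the LLM provider returns a structured tool call shaped like the JSON above, and your code routes it to a matching function (the `get_weather` stub here stands in for a real weather API call).

```python
import json

def get_weather(location: str, unit: str = "celsius") -> dict:
    # Stub: a real implementation would call a live weather API here.
    return {"location": location, "temp": 31, "unit": unit}

# Registry mapping the names the model knows about to real functions.
TOOLS = {"getWeather": get_weather}

def dispatch(tool_call_json: str) -> dict:
    """Route a model-emitted tool call to the right Python function."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]          # the model picked the function...
    return fn(**call["parameters"])   # ...and we execute it with its arguments

# The model's decision, as structured JSON (same shape as the example above):
result = dispatch(
    '{"name": "getWeather", "parameters": {"location": "Chennai", "unit": "celsius"}}'
)
print(result)
```

The key design point: the model never runs code itself. It only emits a name and parameters; your code decides what actually executes.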

But here’s the thing… imagine you’ve got a huge toolbox with hundreds of tools (functions) that could trigger at the same time, most of which you’ll never actually need. Writing and managing all those separate functions is like carrying a Swiss Army knife the size of a fridge.

And this is where MCP comes in.

It’s the protocol that says: “Relax, you don’t need to hardcode everything. I’ve got a smarter way to connect your AI to the real world.”

Let’s dive into it. I’m excited 🤩.

🔍 What is MCP?

Okay, so here’s the deal: MCP stands for Model Context Protocol.

Think of it as a universal translator between your AI model (like GPT, Claude, etc.) and the real world.

Instead of your LLM being stuck in its own head, MCP gives it superpowers to:

  • Talk to tools
  • Pull data from APIs
  • Access live information
  • Trigger actions in apps or devices

And the best part? It’s standardized, meaning once a tool speaks MCP, any AI that understands MCP can use it.

It’s like we went from old-school landline phones (every tool needing its own custom wiring) to smartphones with app stores. Now you just “install” a new MCP server and boom, your AI knows how to use it.

So if function calling was giving AI a single phone to call one friend,

MCP is like giving it the entire contacts list, speed dial, and group chats all ready to go. 📞🚀

🛠 MCP Architecture — The Big Picture

MCP is beautifully modular, so you can plug and play without breaking everything else. Here’s how it works:


1. LLM / MCP Client

This is your AI assistant: GPT, Claude, LLaMA, Grok, whoever’s wearing the cape. The client is where your prompts start and where the final answer lands.

2. MCP Transport Layer

The communication highway. MCP keeps a two-way JSON-RPC channel open between the AI and the tools, over stdio for local servers or HTTP (with Server-Sent Events) for remote ones. No “send → wait → reply” nonsense; it’s live.

3. MCP Servers

The magic middlemen. Each server connects to a tool or service:

  • GitHub Server → fetches commits.
  • Weather Server → checks Chennai weather in seconds.
  • Calendar Server → manages your events.

4. Tools & Data Sources

The end destination: APIs, databases, services, or even hardware. The servers talk to these, fetch data, and pass it back to the AI without you ever touching the complexity.
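To make the client → transport → server flow concrete: MCP messages are JSON-RPC 2.0, and a client asks a server to run a tool with a `tools/call` request. Here is a toy in-process sketch of that exchange; the tool name, hardcoded reply, and in-memory “transport” are illustrative stand-ins for a real server sitting behind stdio or HTTP.

```python
import json

def make_request(req_id: int, tool: str, arguments: dict) -> str:
    """Client side: build a JSON-RPC 2.0 tools/call request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

def toy_server(raw: str) -> str:
    """Server side: handle one tools/call request and return a result."""
    req = json.loads(raw)
    assert req["method"] == "tools/call"
    args = req["params"]["arguments"]
    # Hypothetical weather tool, hardcoded for the sketch:
    text = f"Weather in {args['location']}: 31°C, humid"
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req["id"],
        "result": {"content": [{"type": "text", "text": text}]},
    })

# Normally the transport layer carries these strings; here we just pass them.
reply = json.loads(toy_server(make_request(1, "get_weather", {"location": "Chennai"})))
print(reply["result"]["content"][0]["text"])
```

Notice the `id` field: it lets the client match replies to requests, which is what makes the live, multi-request conversation over one channel possible.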

💡 Why it’s genius:

  • Plug & Play → Add/remove servers anytime.
  • Secure by Design → Servers define exactly what’s allowed.
  • Universal → Any MCP-compatible AI can use them instantly.

📱 MCP: Like Installing Apps for Your AI

Think of MCP servers as apps and your AI as the smartphone.

When you want new powers, you don’t rewrite the phone’s operating system; you just install an app.

  • Want weather updates? Install a Weather MCP Server.
  • Need to pull GitHub commits? Install a GitHub MCP Server.
  • Want to blast messages to Slack? Install a Slack MCP Server.

No messy rewiring. No “reinvent the wheel” coding marathons. Just plug, play, and power up your AI.
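“Installing an app” is often literally one config entry. For example, Claude Desktop reads a `claude_desktop_config.json` file listing the servers it may launch. A sketch (the server names, package, and paths here are illustrative, so check your client’s docs for the exact format):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
    },
    "weather": {
      "command": "python",
      "args": ["weather_server.py"]
    }
  }
}
```

Restart the client, and the AI can see and use both servers’ tools. Delete an entry, and that power is gone just as cleanly.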

🔐 Capabilities & Permissions — The AI Safety Net


Every MCP server clearly defines:

  • What it can do (capabilities)
  • What inputs it accepts (parameters)
  • What it’s allowed to touch (permissions)

So even if your AI goes “Hmm, I wonder if I can delete all GitHub repos?”, the MCP server will just say “Nope, not in your permission list, buddy.” 🚫

This means you can confidently connect powerful tools without worrying about unintended chaos.

Think of it like giving your AI a toolbox where each tool only works for its intended job; the hammer won’t suddenly start trying to cut wood.
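In practice, a server publishes each tool with a name, description, and a JSON Schema for its inputs; anything not declared simply doesn’t exist from the model’s point of view. A simplified sketch (the tool here is hypothetical, and the check is a stand-in for a real JSON Schema validator):

```python
# What the server declares, in the shape of an MCP tools listing.
DECLARED_TOOLS = {
    "get_weather": {
        "description": "Current weather for a city",
        "inputSchema": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    }
}

def check_call(tool: str, arguments: dict) -> str:
    """Gatekeeper: only declared tools, only with the declared inputs."""
    if tool not in DECLARED_TOOLS:
        return f"Nope: '{tool}' is not in the permission list"
    schema = DECLARED_TOOLS[tool]["inputSchema"]
    missing = [k for k in schema["required"] if k not in arguments]
    if missing:
        return f"Rejected: missing {missing}"
    return "OK"

print(check_call("get_weather", {"location": "Chennai"}))  # allowed
print(check_call("delete_all_repos", {}))                  # refused
```

The safety property falls out of the design: the boundary is enforced on the server side, so a curious or confused model can’t talk its way past it.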

🌍 MCP in the Real World — Some Killer Use Cases

1. Developer’s Dream Setup

  • AI uses the GitHub Server to fetch commits → runs them through a Code Analysis Server → generates a report via a Google Docs Server.
  • Zero manual copy-paste.

2. AI-powered Personal Assistant

  • Reads today’s calendar events from Google Calendar MCP Server.
  • Checks travel times via Maps MCP Server.
  • Messages attendees on Slack MCP Server if you’re running late.

3. Research Assistant Mode

  • Queries a Database MCP Server for fresh stats.
  • Cross-checks via Web Search MCP Server.
  • Summarizes findings and emails them using Email MCP Server.

4. Smart Home Brain

  • AI asks IoT MCP Server to dim the lights, set AC to 22°C, and lock the doors all in one command.

🏁 Wrapping Up Part 1

MCP isn’t just a tech buzzword; it’s the missing link between static LLMs and dynamic, action-oriented AI assistants.

It standardizes the way AIs talk to tools, keeps them safe, and makes adding new capabilities as easy as installing an app.

In Part 2, I’ll show you what I built with MCP during the NxtWave Mega Workshop: from idea to execution, the fun wins, the roadblocks, and the “aha!” moments that made me realize MCP isn’t just the future… it’s here now. 🚀

🔗 Connect with Me

📖 Blog by Naresh B. A.

👨‍💻 Aspiring Full Stack Developer | Passionate about Machine Learning and AI Innovation

🌐 Portfolio: [Naresh B A]

📫 Let’s connect on [LinkedIn] | GitHub: [Naresh B A]

💡 Thanks for reading! If you found this helpful, drop a like or share a comment. Feedback keeps the learning alive.

