Understanding AWS Agents: Strands, Bedrock Agents, and AgentCore with MCP



This content originally appeared on DEV Community and was authored by Om Shree

AWS offers multiple ways to build AI agents, but it can be confusing to know when to use Strands Agents, Bedrock Agents, or AgentCore. In this guide, we will break down these options in simple terms, show how MCP (Model Context Protocol) fits into each, and provide practical code examples for deploying an MCP server and client.

Why Different Agent Options Exist

There isn’t one single agent framework because developers have different needs. Some need full control and open-source flexibility, others want fully managed services with minimal setup, and some need enterprise-grade infrastructure like memory, observability, and secure execution.

Option 1: Strands Agents with MCP

Strands Agents SDK is open-source and integrates directly with MCP [1][2]. It is suited for developers who want to self-host and customize their agent logic.

Example MCP Server with Strands:

from mcp.server.fastmcp import FastMCP

# Stateless MCP server exposing one tool over streamable HTTP on port 8080
mcp = FastMCP("strands-mcp", stateless_http=True, port=8080)

@mcp.tool()
def greet(name: str) -> str:
    """Return a friendly greeting for the given name."""
    return f"Hello, {name}!"

if __name__ == "__main__":
    # streamable-http makes the server reachable at http://localhost:8080/mcp
    mcp.run(transport="streamable-http")

You can deploy this on AWS Lambda, Fargate, or EC2, depending on your scaling needs.
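For Fargate or EC2, one option is to serve the MCP server as a regular ASGI application with uvicorn. The sketch below assumes the SDK's streamable_http_app() helper and a hypothetical server.py module containing the FastMCP instance from the example above:

# container_entrypoint.py -- run the MCP server with uvicorn on Fargate/EC2
import uvicorn

from server import mcp  # hypothetical module holding the FastMCP instance shown above

if __name__ == "__main__":
    # streamable_http_app() exposes the MCP server as a standard ASGI application
    uvicorn.run(mcp.streamable_http_app(), host="0.0.0.0", port=8080)

For Lambda, you would typically put the same ASGI app behind an adapter such as Mangum instead of running uvicorn.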

Example MCP Client Query with a Strands Agent:

from mcp.client.streamable_http import streamablehttp_client
from strands import Agent
from strands.tools.mcp import MCPClient

# Connect to the MCP server over streamable HTTP and hand its tools to an agent
mcp_client = MCPClient(lambda: streamablehttp_client("http://localhost:8080/mcp"))
with mcp_client:
    tools = mcp_client.list_tools_sync()
    agent = Agent(tools=tools)
    agent("Use the greet tool to say hello to Alice.")

What’s happening behind the scenes:

When the Strands agent (acting as the MCP client) sends a request, the MCP server maps it to the matching tool function, executes it, and returns the result for the agent to fold into its response. The server remains stateless unless you explicitly design it to maintain state.
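If you do want state, a minimal sketch of a stateful variant (hypothetical, and only valid for a single long-lived server process rather than per-request deployments like Lambda) could keep it in process memory:

from mcp.server.fastmcp import FastMCP

# Hypothetical stateful variant: keeps a simple in-process greeting counter
mcp = FastMCP("strands-mcp-stateful")

greet_count = 0

@mcp.tool()
def greet(name: str) -> str:
    global greet_count
    greet_count += 1
    return f"Hello, {name}! You are greeting number {greet_count}."

if __name__ == "__main__":
    mcp.run(transport="streamable-http")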

Option 2: Bedrock Agents with MCP via AgentCore

Bedrock Agents are built on AgentCore [3][4]. They provide a fully managed setup where much of the infrastructure, such as session management, observability, and scaling, is handled by AWS.

To connect a Bedrock Agent to MCP, you deploy the MCP server on an endpoint the agent can reach through its tools feature (an action group, typically backed by a Lambda function that proxies requests to the server).

Example Bedrock Agent Setup:

  1. Create a Bedrock Agent via AWS Console.
  2. Register an API tool (action group) that points to your MCP server endpoint, as sketched after this list.
  3. Define an orchestration prompt that guides how the agent calls the tool.
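If you prefer scripting steps 1 and 2 over clicking through the console, here is a rough sketch using boto3's bedrock-agent client. The model ID is an example, and the role ARN, Lambda ARN, and S3 schema location are placeholders; the Lambda function is the proxy that actually reaches your MCP server:

import boto3

bedrock_agent = boto3.client("bedrock-agent")

# Step 1: create the agent (placeholder role ARN and example model ID)
agent = bedrock_agent.create_agent(
    agentName="mcp-demo-agent",
    foundationModel="anthropic.claude-3-5-sonnet-20240620-v1:0",
    agentResourceRoleArn="arn:aws:iam::123456789012:role/BedrockAgentRole",
    instruction="Use the registered calculation tool to answer arithmetic questions.",
)

# Step 2: register the tool as an action group backed by a proxy Lambda
bedrock_agent.create_agent_action_group(
    agentId=agent["agent"]["agentId"],
    agentVersion="DRAFT",
    actionGroupName="mcp-tools",
    actionGroupExecutor={"lambda": "arn:aws:lambda:us-east-1:123456789012:function:mcp-proxy"},
    apiSchema={"s3": {"s3BucketName": "my-schemas", "s3ObjectKey": "mcp-tools-openapi.json"}},
)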

While creating the Bedrock Agent itself is mostly configuration, whether in the AWS Console or via the API sketched above, here is an MCP server example you can host on Fargate for it to call:

from mcp.server.fastmcp import FastMCP

# Stateless MCP server for the Bedrock Agent's tool calls, served on port 8080
mcp = FastMCP("bedrock-mcp", stateless_http=True, port=8080)

@mcp.tool()
def calculate(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b

if __name__ == "__main__":
    mcp.run(transport="streamable-http")

What’s happening behind the scenes:

When the orchestration prompt leads the Bedrock Agent to a tool call, the agent invokes the registered tool, typically a Lambda action group that forwards the request to the MCP server endpoint. AgentCore manages the execution flow, memory, and observability, ensuring that the tool calls are tracked and monitored.
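Here is a rough sketch of such a proxy Lambda, assuming the MCP Python SDK is packaged with the function. The endpoint URL is a placeholder, and the event parsing and response envelope are simplified illustrations rather than the exact Bedrock action-group contract:

import asyncio
import json

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

MCP_URL = "http://mcp.internal:8080/mcp"  # placeholder for your Fargate service URL


async def call_mcp_tool(name: str, arguments: dict):
    # Open an MCP session against the server and invoke a single tool
    async with streamablehttp_client(MCP_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            return await session.call_tool(name, arguments)


def lambda_handler(event, context):
    # Bedrock passes tool inputs in the action-group event; parsing is simplified here
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}
    result = asyncio.run(call_mcp_tool("calculate", {"a": int(params["a"]), "b": int(params["b"])}))
    text = result.content[0].text if result.content else ""
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event.get("actionGroup"),
            "apiPath": event.get("apiPath"),
            "httpMethod": event.get("httpMethod"),
            "httpStatusCode": 200,
            "responseBody": {"application/json": {"body": json.dumps({"result": text})}},
        },
    }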


Option 3: AgentCore Directly with MCP

AgentCore is the underlying framework for Bedrock Agents but can also be used directly [3]. This is best when you want AWS’s infrastructure benefits but still want to write custom logic.

While AWS hasn’t open-sourced AgentCore, it provides SDKs and APIs to define agents and tools. You would typically implement the agent logic and have it invoke MCP-compliant servers as external tools.

Example Flow:

  1. Implement your MCP server.
  2. Use the AgentCore SDK to define a custom agent that calls your MCP server as a tool (sketched below).
  3. Deploy via AWS infrastructure for scaling.
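Here is a minimal sketch of steps 1 and 2, assuming the bedrock-agentcore Python SDK's BedrockAgentCoreApp runtime wrapper and a Strands agent for the custom logic; the MCP endpoint URL is a placeholder:

from bedrock_agentcore.runtime import BedrockAgentCoreApp
from mcp.client.streamable_http import streamablehttp_client
from strands import Agent
from strands.tools.mcp import MCPClient

app = BedrockAgentCoreApp()

# Placeholder URL for wherever you host the MCP server
mcp_client = MCPClient(lambda: streamablehttp_client("http://mcp.internal:8080/mcp"))


@app.entrypoint
def invoke(payload):
    # Delegate each AgentCore invocation to a Strands agent wired to MCP tools
    with mcp_client:
        agent = Agent(tools=mcp_client.list_tools_sync())
        result = agent(payload.get("prompt", ""))
    return {"result": str(result)}


if __name__ == "__main__":
    app.run()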

What’s happening behind the scenes:

When using AgentCore directly, the framework handles infrastructure-level needs such as security, observability, and session management. Your custom agent logic defines when and how to invoke the MCP server, ensuring that tool execution fits into enterprise workflows.


Comparison of Strands Agents, Bedrock Agents, and AgentCore

| Feature | Strands Agents | Bedrock Agents | AgentCore |
| --- | --- | --- | --- |
| Ownership | Fully self-hosted | Fully managed by AWS | Managed infrastructure, custom logic |
| Flexibility | High (open-source, customizable) | Moderate (configurable via console) | High (custom agents on AWS infra) |
| Integration with MCP | Direct integration with SDK | Via API tool registered in Bedrock Agent | Custom integration via SDK |
| Scalability | Depends on deployment (Lambda, Fargate, EC2) | Auto-scaled by AWS | Auto-scaled by AWS |
| Observability & Security | Manual setup required | Provided out-of-the-box | Built-in via AWS |
| Use Case Fit | Prototyping, custom workflows | Quick enterprise deployment | Enterprise with strict requirements |
| State Management | Stateless unless custom built | Managed sessions and memory | Managed sessions and memory |
| Code Example Provided | Yes | MCP server only (agent config via console) | Design flow described |

Real-World Use Cases

  • Strands Agents: Prototyping a chatbot with custom tools you host.
  • Bedrock Agents: An enterprise support bot that uses secure, managed services.
  • AgentCore: A financial data analysis agent that requires audit logs and session memory.

Conclusion

All three options can work with MCP, but the choice depends on your control needs, scalability, and operational preferences. Strands Agents offer the most control, Bedrock Agents offer ease of use, and AgentCore provides enterprise-level features.

References

  1. Introducing Strands Agents, an Open Source AI Agents SDK
  2. What is MCP? AWS Blog
  3. AWS Bedrock AgentCore Developer Guide
  4. AWS Bedrock Agents Overview

