Now that you understand what Bedrock AgentCore is and how it fits into the AWS ecosystem, it's time to get hands-on with the development process. You've already mastered building intelligent agents with Strands and integrating them with Bedrock models, knowledge bases, and external tools. In this lesson, we'll dive into the practical aspects of developing AgentCore applications locally using the Bedrock AgentCore SDK.
We'll transform a standard Strands agent into a cloud-ready AgentCore application using the `BedrockAgentCoreApp` runtime wrapper from the SDK. This local development approach allows us to test our agents thoroughly, debug any issues, and ensure everything works as expected before moving to production.
The Bedrock AgentCore SDK is a Python library provided by AWS that helps you deploy AI agents to Amazon Bedrock AgentCore's managed infrastructure. Think of it as a bridge that transforms your existing agents—whether built with Strands, LangGraph, or any other framework—into cloud-ready applications without requiring you to manage servers or infrastructure.
The SDK's main component is the `BedrockAgentCoreApp` class, which acts as a wrapper around your agent logic. This wrapper handles all the HTTP server setup, request routing, and cloud deployment mechanics automatically, so you can focus on your agent's intelligence rather than infrastructure concerns.
You can find the Bedrock AgentCore SDK in the official AWS GitHub repository, along with comprehensive documentation and examples to help you get started with your agent deployments.
The AgentCore Runtime is AWS's managed serverless environment specifically designed for running AI agents in production. It provides automatic scaling, session isolation, and seamless integration with AWS services, eliminating the need to manage servers or infrastructure yourself.
The key innovation here is the runtime wrapper pattern: instead of running your Strands agent directly, you wrap it with `BedrockAgentCoreApp` from the SDK. This wrapper acts as a translation layer that:
- Converts your function-based agent into an HTTP web service
- Handles incoming requests from the cloud environment
- Routes those requests through your agent logic
- Formats responses in the structure expected by AgentCore
- Manages all the server infrastructure automatically
This approach preserves your familiar Strands agent development experience while adding enterprise-grade capabilities like automatic scaling, secure session management, and cloud-native deployment. Your agent logic remains unchanged—you simply gain the ability to serve it at scale in AWS's managed environment.
Let's see how this works in practice by setting up our Strands agent and wrapping it with the AgentCore runtime.
Let's start by setting up the Strands agent we'll be wrapping for AgentCore deployment. This should be familiar from our previous work with Strands agents:
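Here's a minimal sketch of what this setup might look like. The model ID, environment variable names, and system prompt below are illustrative choices rather than prescribed values, and the exact guardrail parameter names may vary slightly with your Strands version:

```python
import os

from strands import Agent
from strands.models import BedrockModel
from strands_tools import calculator, retrieve

# Configure the Bedrock model with guardrail integration. The guardrail ID and
# version come from environment variables (names here are illustrative), so
# development and production can use different guardrail configurations.
model = BedrockModel(
    model_id="us.anthropic.claude-3-5-sonnet-20241022-v2:0",  # illustrative model ID
    guardrail_id=os.environ.get("GUARDRAIL_ID"),
    guardrail_version=os.environ.get("GUARDRAIL_VERSION"),
)

# The agent combines computational (calculator) and information retrieval
# (retrieve) capabilities. The retrieve tool queries a Bedrock knowledge base,
# configured separately (for example via a knowledge base ID environment variable).
agent = Agent(
    model=model,
    tools=[calculator, retrieve],
    system_prompt="You are a helpful assistant that answers AWS technical questions.",
)
```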
We're configuring our Bedrock model with guardrail integration using environment variables, which allows us to use different guardrail configurations across development and production environments. The agent includes both calculator and retrieve tools, giving it computational and information retrieval capabilities for handling AWS technical questions.
Now we'll wrap our Strands agent with the AgentCore runtime, transforming it into a cloud-ready application:
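A sketch of the wrapping code, following the SDK's documented pattern, is shown below. The function name `invoke` and the way the response text is extracted with `str(result)` are illustrative choices, and the import path may differ slightly between SDK versions:

```python
from bedrock_agentcore.runtime import BedrockAgentCoreApp

# Create the runtime wrapper; it owns the HTTP server and request routing.
app = BedrockAgentCoreApp()

@app.entrypoint
def invoke(payload):
    """Process an incoming request by running it through the Strands agent."""
    user_prompt = payload.get("prompt", "")  # extract the user's input
    result = agent(user_prompt)              # `agent` is the Strands agent defined above
    return {"result": str(result)}           # return a JSON-serializable dictionary
```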
The `BedrockAgentCoreApp` acts as a bridge between the AWS cloud environment and our Strands agent. It handles HTTP server creation, request parsing, and response formatting automatically, letting our agent focus on processing user requests intelligently.
The entrypoint function is the heart of your AgentCore application—it's the designated function that processes all incoming requests from users and returns your agent's responses. When someone interacts with your deployed agent, their request flows through this function.
The `@app.entrypoint` decorator is what makes this function special: it tells the AgentCore runtime that this is the main handler for incoming requests. When a user sends a message to your agent, the runtime automatically routes it to this function.
The function follows a simple three-step pattern: extract the user's input from the incoming payload, process it through your Strands agent using the familiar `agent(user_prompt)` syntax, and return the response in a JSON-serializable dictionary format. Note that the payload structure is entirely up to you; while we're using `"prompt"` as the key in this example, you could structure your payload with any keys that make sense for your application, such as `{"message": "...", "context": "...", "user_id": "..."}`. Similarly, your return dictionary can contain any structure you need, not just a `"result"` key.
This standardized interface keeps your agent logic clean while ensuring seamless integration with the AgentCore runtime environment.
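For example, a sketch of an entrypoint that accepts a richer payload shape might look like this; the `message`, `context`, `user_id`, and `answer` keys are purely hypothetical, and a real file would define only one entrypoint:

```python
@app.entrypoint
def invoke(payload):
    # Hypothetical payload keys; pick whatever structure fits your application.
    message = payload.get("message", "")
    context = payload.get("context", "")
    user_id = payload.get("user_id", "anonymous")

    result = agent(f"Context: {context}\n\nQuestion: {message}")

    # The return dictionary just needs to be JSON-serializable.
    return {"answer": str(result), "user_id": user_id}
```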
For local development and testing, we need to start an HTTP server. The `BedrockAgentCoreApp` provides a simple way to run our agent locally before cloud deployment.
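In the agent file, this typically amounts to just a couple of lines at the bottom of the module:

```python
if __name__ == "__main__":
    # Start the local development server (Uvicorn, listening on port 8080).
    app.run()
```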
The `app.run()` method starts a local development server on port 8080 using Uvicorn, a high-performance ASGI server. Under the hood, `BedrockAgentCoreApp` uses Uvicorn to transform your agent into an HTTP web service, allowing you to test the same request/response patterns that will be used in the cloud environment. The conditional `if __name__ == "__main__"` ensures this server only starts when we run the file directly, not when it's imported as a module.
To start your agent locally, simply run:
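Assuming your code lives in a file named `agent.py` (a hypothetical name; substitute your actual filename):

```bash
python agent.py  # replace agent.py with your actual filename
```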
You should see output similar to this indicating the Uvicorn server has started successfully:
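Something along these lines, though the process ID and exact log lines will vary:

```
INFO:     Started server process [12345]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
```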
This output confirms that your AgentCore application is running locally and ready to receive HTTP requests on port 8080.
Once your server is running, you can test it using HTTP requests. Here's how to send a test prompt to your local agent using `curl`:
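For example (the exact prompt text is up to you):

```bash
curl -X POST http://localhost:8080/invocations \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What is AWS Bedrock?"}'
```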
This `curl` command sends a POST request to your local server's `/invocations` endpoint with a JSON payload containing our prompt. The Uvicorn server processes this through our entrypoint function, runs it through our Strands agent, and returns the formatted response. You should see a JSON response containing the agent's answer about AWS Bedrock:
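An abridged, illustrative response might look like the following; the actual wording depends on your model, guardrails, and knowledge base content:

```json
{"result": "Amazon Bedrock is a fully managed AWS service that provides access to foundation models from leading AI companies through a single API, along with capabilities such as knowledge bases and guardrails for building generative AI applications..."}
```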
Notice how your Strands agent works exactly as expected: it used the `retrieve` tool to gather information about AWS Bedrock and provided a comprehensive, well-structured response. The AgentCore runtime wrapper seamlessly handles the HTTP layer while preserving all of your agent's intelligent capabilities.
Congratulations! You've successfully transformed a Strands agent into a cloud-ready AgentCore application. We've covered the essential components: model configuration with guardrails, agent setup with tools, the AgentCore runtime wrapper, entrypoint function implementation, and local testing capabilities.
This local development approach gives us confidence in our agent's behavior before cloud deployment. We can test different prompts, verify tool interactions, and ensure our response formatting meets AgentCore requirements. In the upcoming practice section, you'll get hands-on experience implementing these concepts yourself, building your own local AgentCore applications and mastering the development workflow.
