Introduction

Welcome back to Deploying Agents to AWS with Bedrock AgentCore! We're now at the third lesson of our journey, and this marks a pivotal moment in our agent development workflow. In our previous lesson, we successfully wrapped our Strands agent with the AgentCore runtime and tested it locally, proving that our agent works perfectly in a controlled environment.

However, local development is just the first step. The real power of AgentCore comes when we deploy our agents to AWS, making them available to users anywhere in the world with enterprise-grade scalability and reliability. In this lesson, we'll master the Bedrock AgentCore Starter Toolkit, AWS's sophisticated deployment solution that transforms our local applications into production-ready cloud services. We'll explore project configuration, cloud deployment, status monitoring, and advanced session management for stateful conversations using the powerful agentcore CLI that this toolkit provides.

Bedrock AgentCore Starter Toolkit

The Bedrock AgentCore Starter Toolkit is AWS's comprehensive solution for deploying and managing AI agents in the cloud. This powerful toolkit provides the agentcore command-line interface that streamlines the entire agent deployment workflow, from local development to production-ready AWS services. You can install the bedrock-agentcore-starter-toolkit package using pip or uv, and find complete installation instructions and documentation at the Bedrock AgentCore Starter Toolkit repository.
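For example, either of the following installs the toolkit (the package name is the same for both tools):

```bash
pip install bedrock-agentcore-starter-toolkit
# or, if your project is managed with uv:
uv add bedrock-agentcore-starter-toolkit
```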

Once installed, the agentcore command becomes available in your terminal, serving as your deployment orchestrator — it takes the local AgentCore application we built in our previous lesson and handles all the intricate AWS infrastructure provisioning automatically.

The CLI operates on a declarative deployment model, meaning you describe what you want (your agent configuration and requirements), and the CLI figures out how to make it happen in the AWS cloud. This approach abstracts away the complexity of container orchestration, serverless functions, API gateways, and IAM permissions, allowing you to focus on your agent's intelligence rather than infrastructure management.

The CLI provides five core capabilities that cover the entire deployment lifecycle: project configuration for defining your agent's structure, cloud deployment with automatic infrastructure provisioning, runtime monitoring for health checks and troubleshooting, remote invocation for testing deployed agents, and sophisticated session management for maintaining conversational state across multiple interactions.
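In practice, these capabilities map onto a small set of agentcore subcommands, each of which we will use later in this lesson:

```bash
agentcore configure ...   # define your agent's project structure and deployment settings
agentcore launch ...      # build the container and deploy it to AWS
agentcore status          # check the agent's configuration and runtime health
agentcore invoke ...      # send test requests, optionally within a specific session
```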

Configuring Your Project Structure

Before deploying to AWS, we need to establish our project configuration using the configure command. This command sets up the essential configuration files and deployment parameters that define how your agent will operate in the AWS environment:
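A minimal invocation looks like this, assuming the entrypoint file from the previous lesson is named my_agent.py (substitute your own file and agent names):

```bash
agentcore configure --entrypoint my_agent.py --name my_agent
```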

The configure command performs several critical setup tasks automatically. It generates a Dockerfile and .dockerignore file for containerizing your agent, ensuring your Python application can run consistently across different environments. Most importantly, it creates a .bedrock_agentcore.yaml configuration file that stores all your agent's runtime settings and deployment parameters.
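After configuration, your project directory should contain something like the following; the entrypoint and dependency file names here are just the example values carried over from the command above:

```bash
ls -a
# .bedrock_agentcore.yaml  .dockerignore  Dockerfile  my_agent.py  requirements.txt
```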

The --entrypoint parameter specifies the Python file containing your agent's main logic — this should be the file with your @app.entrypoint decorated function from our previous lesson. The --name parameter assigns a unique identifier to your agent within your AWS account, which will be used for resource naming and management across AWS services.

Interactive Configuration Process

The configuration process is interactive and user-friendly, guiding you through each setup step with clear prompts and sensible defaults. You can simply press Enter to accept the recommended default values, making the process quick and straightforward.

During this interactive process, the CLI will request and configure several key components:

  • Execution Role: Sets up the IAM role that your agent will assume during execution in AWS (auto-created by default)
  • ECR Repository: Specifies the Amazon Elastic Container Registry repository where your agent's Docker image will be stored (auto-created by default)
  • Dependency Management: Automatically detects and configures either requirements.txt or pyproject.toml files to ensure all Python packages are included
  • Authorization Configuration: Configures security settings, defaulting to IAM authorization for simplicity
  • AWS Region: Establishes the target AWS region for deployment

The process concludes with a comprehensive summary showing all your configuration choices and the location of the generated configuration files, eliminating the need to manually specify these parameters in subsequent CLI commands.

Generated Configuration File

After completing the interactive configuration process, AgentCore generates a .bedrock_agentcore.yaml file that serves as the persistent configuration blueprint for your agent. This YAML file contains all the deployment settings and parameters that were configured during the setup process:
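A trimmed sketch of what this file can look like is shown below; the exact key names and nesting can vary between toolkit versions, and the agent name, entrypoint, and region are the illustrative values used earlier:

```yaml
default_agent: my_agent
agents:
  my_agent:
    name: my_agent
    entrypoint: my_agent.py
    platform: linux/arm64
    aws:
      execution_role: null        # auto-created during deployment
      execution_role_auto_create: true
      ecr_repository: null        # auto-created during deployment
      ecr_auto_create: true
      region: us-east-1           # whichever region you chose during configure
      network_configuration:
        network_mode: PUBLIC
      protocol_configuration:
        server_protocol: HTTP
      observability:
        enabled: true
```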

This configuration file captures all the choices made during the interactive setup, including the agent name, entrypoint file, target platform architecture, and AWS-specific settings. The null values for resources like execution_role and ecr_repository indicate that these will be auto-created during deployment, while the corresponding auto_create: true flags confirm this behavior.

The configuration also establishes important defaults such as the PUBLIC network mode for internet accessibility, HTTP protocol for web-based interactions, and enabled observability for monitoring and debugging. This file becomes the single source of truth for your agent's deployment configuration and can be version-controlled alongside your code for consistent deployments across different environments. All subsequent agentcore commands will reference this configuration file to understand your agent's settings and deployment parameters.

Launching Your Agent to AWS

With our project configured, we can deploy our agent to the AWS cloud using the launch command. The agentcore CLI automatically reads the .bedrock_agentcore.yaml configuration file to understand your agent's settings and deployment parameters, then initiates a comprehensive deployment pipeline that provides detailed progress feedback throughout the entire process:
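For example, using placeholder values for the two resource identifiers our agent expects (explained below):

```bash
agentcore launch \
  --env GUARDRAIL_ID=<your-guardrail-id> \
  --env KNOWLEDGE_BASE_ID=<your-knowledge-base-id>
```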

The --env flags are essential for passing the specific resource identifiers that our agent code requires to connect to the correct Bedrock services. Our agent needs the GUARDRAIL_ID to enforce the safety policies we configured and the KNOWLEDGE_BASE_ID to retrieve information from our document collection. These environment variables ensure that when our agent starts up in the AWS environment, it automatically connects to the appropriate guardrail and knowledge base resources. Importantly, we don't need to pass any AWS credentials as environment variables — AgentCore automatically handles authentication through the IAM execution role it creates, providing secure access to Bedrock services without exposing sensitive credentials in our deployment configuration.

AgentCore Deployment Process

The deployment process begins with resource provisioning, automatically creating or reusing your ECR repository and execution roles based on the settings in your .bedrock_agentcore.yaml file. AgentCore then uploads your source code to S3 and initiates a CodeBuild project that compiles your agent into an ARM64 container image for the AgentCore runtime. You'll see real-time progress updates as the build progresses through each phase: queuing, provisioning, source download, pre-build setup, main build execution, post-build cleanup, and completion.

The deployment output shows the recommended CodeBuild mode in action, which eliminates the need for local Docker installations while ensuring production-ready ARM64 containers. The process automatically handles the complex orchestration of AWS services, from ECR repository management to container image building and deployment. Notice how the CLI provides multiple deployment options to accommodate different development workflows, from local testing to cloud-native builds.

Deployment Success and Next Steps

The successful deployment output provides everything you need to interact with your newly deployed agent, including the unique Agent ARN for identification, the ECR URI where your container image is stored, and the CloudWatch log group locations for monitoring and debugging. The deployment typically completes in under two minutes, after which your agent is immediately available for invocation through the AgentCore CLI or direct API calls.

Monitoring Deployment Status

After deployment completes, it's crucial to verify that your agent is running properly and ready to handle requests. The status command reads your .bedrock_agentcore.yaml configuration file to identify which agent to check, then provides comprehensive visibility into both your agent's configuration and its runtime health:
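In its simplest form, the command runs without arguments, since everything it needs comes from the configuration file:

```bash
agentcore status
```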

The status output provides two essential views of your deployed agent: the Agent Status section, which displays your agent's core configuration and metadata, and the endpoint status, which reports whether the runtime is ready to serve requests.

The Agent Status section confirms that your agent has been successfully registered in AWS with all the correct configuration parameters. The timestamps show when your agent was initially created and last updated, while the configuration details verify that the execution role and ECR repository were properly provisioned during deployment.

Invoking Your Cloud-Deployed Agent

Once your agent is successfully deployed and showing a READY status, you can interact with it remotely using the invoke command. The CLI references your .bedrock_agentcore.yaml configuration file to determine which agent to invoke and retrieve any existing session information. This allows you to test your agent's functionality and verify that all components are working correctly in the cloud environment:
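For example, assuming your entrypoint expects a prompt field in the JSON payload (adjust the payload to match whatever your entrypoint function actually reads), a first test request might look like this:

```bash
agentcore invoke '{"prompt": "What is Amazon Bedrock?"}'
```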

When you invoke your agent without specifying a session ID, AgentCore will check your .bedrock_agentcore.yaml configuration file for an existing default session. If one exists, it will use that session to maintain conversation continuity. If no default session is found, AgentCore will automatically create a new session and register it in your configuration file for future use.

The CLI provides detailed output showing both the request payload and the complete response from your deployed agent.

The response demonstrates successful deployment with an HTTP 200 status code and shows that your agent automatically used its retrieve tool to access the knowledge base. The session ID indicates that AgentCore created a conversational session for maintaining context, while the response metadata provides debugging information like trace IDs for AWS X-Ray monitoring.

Understanding Default Session Management

AgentCore automatically manages conversational sessions to enable stateful interactions by storing session information in your .bedrock_agentcore.yaml configuration file. When you invoke your agent again without specifying a session, it will read the configuration file to retrieve the same default session that was registered during the previous invocation:
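For example, a follow-up request like the one below (the exact prompt is only illustrative) needs no extra flags to land in the same conversation:

```bash
agentcore invoke '{"prompt": "Can you go into more detail about how it works?"}'
```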

Notice how this follow-up request reuses the exact same session (60cbb547-8046-46d4-809d-a65b458327b1) as our previous invocation, demonstrating persistent session management. Because the conversation continues within one session, your agent maintains context and memory: it remembers the earlier discussion about Amazon Bedrock and can build on that thread with more detailed information. This creates a seamless user experience where conversations feel natural and contextual, rather than requiring users to repeat information or lose conversational flow between interactions.

Advanced Session Control

For sophisticated applications requiring multiple concurrent conversations, you can create and manage explicit sessions using custom session IDs. The agentcore CLI will still reference your .bedrock_agentcore.yaml configuration file to identify the target agent, but will use the specified session ID instead of the default session:
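A sketch of what this looks like, using an illustrative 33-character session ID; check agentcore invoke --help if the session flag differs in your toolkit version:

```bash
agentcore invoke '{"prompt": "What is Amazon Bedrock?"}' \
  --session-id customer-12345-conversation-00001
```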

Custom session IDs must be at least 33 characters long and enable powerful use cases, such as multi-user applications where each customer maintains their own isolated conversation thread. This approach allows you to organize interactions by user identity, conversation topic, or any other logical grouping that makes sense for your application.

The explicit session management capability is particularly valuable for customer service scenarios, collaborative environments, or applications where multiple conversation contexts need to coexist without interfering with each other.

Conclusion

Outstanding progress! We've successfully mastered the complete deployment workflow for AgentCore applications, transforming our locally developed Strands agent into a production-ready AWS service. Through the AgentCore CLI and its central configuration file, we've learned how to configure projects, deploy to the cloud, monitor runtime health, and manage sophisticated conversational sessions that maintain state across multiple interactions.

This achievement represents a significant milestone in our agent development journey. Our intelligent agents are now running in AWS's enterprise-grade infrastructure, automatically scaling to handle real user traffic while preserving all the capabilities we built with Strands, Bedrock models, and knowledge bases. In the upcoming practice section, you'll apply these deployment skills hands-on, experiencing the seamless transition from local development to cloud production and mastering the tools that make AgentCore a powerful platform for enterprise AI applications.
