Welcome back to Putting Bedrock Models to Action with Strands Agents! In our previous lessons, you've successfully built intelligent agents and enhanced them with calculation and custom tools. Now, in this third unit, we're taking a significant leap forward by connecting your agents to Amazon Bedrock Knowledge Bases, unlocking the power of enterprise-scale information retrieval and context-aware responses.
This lesson transforms your AWS Technical Assistant from a tool-equipped agent into a comprehensive knowledge system capable of searching through vast document repositories and providing detailed, citation-backed responses. We'll explore the `retrieve` tool from `strands_tools`, master advanced retrieval parameters, and implement sophisticated search configurations that allow precise targeting of specific information within your knowledge bases.
By the end of this unit, your agent will seamlessly combine its existing tool capabilities with powerful document retrieval, enabling it to answer complex questions about your organization's documentation, policies, and technical specifications. This integration represents the foundation of modern Retrieval-Augmented Generation (RAG) systems, where AI reasoning meets curated knowledge repositories.
Knowledge Base integration revolutionizes agent capabilities by providing access to curated document collections that extend far beyond the training data of foundation models. When agents connect to knowledge bases, they gain the ability to retrieve relevant information from your organization's documentation, technical manuals, policy documents, and other structured knowledge sources in real time during conversations.
The `retrieve` tool operates through semantic similarity search, where your queries are converted into vector representations and matched against pre-indexed document embeddings stored in your knowledge base. This approach enables sophisticated information discovery that goes beyond simple keyword matching, finding conceptually relevant content even when exact terms don't appear in the source documents.
Note that we will be reusing the same Bedrock Knowledge Base we developed in the previous course.
Let's begin by configuring our agent with both familiar tools and the new knowledge base retrieval capability, building upon the foundational setup patterns you've mastered in previous lessons.
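Below is a minimal configuration sketch. It assumes the model and guardrail setup patterns from earlier units; the model ID, guardrail identifiers, and default region are placeholders to replace with your own values.

```python
import os

from strands import Agent
from strands.models import BedrockModel
from strands_tools import calculator, retrieve

# Knowledge base settings read from the environment (placeholder defaults shown)
KNOWLEDGE_BASE_ID = os.environ.get("KNOWLEDGE_BASE_ID", "your-knowledge-base-id")
REGION = os.environ.get("REGION", "us-east-1")

# Same Bedrock model and guardrail pattern as in previous units (IDs are placeholders)
model = BedrockModel(
    model_id="anthropic.claude-3-5-sonnet-20240620-v1:0",
    region_name=REGION,
    guardrail_id="your-guardrail-id",
    guardrail_version="1",
)
```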
As you recall from previous units, we're maintaining the same model configuration and guardrail setup patterns that ensure secure, compliant AI operations. The new additions here are the `retrieve` import from `strands_tools` and the environment variables `KNOWLEDGE_BASE_ID` and `REGION`, which specify which knowledge base your agent should connect to and which AWS region it's deployed in.
With our configuration variables established, we can now create an agent that combines mathematical capabilities with powerful document retrieval functionality.
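A sketch of the agent construction follows; the system prompt wording is illustrative rather than taken verbatim from the course materials.

```python
# Build the AWS Technical Assistant with both computation and retrieval tools
agent = Agent(
    model=model,
    tools=[calculator, retrieve],
    system_prompt=(
        "You are an AWS Technical Assistant. Use the retrieve tool to search "
        "the knowledge base and the calculator tool for numeric questions."
    ),
)
```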
This agent configuration now includes the `retrieve` tool alongside the familiar `calculator`, creating a hybrid system capable of both computational tasks and knowledge base searches. The agent automatically gains awareness of the retrieval capabilities and will intelligently decide when to search your knowledge base versus when to perform calculations, depending on the context of user queries.
The seamless integration of multiple tools demonstrates the power of the Strands Agents framework, where each tool becomes part of the agent's reasoning toolkit without requiring complex orchestration code on your part.
Now let's explore the fundamental retrieval operation using your knowledge base to search for information about a specific topic.
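A sketch of the direct call is shown below, assuming the tool's query parameter is named `text` as in the `strands_tools` retrieve tool:

```python
# Direct retrieval with default parameters; raw results come back to you
# rather than being woven into a conversational answer
results = agent.tool.retrieve(text="Nimbus Assist")
print(results)
```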
This basic retrieval call searches for documents related to "Nimbus Assist" using default parameters. The `agent.tool.retrieve()` method provides direct access to the retrieval functionality, allowing you to perform targeted searches and examine the raw results before they're processed into conversational responses.
The output reveals the successful retrieval of relevant documentation, where the first document has a similarity score of 0.7522, indicating high relevance to the query. The default configuration returned 10 results with scores above 0.4, providing comprehensive coverage of available information while maintaining reasonable quality thresholds. Note that the full output is truncated.
Let's enhance our retrieval capabilities by specifying custom parameters that provide more precise control over search behavior and result quality.
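The sketch below passes the tuned parameters explicitly; it reuses the `KNOWLEDGE_BASE_ID` and `REGION` values defined in our configuration.

```python
# Tighter retrieval: top 3 matches only, minimum relevance score of 0.7,
# explicitly targeting our knowledge base and region
results = agent.tool.retrieve(
    text="Nimbus Assist",
    numberOfResults=3,
    score=0.7,
    knowledgeBaseId=KNOWLEDGE_BASE_ID,
    region=REGION,
)
print(results)
```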
This advanced configuration demonstrates precise control over retrieval behavior: `numberOfResults=3` limits the response to the three most relevant documents, while `score=0.7` establishes a higher relevance threshold that filters out lower-quality matches. The explicit `knowledgeBaseId` and `region` parameters ensure your queries target the correct knowledge base deployment for optimal performance and accuracy.
Notice that with the higher score threshold, only two results met the 0.7 relevance criterion, demonstrating how parameter tuning directly impacts both result quality and quantity. This precision control lets you balance comprehensive coverage against high-confidence results, depending on your application's requirements.
Finally, let's witness the complete integration by allowing your agent to autonomously use its retrieval capabilities within a natural conversation flow.
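A sketch of the conversational call follows; the exact question wording is illustrative.

```python
# Ask a natural-language question and let the agent decide on its own to call retrieve
response = agent("What is Nimbus Assist, and what capabilities does it provide?")
print(response)
```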
This conversational query triggers the agent's autonomous decision-making process, where it determines that knowledge base retrieval is necessary to provide an accurate and comprehensive response about Nimbus Assist. The agent seamlessly orchestrates the retrieval operation behind the scenes while maintaining natural conversation flow.
You've successfully transformed your agent into a sophisticated knowledge-retrieval system capable of accessing and reasoning over enterprise documentation repositories. The integration of the `retrieve` tool with advanced parameter configuration provides the foundation for building production-ready RAG applications that can serve as intelligent knowledge assistants, technical documentation systems, and customer support automation platforms.
This mastery of knowledge base integration represents a crucial milestone in your journey toward building enterprise-scale AI systems that combine conversational intelligence with organizational knowledge. The upcoming practice exercises will challenge you to further explore retrieval optimization and discover the full potential of knowledge-enhanced agent systems.
