Welcome back! So far, you have learned how to prepare documents, chunk them, create embeddings, and use those embeddings to find relevant information for a given email. In this lesson, you will see how to use that retrieved context to make your smart email assistant even more helpful and accurate.
Adding context to your agent's prompt allows it to generate responses that are more relevant and personalized. This is a key step in building a truly smart assistant. By the end of this lesson, you will know how to pass the retrieved context to your agent and update your workflow to use this information when generating summaries and replies.
Let's quickly remind ourselves how we get relevant context for an email. In the previous lesson, you learned to:
- Embed the email using a model like OpenAI's embedding model.
- Search a vector database (like LibSQL) for the most similar document chunks.
- Collect the top results as your context.
This process is called Retrieval-Augmented Generation (RAG). It helps your agent "remember" important information from your documents when responding to emails.
Here's how the entire RAG pipeline works from start to finish:
- Incoming Email — The user's email arrives
- Embed Email — The email text is converted into a vector using an embedding model
- Vector DB Search — The email vector is compared against all document chunks in the database to find the most similar ones
- Top Chunks Retrieved — The most relevant chunks (usually 3-5) are selected based on similarity scores
- Agent Prompt — The original email and the retrieved chunks are combined into a single prompt
- Agent Response — The agent generates a contextual reply based on both the email and the retrieved information
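For reference, here's a minimal sketch of what that retrieval looks like when done by hand, as in the previous lesson. It assumes you already have a configured LibSQL vector store (the `vectorStore` variable and the `retrieveContext` helper below are illustrative names), and it uses the AI SDK's `embed` helper with OpenAI's `text-embedding-3-small` model:

```typescript
import { openai } from "@ai-sdk/openai";
import { embed } from "ai";

// Sketch only: `vectorStore` stands in for the LibSQL vector store you set up
// earlier, and `emailText` is the body of the incoming email.
async function retrieveContext(emailText: string, vectorStore: any) {
  // 1. Embed the incoming email
  const { embedding } = await embed({
    model: openai.embedding("text-embedding-3-small"),
    value: emailText,
  });

  // 2. Search the vector store for the most similar document chunks
  const results = await vectorStore.query({
    indexName: "embedding",
    queryVector: embedding,
    topK: 5, // keep the top 3-5 chunks
  });

  // 3. Join the chunk text to use as context in the agent prompt
  return results.map((r: any) => r.metadata?.text).join("\n\n");
}
```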
Instead of manually retrieving context, we can define a vector query tool that the agent can use to fetch relevant information from your knowledge base. Here's how you define the tool:
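A sketch of the tool definition, assuming Mastra's `createVectorQueryTool` helper from `@mastra/rag` and the store and index names used in earlier lessons:

```typescript
import { openai } from "@ai-sdk/openai";
import { createVectorQueryTool } from "@mastra/rag";

// Vector query tool the agent can call to search the knowledge base
export const vectorQueryTool = createVectorQueryTool({
  vectorStoreName: "storage",                        // which vector database to search
  indexName: "embedding",                            // which index inside that store
  model: openai.embedding("text-embedding-3-small"), // embedding model for the query
});
```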
vectorStoreName: "storage"— This tells the tool which vector database to search in. Think of it as the "folder" where all your document chunks are stored.indexName: "embedding"— This specifies which index inside your vector store to use. An index is like a catalog that helps the tool quickly find similar chunks.model: openai.embedding("text-embedding-3-small")— This is the embedding model used to turn your queries into vectors. The model ensures that both your queries and your documents are represented in the same "language" for similarity search.
By setting up the tool this way, you give your agent a powerful, reusable way to search your knowledge base for relevant information—no manual searching required!
Now, let's add the tool to your agent and update the agent's instructions to explicitly call the tool when context is needed.
To use our new tool, we add it to the tools property when creating our Agent. This makes the vectorQueryTool available for the agent to call whenever it needs to retrieve relevant information from the knowledge base.
We also update the agent's instructions to explicitly tell it to use the vectorQueryTool before answering any questions. The instructions make it clear that the agent should always search for context using the tool and base its responses only on the information retrieved. This ensures that the agent's replies are grounded in the most relevant and up-to-date knowledge available.
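Here's a sketch of what that looks like, assuming Mastra's `Agent` class and the `vectorQueryTool` defined above (the import path, model choice, and exact instructions wording are illustrative):

```typescript
import { openai } from "@ai-sdk/openai";
import { Agent } from "@mastra/core/agent";
import { vectorQueryTool } from "./tools"; // the tool defined above

export const emailAssistant = new Agent({
  name: "Smart Email Assistant",
  instructions: `
    You are a helpful email assistant that summarizes emails and drafts replies.
    Before answering, ALWAYS use the vectorQueryTool to search the knowledge base
    for context relevant to the email.
    Base your summaries and replies only on the information the tool returns.
  `,
  model: openai("gpt-4o-mini"),  // any chat model works here
  tools: { vectorQueryTool },    // makes the vector search tool available to the agent
});
```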
Let's look at a basic sample to see how context might change the agent's reply.
Without context:

Prompt:
Email: "Can you send me the latest project update?"

Agent Reply:
"Sure, I will send you the latest update soon."
With context (where the agent uses the tool to retrieve a recent project summary):

Prompt:
Email: "Can you send me the latest project update?"

Agent Reply:
"The latest update: The project reached phase 2 last week. The team completed initial testing and is now working on feature X. Let me know if you need more details."
As you can see, the reply with context is much more informative and specific.
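To try this for yourself, you can send the email through the agent and let it call the tool on its own. A minimal sketch, assuming the `emailAssistant` agent defined above:

```typescript
// The agent calls vectorQueryTool behind the scenes before drafting its reply
const result = await emailAssistant.generate(
  `Please draft a reply to this email:\n\n"Can you send me the latest project update?"`
);

console.log(result.text); // the contextual reply, grounded in the retrieved chunks
```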
In this lesson, you learned how to let your agent use tools to automatically retrieve relevant context and generate more accurate and personalized responses. By including a vector search tool, your smart email assistant can provide much better answers with less manual work.
Next, you will get hands-on practice with these steps. You'll try out passing context to your agent and see the improvements for yourself. Great job making it this far — let's keep going!
