Introduction

You've built a complete automation foundation: SessionStart hooks for initial context, PreToolUse hooks for safety, and PostToolUse hooks for automatic actions. Now we're tackling the fourth pillar: hooks that enhance your prompts before they reach Claude.

UserPromptSubmit hooks run after you write a prompt but before Claude sees it. These hooks automatically inject relevant context, enforce prompt standards, and block inappropriate requests. We'll build context files storing reusable guidelines, create hooks that intelligently inject them based on prompt content, and add safety checks that prevent sensitive operations.

By the end, your prompts will automatically include exactly the right information, making Claude's responses more accurate and consistent.

Understanding UserPromptSubmit Hooks

Imagine having a helpful assistant who reads your emails before you hit send. They say:

  • "Hey, you mentioned the budget - should I attach the spreadsheet?"
  • "You're writing to a client - want me to include our standard terms?"
  • "You referenced the contract - let me pull up those details for you."

UserPromptSubmit hooks work exactly the same way: they read what you're asking Claude and automatically attach relevant project guidelines, standards, or documentation before Claude sees your request.

The magic moment: You type a quick prompt like "Create a TypeScript component." The hook reads this, thinks "they mentioned TypeScript - let me inject our TypeScript coding standards," and Claude receives both your prompt AND your project's TypeScript guidelines automatically.

You write short, natural prompts. Claude receives them fully enriched with exactly the context needed. It's like having a research assistant who pre-gathers all relevant materials before you start working.

Creating Reusable Context Libraries

Before we can inject context, we need somewhere to store it. Think of .claude/contexts/ as your project's reference library - a collection of guides, standards, and documentation that hooks can pull from automatically.

Each file is like a reference card focused on one topic, written in clear Markdown. Let's create our first one, typescript-standards.md:
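A minimal sketch of what .claude/contexts/typescript-standards.md might contain - the specific rules here are illustrative, so adapt them to your project's conventions:

```markdown
# TypeScript Standards

## Types
- Prefer `interface` for object shapes; use `type` for unions and intersections
- Avoid `any`; use `unknown` plus type narrowing when a type is genuinely unclear

## Components
- Use function components with explicitly typed props
- Name component files in PascalCase (e.g., `UserCard.tsx`)

## General
- Enable `strict` mode in `tsconfig.json`
- Export one component per file
```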

Why keep it concise? When this gets injected into Claude's context, we want clear, scannable guidelines - not a novel. Think of it as a quick reference card, not a textbook. Claude can quickly read and apply these standards without getting overwhelmed.

Building Your First Context Injection Hook

Now we'll build a hook that detects when these standards would be helpful:
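Here's a sketch of that hook - it assumes `jq` is installed and that the context file lives at .claude/contexts/typescript-standards.md:

```shell
#!/bin/bash
# UserPromptSubmit hook: inject TypeScript standards when relevant.

# 1. Read the JSON payload Claude Code pipes to the hook and extract the prompt.
prompt=$(jq -r '.prompt // empty')

# 2. Case-insensitively check for any of our trigger keywords.
if echo "$prompt" | grep -qi 'typescript\|component\|react'; then
  # 3. Anything written to stdout gets prepended to Claude's context.
  echo "## Project TypeScript Standards"
  cat .claude/contexts/typescript-standards.md
fi
```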

Reading this in plain English:

  1. Read the user's prompt from the JSON data
  2. Check if it mentions "typescript" OR "component" OR "react" (case-insensitive)
  3. If yes, output a header and the full TypeScript standards file
  4. Everything we output (via echo and cat) gets prepended to Claude's context automatically

The grep -qi is our pattern matcher - like playing "Where's Waldo?" but searching for keywords. The -q means "quiet" (just yes/no), -i means "case-insensitive" (matches "TypeScript", "typescript", "TYPESCRIPT"), and \| means "or" (with a backslash to escape it in the pattern).

Configuring the UserPromptSubmit Hook

Activate context injection in .claude/settings.json:
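One possible configuration - the script path is an assumption, so point it at wherever you saved your hook:

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": ".claude/hooks/inject-typescript-context.sh",
            "timeout": 5
          }
        ]
      }
    ]
  }
}
```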

Note: UserPromptSubmit hooks do not support matchers - the matcher field is silently ignored. All configured hooks run on every prompt submission. We include the empty matcher field here to maintain consistent structure with other hook types, but you can omit it entirely if preferred.

The timeout ensures injection completes quickly (5 seconds is generous - most text file reads take milliseconds).

Letting Claude Choose the Right Context

Simple pattern matching works like a spell-checker - it follows exact rules ("if you see 'TypeScript', inject typescript-standards.md"). But what if you have many context files? Writing rules for every combination becomes tedious and error-prone.

Here's a smarter approach: ask Claude itself which contexts are relevant. This is like having a grammar checker that understands context - it knows whether "their", "there", or "they're" is correct based on sentence meaning, not just spelling rules.
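A sketch of this approach, assuming the `claude` CLI and `jq` are installed - the file list and descriptions below are illustrative:

```shell
#!/bin/bash
# LLM-based context selection: ask a separate Claude instance which
# context files fit this prompt.

PROMPT=$(jq -r '.prompt // empty')

SELECTION=""
if command -v claude >/dev/null 2>&1; then
  SELECTION=$(claude -p "A user submitted this prompt: '$PROMPT'

Available context files in .claude/contexts/:
- typescript-standards.md: TypeScript and React coding conventions
- testing-standards.md: unit and integration testing guidelines
- design-system.md: UI component and styling rules

Reply with ONLY a JSON object such as {\"files\": [\"testing-standards.md\"]},
listing the relevant files (use an empty array if none apply).")
fi
```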

What's happening here?

  1. We call the claude CLI (a separate Claude instance) with a question
  2. We list all available context files and describe what each contains
  3. We ask for a structured JSON response with just the relevant filenames in a files array
  4. Claude analyzes the prompt's intent and returns which files would be helpful
  5. We parse the JSON response to extract the list of files

Real-world example: You ask "Add tests for the login button."

  • Claude reads your prompt
  • Sees "tests" and "button"
  • Returns {"files": ["testing-standards.md", "design-system.md"]}
  • Both guides get injected automatically

This scales beautifully - add 20 more context files, and Claude intelligently picks only the 2-3 that matter for each specific prompt. No need to write complex matching rules.

Security Considerations for LLM-Based Context Selection

This AI-powered context selection is powerful for learning and personal workflows, but has important limitations for production or shared environments:

Key Security Issues:

  • Prompt Injection Risk: The $PROMPT variable is interpolated directly. A crafted prompt could manipulate which contexts get injected
  • Unreliable Output Format: LLM responses may not always match expected JSON structure, especially under rate limits or API errors
  • Information Disclosure: In multi-user environments, attackers could probe which context files exist and what they contain

Production Best Practices:

  • Use deterministic keyword matching as your primary context selection method
  • If using LLM selection, strictly validate JSON structure and sanitize file paths
  • Implement fail-safe defaults: if output format is unexpected, inject nothing
  • Audit context files to ensure none contain sensitive credentials or proprietary information
  • Consider whether context injection should be user-controlled rather than automatic

For personal development workflows, the convenience often outweighs these risks. For shared or production systems, use rule-based selection as your foundation.

Processing Claude's Recommendations and Loading Contexts

After Claude selects relevant files, we load and output each one:
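A sketch of this loading step. `SELECTION` would hold the LLM's JSON reply from the previous step; here we hard-code a sample value so the snippet is self-contained:

```shell
#!/bin/bash
# Sample LLM reply (in the real hook, this comes from the claude CLI call).
SELECTION='{"files": ["typescript-standards.md", "testing-standards.md"]}'

# Parse the JSON and emit one filename per line; if the output is not
# valid JSON, fall back to injecting nothing (fail-safe default).
files=$(printf '%s' "$SELECTION" | python3 -c '
import json, sys
try:
    data = json.load(sys.stdin)
    print("\n".join(data.get("files", [])))
except ValueError:
    pass  # unexpected format: inject nothing
')

# Load each selected context file, printing a header showing its origin.
while IFS= read -r file; do
  path=".claude/contexts/$file"
  if [ -n "$file" ] && [ -f "$path" ]; then
    echo "## Context from $file"
    cat "$path"
  fi
done <<< "$files"
```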

Breaking down the mechanics:

  • The Python code extracts each filename from Claude's JSON array and joins them with newlines (one filename per line)
  • The while IFS= read -r file loop reads each line (each filename)
  • We check the file exists before outputting it
  • The <<< syntax is called a "here-string" - it feeds the file list into the loop
  • Each context gets a clear header showing which file it came from

This scales elegantly: add more context files to .claude/contexts/, update the list in the hook's prompt, and Claude automatically selects the right ones. No code changes needed in the loading logic.

Preventing Dangerous Requests

Beyond enriching prompts, UserPromptSubmit hooks can act as guardrails - blocking prompts that request risky operations before Claude even sees them.

Think of it like a spam filter for dangerous commands:
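A minimal guard hook sketch (assumes `jq` is installed):

```shell
#!/bin/bash
# UserPromptSubmit guard: block prompts requesting destructive operations.

prompt=$(jq -r '.prompt // empty')

if echo "$prompt" | grep -Eqi 'delete database|drop table|rm -rf /'; then
  # Messages on stderr are shown in the terminal.
  echo "Blocked: this prompt requests a dangerous operation." >&2
  # Exit code 2 blocks the prompt entirely - Claude never sees it.
  exit 2
fi
```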

How this protects you:

  • grep -Eqi searches for multiple dangerous patterns: -E enables extended regex (so we can use | for "or"), -q means quiet (just yes/no), -i means case-insensitive
  • The pattern "delete database|drop table|rm -rf /" checks for three very dangerous operations
  • If found, we write a warning message to stderr (the >&2 means "show this in the terminal")
  • exit 2 is the magic code that says "block this prompt entirely" - just like in PreToolUse hooks
  • Claude never sees prompts that fail this check - they're stopped at the gate

Real scenario: You accidentally ask "Delete all database tables and start fresh." The hook catches this, blocks it, and warns you. Crisis averted before any damage is done.

This is especially valuable when you're tired, distracted, or working late - the hook provides a safety net for those moments when you might request something dangerous without fully thinking it through.

Combining Multiple UserPromptSubmit Hooks

You can configure multiple UserPromptSubmit hooks that run in parallel:
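For example - the two script paths here are assumptions, matching wherever you saved the injection and guard hooks:

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": ".claude/hooks/inject-context.sh",
            "timeout": 5
          },
          {
            "type": "command",
            "command": ".claude/hooks/block-dangerous.sh",
            "timeout": 5
          }
        ]
      }
    ]
  }
}
```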

Important: These hooks execute in parallel, not sequentially. Both hooks receive the same original prompt and run simultaneously. If any hook exits with code 2, the entire submission is blocked - the prompt never reaches Claude. Because each hook validates or enriches the prompt independently, without depending on another hook's output, you get both safety and intelligence at once - like having multiple assistants review your prompt simultaneously.

Conclusion and Next Steps

Outstanding work! You've now mastered all four core hook types: SessionStart for initialization, PreToolUse for operation validation, PostToolUse for automatic actions, and UserPromptSubmit for prompt enhancement. Together, these hooks create a complete automation layer that makes Claude smarter, safer, and more consistent.

We built simple context injection that detects patterns, intelligent selection that uses Claude to pick relevant contexts, and safety hooks that block dangerous requests. UserPromptSubmit works invisibly: you write natural prompts, and hooks ensure Claude has exactly the right information to respond effectively.

These patterns extend to countless use cases: injecting API documentation, adding architecture diagrams, including test coverage reports, or enforcing ticket references. The hooks run quickly - milliseconds for simple file reads, a few seconds when calling an LLM - maintain context consistency, and require no manual effort.

Now it's your turn to build these intelligent prompt enhancers and see how they transform your Claude interactions!
