Section 1 - Instruction

You've learned how AI uses tools and follows system prompts. But what happens when AI says "no"?

AI refusals aren't glitches - they're deliberate safety features programmed by developers to prevent harmful outputs.

Engagement Message

Have you ever had an AI refuse to answer your question?

Section 2 - Instruction

AI refuses requests for three main reasons: safety guardrails, technical limitations, and policy compliance.

Safety refusals prevent harmful content like hate speech or dangerous instructions. Technical refusals happen when AI lacks knowledge or capability. Policy refusals enforce rules set by the company operating the AI, such as declining to give personalized medical or legal advice.

Engagement Message

Which type of refusal would frustrate you more - safety or technical?

Section 3 - Instruction

When AI says "I can't help with that because it may be harmful," that's an ethical refusal based on programmed values.

These aren't the AI's personal moral choices - they're rules set by human programmers and ethicists.

Engagement Message

Should AI have more or fewer restrictions on what it can discuss?

Section 4 - Instruction

Technical refusals sound like "As a language model, I don't have access to current news" or "I can't perform physical actions."

These happen when requests exceed the AI's actual capabilities or knowledge cutoff date.

Engagement Message

What's a reasonable technical limitation you'd expect AI to have?

Section 5 - Instruction

Sometimes AI gives disclaimers instead of full refusals: "I'm not a doctor, but here's general information..."

This acknowledges limitations while still trying to be helpful within appropriate boundaries.

Engagement Message

Do you prefer AI that admits uncertainty or always tries to answer?

Section 6 - Instruction

The "reject option" means AI can say "I don't know" when uncertain rather than guessing incorrectly.
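The reject option can be sketched in code: a classifier that abstains when its confidence falls below a threshold instead of guessing. This is a minimal illustrative example, the labels and threshold value here are made-up assumptions, not part of any real AI system.

```python
# A minimal sketch of the "reject option": the classifier returns None
# ("I don't know") when its top confidence is below a threshold,
# rather than guessing incorrectly. Threshold and labels are illustrative.

def classify_with_reject(probabilities, labels, threshold=0.75):
    """Return the most likely label, or None when too uncertain."""
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    if probabilities[best] < threshold:
        return None  # reject: admit uncertainty instead of guessing
    return labels[best]

labels = ["cat", "dog"]
print(classify_with_reject([0.95, 0.05], labels))  # confident: "cat"
print(classify_with_reject([0.55, 0.45], labels))  # uncertain: None
```

Raising the threshold makes the system refuse more often but reduces wrong answers, which is the same trade-off AI developers tune when deciding how cautious a model should be.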
