Optimizing Interactions and Ethical Practices

You've mastered prompts and conversations — now let's refine your AI communication skills further. Have you ever found yourself asking for something budget-friendly but receiving a gourmet recipe fit for a Michelin-starred restaurant? These moments highlight why it's key to recognize off-target AI outputs and adjust your prompts accordingly. Let's explore how.

Identifying When AI Outputs Are Off-Target

AI isn't perfect. Watch for these red flags as you collaborate with it:

  • Irrelevant Answers: Responses ignore key parts of your prompt.
    • Example: Asking for budget-friendly meal plans but getting gourmet recipes.
  • Factual Errors ("Hallucinations"): AI invents plausible-sounding but incorrect details.
    • Example: Citing a non-existent study or misstating historical dates.
  • Repetition: The AI recycles phrases or ideas without adding value.
  • Tone Mismatch: Casual responses to formal requests, or vice versa.

Pro Tip: Always verify critical information (e.g., medical or legal advice) with trusted sources.

Where AI Is Most Likely to Hallucinate

Understanding where hallucinations commonly occur helps you stay alert:

  • Obscure or Recent Events: AI may struggle with niche topics, very new research, or developing news stories, especially if its training data is outdated. LLMs are usually reliable on common knowledge, but stay cautious once you start digging deeper into a topic.
  • Highly Specific Facts: Requests for exact numbers, dates, or names (e.g., “List five companies founded in April 2023”) can prompt fabricated details.
  • Math: An LLM is not a calculator and will struggle with math. It will usually get very simple arithmetic right, but you should never trust an LLM to do any real calculations (see the sketch below).
  • Citations and Sources: The AI might invent URLs, article titles, or attribute quotes/research to nonexistent authors.
  • Complex Technical or Legal Topics: Lacking true subject matter expertise, AI can confidently provide incorrect or oversimplified explanations.
  • Emerging Terminology: Terms or jargon that emerged after the AI's last training cutoff can be misunderstood or misrepresented. Hint: most LLMs "know" the date of their training data cutoff, so you can simply ask: "What is the latest date you have data about?"

If accuracy is critical, investigate the facts or references provided before relying on or sharing the answer.
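
A quick way to guard against the math pitfall above: when an answer hinges on a calculation, reproduce it with ordinary code or a calculator instead of trusting the model. Here is a minimal Python sketch; the "LLM answer" below is a made-up illustration, not a real model output.

    # Hypothetical: an LLM claims that 1,247 * 833 = 1,038,117.
    llm_answer = 1_038_117

    # Ordinary arithmetic never hallucinates.
    actual = 1247 * 833  # 1,038,751

    if llm_answer != actual:
        print(f"The LLM's answer {llm_answer:,} is wrong; the real product is {actual:,}.")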

Strategies for Refining Prompts

When results miss the mark, give your AI clearer directions. A few small tweaks can make a big difference:

  1. Clarify Scope:

    • Original: Explain climate change.
    • Improved: Explain three human activities driving climate change, using layman's terms.
  2. Add Guardrails:

    • Original: Write a job description.
    • Improved: Write a job description for a social media manager. Exclude requirements related to graphic design.
  3. Break Tasks Into Steps:

    • Original: Plan a conference.
    • Improved: List 6 steps to plan a 200-person tech conference, prioritizing attendee engagement and a budget under $50K.
  4. Request Citations:

    • Example: Suggest peer-reviewed studies about AI ethics published after 2022. Provide URLs if available. Remember: AI can easily invent a peer-reviewed article that never existed and is a complete fake. Or it might point you to an existing article but misunderstand its contents. Always double-check the sources.
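
If the AI does return URLs, a quick script can catch dead links before you rely on them. Below is a minimal Python sketch, assuming the third-party requests package is installed; the URL in the list is a placeholder, not a real citation. A page that loads only proves the link works; you still need to read the source to confirm the AI described it accurately.

    # Sanity-check that cited URLs at least resolve.
    # Assumes the `requests` package (pip install requests).
    import requests

    urls = [
        "https://example.com/hypothetical-ai-ethics-study",  # placeholder, not a real citation
    ]

    for url in urls:
        try:
            status = requests.head(url, allow_redirects=True, timeout=10).status_code
            print(url, "->", "reachable" if status < 400 else f"returned HTTP {status}")
        except requests.RequestException as exc:
            print(url, "-> unreachable:", exc)
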
Ethical Considerations and Best Practices

Remember: AI should support, not undermine, good judgment. Here's how you can keep it responsible:

  • Avoid Bias Reinforcement:
    • AI can reflect biases in training data. Always review outputs for stereotypes (e.g., gender roles in job descriptions).
  • Protect Privacy:
    • Never share sensitive personal or company data in prompts.
  • Combat Misinformation:
    • Don't use AI to generate false content (e.g., fake reviews).
  • Transparency:
    • Disclose AI use when appropriate (e.g., "This email was drafted with AI assistance").

Case Study:
Picture a marketing email claiming a product "boosts productivity by 300%" without evidence. The ethical response: remove the unverified claim or clearly back it with data. This approach maintains trust and prevents misleading content.

Key Takeaways
  • Always verify critical AI outputs.
  • Treat prompt refinement as a collaboration — guide your AI with clarity.
  • Stay ethical — AI is a powerful ally, but human oversight remains essential.

Ready to put these principles into action? Let's practice!
