
Your AI is lying to you – or hallucinating?


Category: Artificial Intelligence

Published on 25.05.2025


Case study: my AI lied to me

I simply wanted to know whether a certain legal change was already in effect – but the answer I got was wrong. Sounds harmless? Not if you rely on it professionally. The AI had hallucinated: invented dates, constructed connections, a seemingly plausible text – with zero factual value.

AI can be impressive – but it has a catch: it can lie. Not out of bad intent, but because it produces so-called hallucinations. That means it invents information when context or facts are missing.

What are AI hallucinations?

An AI hallucination means the model generates content that sounds correct – but is factually wrong or completely made up. This often happens when the question is vague, context is missing, or the topic is newer than the model's training data.

How can you reduce hallucinations?

If you want your AI to answer more reliably, do this:

  1. Ask precisely and provide context.
    The clearer the question, the smaller the room for wrong interpretation.
  2. Request verifiable facts.
    Ask for sources, numbers or concrete examples.
  3. Avoid vague commands.
    Prompts like “Tell me everything about…” invite invention.
  4. For current topics, require web browsing.
    The model doesn’t know what happened yesterday unless it fetches data.
  5. Personalize ChatGPT.
    You can control how it answers – for example: direct and factual.
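The prompting tips above can be sketched as a small helper that assembles a more hallucination-resistant prompt. A minimal illustration – the function and field names are my own, not part of any tool:

```python
def build_prompt(question: str, context: str = "", want_sources: bool = True) -> str:
    """Assemble a prompt following the tips above:
    ask precisely, provide context, request verifiable facts."""
    parts = []
    if context:
        parts.append(f"Context: {context}")  # tip 1: provide context
    parts.append(f"Question: {question}")    # tip 1: ask precisely
    if want_sources:
        # tip 2: request verifiable facts instead of free-form text
        parts.append(
            "Cite sources and concrete numbers. "
            "If you are unsure, say so instead of guessing."
        )
    return "\n".join(parts)

prompt = build_prompt(
    "Is the new regulation already in effect?",
    context="I need the current legal status for a client report.",
)
print(prompt)
```

The point is not the code itself but the discipline it encodes: every prompt carries its context, and every answer is asked to be checkable.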

How to set it up:

  1. Open ChatGPT (app or browser).
  2. Click your profile picture bottom left.
  3. Go to Settings > Personalization.
  4. Under “Instructions for ChatGPT”, paste this text:

Answer directly and pragmatically. Say things as they are and don’t sugarcoat anything. If my assumption is wrong, point it out. Sound smart, factual and direct.

From now on, ChatGPT will respond the way you need it to – honest, direct and clear.
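If you use ChatGPT through the API rather than the app, the same instruction goes into the system message. A sketch, assuming the official `openai` Python package – the model name and client call in the comment are illustrative:

```python
# The custom instruction from above, reused as an API system message.
INSTRUCTION = (
    "Answer directly and pragmatically. Say things as they are and don't "
    "sugarcoat anything. If my assumption is wrong, point it out. "
    "Sound smart, factual and direct."
)

def make_messages(user_question: str) -> list[dict]:
    """Build the messages list; the system role carries the instruction."""
    return [
        {"role": "system", "content": INSTRUCTION},
        {"role": "user", "content": user_question},
    ]

messages = make_messages("Is the new regulation already in effect?")
# With an API key configured, you would pass this to the client, e.g.:
# client.chat.completions.create(model="gpt-4o", messages=messages)
```

The in-app personalization setting and the system message serve the same purpose: they shape every answer without you repeating the instruction each time.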

What does this have to do with your website?

If you use AI to create content – for texts, product descriptions or blog posts – keep in mind: you are responsible for correctness. AI is a tool, not a truth machine. The better your input, the cleaner the output.

More about AI?

RAG models work differently than classic AI systems.
You might find this article interesting:

AI search engines vs LLMs

Want to use AI in a useful way?

I help you build clear workflows, review content, and put your website on a more reliable footing.

Request a consultation