Hallucination
When an AI generates confident, plausible-sounding output that is factually wrong or completely made up.
Hallucination is the term for when an AI language model generates content that is incorrect, fabricated, or nonsensical, yet presents it with the same confidence as accurate information. The model does not "know" it is wrong; it is producing statistically plausible tokens, not verified facts.
Common examples in software development: the AI references a function or API method that does not exist, invents a library version number, or describes a framework behaviour that was changed in a recent update.
Hallucinations are more dangerous when they sound authoritative and specific. A vague answer is easy to catch; a confident wrong answer can get merged into production. The mitigation is verification: always check that AI-generated code actually runs, that API calls exist in the documentation, and that library names resolve to real packages.
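One part of that verification can be automated. The sketch below is a minimal Python example, assuming a hypothetical helper named `api_exists`, that checks whether a module an AI suggested can actually be imported and whether a dotted attribute path (e.g. a function the AI claims exists) really resolves on it:

```python
import importlib
import importlib.util


def api_exists(module_name: str, attr_path: str) -> bool:
    """Return True if module_name is importable and exposes attr_path.

    attr_path may be dotted, e.g. "path.join" inside the "os" module.
    """
    # A hallucinated package name will not resolve to an installable spec.
    if importlib.util.find_spec(module_name) is None:
        return False
    obj = importlib.import_module(module_name)
    # Walk the dotted path one attribute at a time.
    for part in attr_path.split("."):
        if not hasattr(obj, part):
            return False
        obj = getattr(obj, part)
    return True


# A real function resolves:
print(api_exists("os", "path.join"))      # True
# A hallucinated one does not:
print(api_exists("os", "path.join_all"))  # False
```

A check like this catches invented names, but not subtler hallucinations such as a real function described with the wrong behaviour, so reading the official documentation remains essential.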