Author: Vladisav Jovanović
Status: Preprint
Version: 1.0, Official Preprint (February 2026)
Abstract: “AI hallucination” is often treated as a simple accuracy bug: the model produces false content. This paper argues the deeper problem is structural: a new epistemic regime in which coherence is mistaken for contact. Large language models do not merely generate occasional errors; they make it easy to outsource judgment itself, weakening the human witness and subordinating truth to persuasive completion. The paper frames hallucination across three sites: the model, the witness, and the institution. It uses an equation-as-instrument to isolate the core failure, coherence unbound by answerability to sources, and to name two missing components: a hard-floor constraint (non-negotiable pushback) and repair (the capacity to retract and update under cost). From this view, citations, retrieval, and disclaimers can improve surface quality without restoring obligation: they add information without restoring consequence and revision over time.
Keywords: structural intelligence; AI hallucination; large language models (LLMs); generative AI; epistemology; answerability; hard-floor constraint; truth; epistemic integrity; grounding; repair / revision