
The Geometry of Laziness: What Angles Reveal About AI Hallucinations

https://towardsdatascience.com/the-geometry-of-laziness-what-angles-reveal-about-ai-hallucinations/ (towardsdatascience.com)
A geometric approach is proposed for detecting hallucinations in Retrieval-Augmented Generation (RAG) systems by analyzing the relationships between text embeddings. The method, called the Semantic Grounding Index (SGI), treats the question, the retrieved context, and the generated response as points on a high-dimensional sphere. SGI measures whether a response moves semantically away from the question toward the context (a sign of grounding) or stays close to the question, a failure mode termed "semantic laziness." The technique is training-free, separates grounded responses from hallucinations by their angular relationships alone, and was validated across multiple embedding models. Its effectiveness is predicted by the spherical triangle inequality, and it is designed specifically to detect grounding failures, not factual inaccuracies.
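The summary doesn't give SGI's exact formula, but the angular comparison it describes is easy to sketch. Below is a minimal Python illustration under stated assumptions: the `grounding_score` ratio, the 384-dimensional synthetic vectors, and all function names are hypothetical stand-ins for the published method, chosen only to show the "moved toward context vs. stayed near question" geometry.

```python
import numpy as np

def angular_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Angle in radians between two vectors projected onto the unit sphere."""
    cos_sim = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against floating-point drift outside [-1, 1].
    return float(np.arccos(np.clip(cos_sim, -1.0, 1.0)))

def grounding_score(q: np.ndarray, c: np.ndarray, r: np.ndarray) -> float:
    """Hypothetical stand-in for SGI (NOT the article's published formula):
    how far the response has moved away from the question and toward the
    context, normalized by the question-context angle."""
    theta_qr = angular_distance(q, r)  # question -> response
    theta_cr = angular_distance(c, r)  # context  -> response
    theta_qc = angular_distance(q, c)  # question -> context (baseline)
    # Angular distance is a metric on the sphere, so the spherical triangle
    # inequality theta_qr <= theta_qc + theta_cr always holds; it bounds how
    # far any response can sit from the question given its distance to the
    # context, which is the geometric constraint the article appeals to.
    return (theta_qr - theta_cr) / theta_qc

# Synthetic 384-dim vectors stand in for real text embeddings
# (in practice these would come from any sentence-embedding model).
rng = np.random.default_rng(0)
question, context = rng.normal(size=384), rng.normal(size=384)

grounded = 0.3 * question + 0.7 * context  # response pulled toward context
lazy = 0.9 * question + 0.1 * context      # response stays near question

print(f"grounded: {grounding_score(question, context, grounded):+.2f}")
print(f"lazy:     {grounding_score(question, context, lazy):+.2f}")
```

Run as-is, the grounded response scores well above zero while the lazy one scores well below, mirroring the separation the article reports, though with a made-up score rather than the actual SGI.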
0 points by chrisf 9 hours ago

Comments (0)
