Grounding Verification

HalluLens

Rule-based hallucination detector. Paste a source passage and an LLM-generated answer — every claim is verified against the source using 13 auditable rules. Fully deterministic, zero API calls, runs in your browser.

1. Source · Ground Truth: treated as the only authority.
2. Generated Answer · Claims: each sentence becomes one atomic claim.
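The claim-by-claim check can be sketched as a minimal lexical-overlap rule: split the answer into sentences, then score each sentence by how many of its words appear in the source. The 13 actual rules are not published on this page, so the `verifyClaims` function, tokenizer, and 0.7 threshold below are illustrative assumptions, not the tool's implementation.

```javascript
// Hypothetical sketch of ONE grounding rule (lexical overlap).
// HalluLens's real rule set is richer; this only shows the shape
// of a deterministic, client-side, per-claim check.

function tokenize(text) {
  // Lowercase and keep alphanumeric runs only.
  return text.toLowerCase().match(/[a-z0-9]+/g) || [];
}

function splitClaims(answer) {
  // One sentence = one atomic claim, per the tool's model.
  return answer.split(/(?<=[.!?])\s+/).filter(s => s.trim().length > 0);
}

function verifyClaims(source, answer, threshold = 0.7) {
  const sourceTokens = new Set(tokenize(source));
  return splitClaims(answer).map(claim => {
    const tokens = tokenize(claim);
    const hits = tokens.filter(t => sourceTokens.has(t)).length;
    const coverage = tokens.length ? hits / tokens.length : 0;
    return { claim, coverage, grounded: coverage >= threshold };
  });
}

const source = "The Eiffel Tower is 330 metres tall and stands in Paris.";
const answer = "The Eiffel Tower is 330 metres tall. It was built in 1887.";
console.log(verifyClaims(source, answer));
```

Here the first claim is fully covered by the source, while the fabricated date in the second claim shares almost no vocabulary with it and is flagged as ungrounded.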
What this tool will NOT catch

- Paraphrases (same meaning, different words).
- Semantic reasoning (implications not stated in the source).
- Cross-sentence contradictions.
- World-knowledge errors when the source is silent.
- Subtle numerical errors in units and scales.

This is a detector for some kinds of errors — not all.
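The paraphrase limitation is easy to demonstrate: any rule that compares surface words cannot see that two differently worded sentences mean the same thing. This is a hypothetical illustration with a made-up example pair, not the tool's own scoring code.

```javascript
// A correct paraphrase of the source, flagged anyway, because a
// surface-level word-overlap rule sees almost no shared vocabulary.
const source = "The medication must be taken twice daily.";
const claim = "Patients should swallow the drug two times per day.";

const sourceWords = new Set(source.toLowerCase().match(/[a-z]+/g));
const claimWords = claim.toLowerCase().match(/[a-z]+/g);
const hits = claimWords.filter(w => sourceWords.has(w)).length;
const overlap = hits / claimWords.length;

console.log(overlap.toFixed(2)); // well below any plausible grounding threshold
```

Catching cases like this requires semantic comparison (embeddings or entailment models), which is exactly what a deterministic, offline rule set trades away.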
