Rule-based hallucination detector. Paste a source passage and an LLM-generated answer — every claim is verified against the source using 13 auditable rules. Fully deterministic, zero API calls, runs in your browser.
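The 13 rules themselves are not enumerated on this page, so here is a minimal sketch of what one deterministic grounding rule might look like. `checkNumbersGrounded` and `RuleResult` are hypothetical names for illustration, not the tool's actual API.

```ts
// A sketch of one deterministic grounding rule, assuming the tool's rules
// look roughly like this (hypothetical; not the tool's real API).
// Rule: every number in a claim must appear verbatim in the source.
type RuleResult = { rule: string; pass: boolean; detail: string };

function checkNumbersGrounded(source: string, claim: string): RuleResult {
  const numbers = claim.match(/\d+(?:\.\d+)?/g) ?? [];
  const missing = numbers.filter((n) => !source.includes(n));
  return {
    rule: "numbers-grounded",
    pass: missing.length === 0,
    detail: missing.length ? `ungrounded numbers: ${missing.join(", ")}` : "ok",
  };
}

// checkNumbersGrounded("The tower is 300 m tall.", "The tower is 324 m tall.")
// => { rule: "numbers-grounded", pass: false, detail: "ungrounded numbers: 324" }
```

Because every rule is a pure function of the two input strings, the same inputs always yield the same verdict, with no network calls involved.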
Source · Ground Truth
Treated as the only authority.
Generated Answer · Claims
Each sentence becomes one atomic claim.
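A sketch of that extraction step, assuming a simple sentence-boundary split; `splitIntoClaims` is a hypothetical helper, not the tool's real function:

```ts
// Claim extraction: one atomic claim per sentence (illustrative assumption).
function splitIntoClaims(answer: string): string[] {
  return answer
    .split(/(?<=[.!?])\s+/) // split after sentence-ending punctuation
    .map((s) => s.trim())
    .filter((s) => s.length > 0);
}

// splitIntoClaims("Paris is in France. It has 2M people.")
// => ["Paris is in France.", "It has 2M people."]
```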
What this tool will NOT catch
- Paraphrases (same meaning, different words).
- Semantic reasoning (implications not stated in the source).
- Cross-sentence contradictions.
- World-knowledge errors when the source is silent.
- Subtle numerical errors in units and scales.

This is a detector for some kinds of errors, not all of them. The sketch below shows why paraphrases in particular slip through purely lexical checks.
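As a rough illustration of the paraphrase limitation, here is a toy lexical-overlap check. It is an assumption for illustration, not one of the 13 rules: a faithful paraphrase can score worse than a fabricated claim that reuses the source's words.

```ts
// Why paraphrases slip through lexical checks (toy example, not a real rule):
// score = fraction of claim words that also appear in the source.
function tokenOverlap(source: string, claim: string): number {
  const toks = (t: string) => new Set(t.toLowerCase().match(/[a-z]+/g) ?? []);
  const src = toks(source);
  const clm = [...toks(claim)];
  return clm.filter((w) => src.has(w)).length / Math.max(clm.length, 1);
}

const source = "The bridge was completed in 1932.";
// A faithful paraphrase shares almost no words with the source...
tokenOverlap(source, "Construction finished during 1932."); // => 0 (flagged)
// ...while a false claim that reuses the source's words scores high.
tokenOverlap(source, "The bridge was demolished in 1932."); // => 0.8 (passes)
```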