Features
- Chain-of-Thought Validation - Parse and verify reasoning steps
- Result Caching - LRU + Redis for repeated queries
- Multi-Provider Support - Anthropic, Azure OpenAI, Google Gemini, OpenAI
- Semantic Fact Extraction - Identify verifiable claims
Usage
Multi-Provider Verification
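The README does not show the engine's actual provider API, so the following is a minimal sketch of how provider routing could look: a registry mapping provider names to completion callables. All names here (`register`, `verify_claim`, `VerificationResult`) are illustrative assumptions, not the engine's real interface.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class VerificationResult:
    provider: str
    verdict: str

# A completion function takes (model, prompt) and returns the raw text reply.
ProviderFn = Callable[[str, str], str]

# Hypothetical registry: each supported backend (Anthropic, Azure OpenAI,
# Google Gemini, OpenAI) would register a thin wrapper around its SDK here.
PROVIDERS: Dict[str, ProviderFn] = {}

def register(name: str, fn: ProviderFn) -> None:
    PROVIDERS[name] = fn

def verify_claim(claim: str, provider: str, model: str) -> VerificationResult:
    """Route one verification prompt to the chosen provider."""
    if provider not in PROVIDERS:
        raise KeyError(f"unknown provider: {provider}")
    prompt = (
        "Is the following claim supported? "
        f"Answer SUPPORTED or UNSUPPORTED.\n{claim}"
    )
    raw = PROVIDERS[provider](model, prompt)
    return VerificationResult(provider=provider, verdict=raw.strip())
```

In practice each registered function would call the corresponding vendor SDK; a dict-based registry keeps the verification logic itself provider-agnostic.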
Caching
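The Features list describes LRU + Redis caching. One way to sketch that two-tier design is an in-process LRU in front of an optional Redis backend; the class name, key scheme, and method names below are assumptions for illustration, not the engine's actual API.

```python
import hashlib
import json
from collections import OrderedDict

class ResultCache:
    """Illustrative two-tier cache: local LRU backed by optional Redis."""

    def __init__(self, max_entries: int = 1024, redis_client=None):
        self._lru: OrderedDict = OrderedDict()
        self._max = max_entries
        self._redis = redis_client  # e.g. redis.Redis(); may be None

    @staticmethod
    def _key(provider: str, model: str, prompt: str) -> str:
        # Hash the full request so identical queries share one entry.
        blob = json.dumps({"p": provider, "m": model, "q": prompt}, sort_keys=True)
        return hashlib.sha256(blob.encode()).hexdigest()

    def get(self, provider: str, model: str, prompt: str):
        key = self._key(provider, model, prompt)
        if key in self._lru:
            self._lru.move_to_end(key)  # mark as most recently used
            return self._lru[key]
        if self._redis is not None:
            hit = self._redis.get(key)
            if hit is not None:
                value = json.loads(hit)
                self._store_local(key, value)  # promote to the LRU tier
                return value
        return None

    def put(self, provider: str, model: str, prompt: str, result) -> None:
        key = self._key(provider, model, prompt)
        self._store_local(key, result)
        if self._redis is not None:
            self._redis.set(key, json.dumps(result))

    def _store_local(self, key: str, value) -> None:
        self._lru[key] = value
        self._lru.move_to_end(key)
        if len(self._lru) > self._max:
            self._lru.popitem(last=False)  # evict least recently used
```

Hashing the serialized request keeps keys fixed-length, and the Redis tier lets separate worker processes share hits while each keeps its own fast local LRU.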
The engine caches results to avoid redundant LLM calls.

Formula equivalence
When verifying whether two arithmetic formulas produce the same result, the engine uses a safe AST-based evaluator instead of Python's eval(). The evaluator allows only basic arithmetic operators (+, -, *, /, **) and numeric literals; any other expression is rejected. This prevents code-injection risks while still supporting numeric fallback checks during reasoning validation.
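A minimal sketch of such a whitelist evaluator, built on Python's standard `ast` module. The function names (`safe_eval`, `formulas_equivalent`) and the numeric tolerance are illustrative assumptions, but the technique matches the description: walk the parsed tree and reject any node outside the allowed set.

```python
import ast
import operator

# Whitelisted operators; names, calls, attributes, etc. are all rejected.
_BIN_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
}
_UNARY_OPS = {ast.UAdd: operator.pos, ast.USub: operator.neg}

def safe_eval(formula: str) -> float:
    """Evaluate a purely arithmetic expression without eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _BIN_OPS:
            return _BIN_OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _UNARY_OPS:
            return _UNARY_OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"disallowed expression: {ast.dump(node)}")
    return walk(ast.parse(formula, mode="eval"))

def formulas_equivalent(a: str, b: str, tol: float = 1e-9) -> bool:
    """Numeric fallback check: do both formulas evaluate to the same value?"""
    return abs(safe_eval(a) - safe_eval(b)) <= tol
```

Because every node type must match the whitelist, an injection attempt like `__import__('os')` parses to a `Call` node and is rejected before anything executes.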