Each guard uses a deterministic engine to verify a specific aspect of legal documents.
1. DeadlineGuard 📅
Purpose: Verify date calculations in contracts.
The Problem
LLMs frequently miscalculate deadlines:
- Confuse business days vs calendar days
- Ignore leap years
- Forget jurisdiction-specific holidays
The Solution
```python
from qwed_legal import DeadlineGuard

guard = DeadlineGuard(country="US", state="CA")

result = guard.verify(
    signing_date="2026-01-15",
    term="30 business days",
    claimed_deadline="2026-02-14"
)

print(result.verified)           # False
print(result.computed_deadline)  # 2026-02-27
print(result.difference_days)    # 13
```
Parameters

- signing_date: The date the contract was signed (ISO format or natural language).
- term: The term description (e.g., “30 days”, “30 business days”, “2 weeks”, “3 months”, “1 year”).
- claimed_deadline: The deadline claimed by the LLM.
- Tolerance (days): Allow +/- this many days when verifying the deadline. Useful for accommodating minor rounding differences.
Features
| Feature | Description |
|---|---|
| Business vs Calendar | Automatically detects “business days” vs “days” |
| Holiday Support | 200+ countries via python-holidays |
| Leap Years | Handles Feb 29 correctly |
| Natural Language | Parses “2 weeks”, “3 months”, “1 year” |
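The business-day arithmetic behind the example above can be approximated with the standard library. A minimal sketch that skips weekends only (the real guard also consults the jurisdiction's holiday calendar via python-holidays, which is what pushes the example's computed deadline to 2026-02-27):

```python
from datetime import date, timedelta

def add_business_days(start: date, n: int, holidays: frozenset = frozenset()) -> date:
    """Advance n business days past start, skipping weekends and any given holidays."""
    current, remaining = start, n
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5 and current not in holidays:  # Mon-Fri and not a holiday
            remaining -= 1
    return current

# 30 business days after 2026-01-15, weekends only (no holiday calendar):
print(add_business_days(date(2026, 1, 15), 30))  # 2026-02-26
```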
Calculate business days between dates
```python
guard = DeadlineGuard(country="US")

business_days = guard.calculate_business_days_between(
    start_date="2026-01-15",
    end_date="2026-02-14"
)
print(business_days)  # Number of business days excluding weekends and holidays
```
2. LiabilityGuard 💰
Purpose: Verify liability cap and indemnity calculations.
The Problem
LLMs get percentage math wrong:
- “200% of $5M = $15M” ❌ (Should be $10M)
- Float precision errors on large amounts
- Tiered liability miscalculations
The Solution
```python
from qwed_legal import LiabilityGuard

guard = LiabilityGuard()

result = guard.verify_cap(
    contract_value=5_000_000,
    cap_percentage=200,
    claimed_cap=15_000_000
)

print(result.verified)      # False
print(result.computed_cap)  # 10,000,000
print(result.difference)    # 5,000,000
```
Additional Methods
```python
# Tiered liability
result = guard.verify_tiered_liability(
    tiers=[
        {"base": 1_000_000, "percentage": 100},
        {"base": 500_000, "percentage": 50},
    ],
    claimed_total=1_250_000  # ✅ Correct: 1M + 250K
)

# Indemnity limit (3x annual fee)
result = guard.verify_indemnity_limit(
    annual_fee=100_000,
    multiplier=3,
    claimed_limit=300_000  # ✅ Correct
)
```
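The cap computation itself is simple percentage math; the float-precision pitfall noted above can be sidestepped with exact decimal arithmetic. A sketch of the idea (an assumption about the approach, not the library's internals):

```python
from decimal import Decimal

def compute_cap(contract_value: int, cap_percentage: int) -> Decimal:
    """Exact cap = contract_value * cap_percentage / 100, with no float rounding."""
    return Decimal(contract_value) * Decimal(cap_percentage) / Decimal(100)

print(compute_cap(5_000_000, 200))  # 10000000
```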
3. ClauseGuard ⚖️
Purpose: Detect contradictory clauses using text heuristics and optional Z3 verification.
The Problem
LLMs miss logical contradictions:
- “Seller may terminate with 30 days notice”
- “Neither party may terminate before 90 days”
These clauses conflict for days 30-90!
The Solution
The primary check_consistency() method uses text heuristics to detect conflicts. For formal logic verification, use verify_using_z3().
```python
from qwed_legal import ClauseGuard

guard = ClauseGuard()

result = guard.check_consistency([
    "Seller may terminate with 30 days notice",
    "Neither party may terminate before 90 days",
    "Seller may terminate immediately upon breach"
])

print(result.consistent)  # False
print(result.conflicts)
# [(0, 1, "Termination notice (30 days) conflicts with minimum term (90 days)")]
```
Detection types
| Conflict Type | Description |
|---|---|
| Termination | Notice period vs minimum term |
| Permission/Prohibition | “May” vs “May not” |
| Exclusivity | Multiple exclusive rights |
Z3-based verification
For power users who want to define precise logical constraints:
```python
result = guard.verify_using_z3([
    "constraint_a",
    "constraint_b",
])

print(result.consistent)  # True if constraints are satisfiable
print(result.message)     # "✅ VERIFIED: Constraints are satisfiable."
```
4. CitationGuard 📚
Purpose: Verify legal citations are properly formatted and potentially real.
The Problem
In Mata v. Avianca, lawyers filed a brief drafted with ChatGPT that cited six nonexistent court cases; they were fined $5,000 and sanctioned.
The Solution
```python
from qwed_legal import CitationGuard

guard = CitationGuard()

# Valid citation
result = guard.verify("Brown v. Board of Education, 347 U.S. 483 (1954)")
print(result.valid)  # True
print(result.parsed_components)
# {'volume': 347, 'reporter': 'U.S.', 'page': '483'}

# Invalid citation (fake reporter)
result = guard.verify("Smith v. Jones, 999 FAKE 123 (2020)")
print(result.valid)   # False
print(result.issues)  # ["Unknown reporter"]
```
Supported citation patterns
| Pattern | Format | Example |
|---|---|---|
| US Supreme Court | volume U.S. page | 347 U.S. 483 |
| US Federal | volume F./F.2d/F.3d page | 500 F.3d 120 |
| UK Neutral | [year] court number | [2023] UKSC 10 |
| India AIR | AIR year court page | AIR 2020 SC 100 |
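Pattern-based parsing of US reporter citations can be sketched with a single regular expression. This is a hypothetical approximation of the kind of pattern behind the table, not CitationGuard's actual implementation:

```python
import re

# Hypothetical pattern for "volume REPORTER page" citations, e.g. "347 U.S. 483"
US_CITATION = re.compile(r"(?P<volume>\d+)\s+(?P<reporter>U\.S\.|F\.(?:2d|3d)?)\s+(?P<page>\d+)")

m = US_CITATION.search("Brown v. Board of Education, 347 U.S. 483 (1954)")
print(m.groupdict())  # {'volume': '347', 'reporter': 'U.S.', 'page': '483'}
```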
Batch Verification
```python
result = guard.verify_batch([
    "Brown v. Board, 347 U.S. 483 (1954)",
    "Fake v. Case, 999 X.Y.Z. 123",
])

print(result.total)    # 2
print(result.valid)    # 1
print(result.invalid)  # 1
```
Statute Citations
```python
result = guard.check_statute_citation("42 U.S.C. § 1983")

print(result.valid)  # True
print(result.parsed_components)
# {'title': 42, 'code': 'U.S.C.', 'section': '1983'}
```
5. JurisdictionGuard 🌍
Purpose: Verify choice of law and forum selection clauses.
The Problem
LLMs miss jurisdiction conflicts:
- Governing law in one country, forum in another
- Missing CISG applicability warnings
- Cross-border legal system mismatches
The Solution
```python
from qwed_legal import JurisdictionGuard

guard = JurisdictionGuard()

result = guard.verify_choice_of_law(
    parties_countries=["US", "UK"],
    governing_law="Delaware",
    forum="London"
)

print(result.verified)   # False - mismatch detected
print(result.conflicts)  # ["Governing law 'Delaware' (US state) but forum 'London' is non-US..."]
```
Features
| Feature | Description |
|---|---|
| Choice of Law | Validates governing law makes sense for parties |
| Forum Selection | Checks forum vs governing law alignment |
| CISG Detection | Warns about international sale of goods conventions |
| Convention Check | Verifies Hague, NY Convention applicability |
Convention Check
```python
result = guard.check_convention_applicability(
    parties_countries=["US", "DE"],
    convention="CISG"
)

print(result.verified)  # True - both are CISG members
```
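A convention check reduces to a membership lookup over the convention's contracting states. A sketch under that assumption (the member set here is an illustrative subset, not the full list the guard would ship with):

```python
CISG_MEMBERS = {"US", "DE", "FR", "CN"}  # illustrative subset of contracting states

def convention_applies(parties_countries: list, members: set) -> bool:
    """A convention applies only if every party's country is a contracting state."""
    return all(country in members for country in parties_countries)

print(convention_applies(["US", "DE"], CISG_MEMBERS))  # True
print(convention_applies(["US", "UK"], CISG_MEMBERS))  # False - the UK is not a CISG party
```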
6. StatuteOfLimitationsGuard ⏰
Purpose: Verify claim limitation periods by jurisdiction.
The Problem
LLMs don’t track jurisdiction-specific limitation periods:
- California breach of contract: 4 years
- New York breach of contract: 6 years
- Different periods for negligence, fraud, etc.
The Solution
```python
from qwed_legal import StatuteOfLimitationsGuard

guard = StatuteOfLimitationsGuard()

result = guard.verify(
    claim_type="breach_of_contract",
    jurisdiction="California",
    incident_date="2020-01-15",
    filing_date="2026-06-01"
)

print(result.verified)         # False - 4 year limit exceeded!
print(result.expiration_date)  # 2024-01-15
print(result.days_remaining)   # -867 (negative = expired)
```
Supported jurisdictions
12 jurisdictions are supported with periods for 10 claim types.
| Jurisdiction | Breach of Contract | Negligence | Fraud |
|---|---|---|---|
| California | 4 years | 2 years | 3 years |
| New York | 6 years | 3 years | 6 years |
| Texas | 4 years | 2 years | 4 years |
| Delaware | 3 years | 2 years | 3 years |
| Florida | 5 years | 4 years | 4 years |
| Illinois | 5 years | 2 years | 5 years |
| UK/England | 6 years | 6 years | 6 years |
| Germany | 3 years | 3 years | 10 years |
| France | 5 years | 5 years | 5 years |
| Australia | 6 years | 6 years | 6 years |
| India | 3 years | 3 years | 3 years |
| Canada | 2 years | 2 years | 6 years |
Supported claim types
breach_of_contract, breach_of_warranty, negligence, professional_malpractice, fraud, personal_injury, property_damage, employment, product_liability, defamation
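The core computation is the incident date plus the jurisdiction's limitation period. A stdlib sketch using an excerpt of the table above (a hypothetical helper, not the guard's code):

```python
from datetime import date

# Excerpt of the limitation table above, in years
LIMITS = {("California", "breach_of_contract"): 4,
          ("New York", "breach_of_contract"): 6}

def expiration_date(incident: date, jurisdiction: str, claim_type: str) -> date:
    years = LIMITS[(jurisdiction, claim_type)]
    try:
        return incident.replace(year=incident.year + years)
    except ValueError:  # incident fell on Feb 29 and the target year is not a leap year
        return incident.replace(year=incident.year + years, day=28)

print(expiration_date(date(2020, 1, 15), "California", "breach_of_contract"))  # 2024-01-15
```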
Compare Jurisdictions
```python
comparison = guard.compare_jurisdictions(
    "breach_of_contract",
    ["California", "New York", "Delaware"]
)
# {'California': 4.0, 'New York': 6.0, 'Delaware': 3.0}
```
7. IRACGuard 📝
Purpose: Verify that legal reasoning follows the IRAC framework (Issue, Rule, Application, Conclusion).
The Problem
LLMs produce legal advice that lacks structured reasoning:
- Missing clear identification of the legal issue
- No citation of applicable rules or statutes
- Conclusions without proper application of law to facts
The Solution
```python
from qwed_legal import IRACGuard

guard = IRACGuard()

llm_output = """
Issue: Whether the defendant breached the employment contract.
Rule: Under California Labor Code § 2922, employment is presumed at-will.
Application: The defendant terminated employment without the 30-day notice
required by the contract, which modified the at-will presumption.
Conclusion: The defendant breached the employment contract.
"""

result = guard.verify_structure(llm_output)
print(result["verified"])    # True
print(result["components"])  # {'issue': '...', 'rule': '...', 'application': '...', 'conclusion': '...'}
```
Detection Types
| Check | Description |
|---|---|
| Structure | Verifies all 4 IRAC components are present |
| Logical Disconnect | Detects when Application doesn’t reference the Rule |
| Missing Steps | Identifies which IRAC components are missing |
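The structure check can be approximated by searching for each labelled component. A hypothetical sketch of that idea, not IRACGuard's actual implementation:

```python
import re

IRAC_COMPONENTS = ["issue", "rule", "application", "conclusion"]

def missing_components(text: str) -> list:
    """Return the IRAC components that have no 'Label:' heading in the text."""
    return [c for c in IRAC_COMPONENTS
            if not re.search(rf"^\s*{c}\s*:", text, re.IGNORECASE | re.MULTILINE)]

print(missing_components("Issue: ...\nRule: ...\nApplication: ...\nConclusion: ..."))  # []
print(missing_components("The defendant should pay damages."))
# ['issue', 'rule', 'application', 'conclusion']
```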
Error Response
```python
result = guard.verify_structure("The defendant should pay damages.")

print(result["verified"])  # False
print(result["error"])     # "Failed Reasoned Elaboration. Missing steps: issue, rule, application, conclusion..."
print(result["missing"])   # ['issue', 'rule', 'application', 'conclusion']
```
8. FairnessGuard ⚖️
Purpose: Detect algorithmic bias using counterfactual testing.
The Problem
AI legal systems can exhibit bias based on protected attributes:
- Different sentencing recommendations based on gender
- Inconsistent contract assessments based on party names
- Discriminatory loan approval reasoning
The Solution
```python
from qwed_legal import FairnessGuard

# Requires an LLM client for counterfactual generation
guard = FairnessGuard(llm_client=my_llm)

result = guard.verify_decision_fairness(
    original_prompt="Should John Smith receive parole given his rehabilitation record?",
    original_decision="Parole recommended based on positive rehabilitation.",
    protected_attribute_swap={"John": "Jane", "his": "her"}
)

print(result["verified"])  # True if decision is consistent
print(result["status"])    # "FAIRNESS_VERIFIED" or "JUDICIAL_BIAS_DETECTED"
```
How It Works
- Early exit - If protected_attribute_swap is empty ({}), returns immediately with NO_SWAP_REQUIRED without calling the LLM
- Counterfactual Generation - Swaps protected attributes (names, pronouns) while preserving case
- Re-evaluation - Runs the modified prompt through the LLM
- Deterministic Comparison - Checks if outcomes match exactly
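The counterfactual-generation step can be sketched as whole-word substitution (a simplification; per the steps above, the actual guard also preserves case):

```python
import re

def swap_attributes(prompt: str, swaps: dict) -> str:
    """Build a counterfactual prompt by whole-word replacement of protected attributes."""
    for old, new in swaps.items():
        prompt = re.sub(rf"\b{re.escape(old)}\b", new, prompt)
    return prompt

original = "Should John receive parole given his rehabilitation record?"
print(swap_attributes(original, {"John": "Jane", "his": "her"}))
# Should Jane receive parole given her rehabilitation record?
```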
Detection Types
| Status | Description |
|---|---|
| FAIRNESS_VERIFIED | Decision unchanged after attribute swap |
| JUDICIAL_BIAS_DETECTED | Decision changed based on protected attributes |
| NO_SWAP_REQUIRED | No protected attributes to swap (empty dict passed) |
FairnessGuard requires an LLM client at initialization. Without it, verify_decision_fairness() will raise a ValueError.
9. ContradictionGuard 🔍
Purpose: Detect logical contradictions in contracts using Z3 SMT Solver.
The Problem
Contracts can contain mathematically impossible combinations:
- “Liability capped at $10,000” + “Minimum penalty of $50,000”
- “Term is exactly 12 months” + “Minimum duration of 24 months”
Text-based heuristics (ClauseGuard) miss these formal logic conflicts.
The Solution
```python
from qwed_legal import ContradictionGuard, Clause

guard = ContradictionGuard()

clauses = [
    Clause(id="1", text="Liability capped at 10000", category="LIABILITY", value=10000),
    Clause(id="2", text="Penalty shall be 50000", category="LIABILITY", value=50000),
]

result = guard.verify_consistency(clauses)
print(result["verified"])  # False
print(result["message"])   # "❌ LOGIC CONTRADICTION: Clauses are mutually exclusive..."
```
Clause Structure
The Clause dataclass requires:
| Field | Type | Description |
|---|---|---|
| id | str | Unique clause identifier |
| text | str | Human-readable clause text |
| category | str | DURATION, LIABILITY, or TERMINATION |
| value | int | Normalized numeric value (days, dollars, etc.) |
Supported Categories
| Category | Detects |
|---|---|
| DURATION | Conflicting term lengths (exact vs min/max) |
| LIABILITY | Cap vs penalty contradictions |
Z3 vs ClauseGuard
| Feature | ClauseGuard | ContradictionGuard |
|---|---|---|
| Input | Raw text strings | Structured Clause objects |
| Method | Text heuristics | Z3 SMT Solver |
| Detects | Permission conflicts | Mathematical impossibilities |
| Use Case | Quick checks | Formal verification |
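For single-variable clause categories like these, the satisfiability question Z3 answers reduces to interval intersection. A stdlib sketch of that reduction (not the library's implementation, which uses the actual SMT solver):

```python
def constraints_consistent(constraints: list) -> bool:
    """Each constraint narrows [lo, hi] for one clause variable;
    an empty intersection means the clauses are mutually exclusive."""
    lo, hi = float("-inf"), float("inf")
    for kind, value in constraints:
        if kind == "max":        # e.g. liability capped at value
            hi = min(hi, value)
        elif kind == "min":      # e.g. penalty of at least value
            lo = max(lo, value)
        elif kind == "exact":    # e.g. term is exactly value
            lo, hi = max(lo, value), min(hi, value)
    return lo <= hi

print(constraints_consistent([("max", 10_000), ("min", 50_000)]))  # False (cap vs penalty)
print(constraints_consistent([("exact", 12), ("min", 24)]))        # False (12-month term vs 24-month minimum)
```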
10. ProvenanceGuard 🔗
Purpose: Verify AI-generated content carries proper provenance metadata and disclosure markers.
The Problem
AI transparency regulations (California CAITA 2026, EU AI Act Article 50) require AI-generated legal content to carry proper attribution. Without verification:
- Content may lack required AI-generation disclosures
- Provenance metadata can be incomplete or tampered with
- Unauthorized models may generate legal documents without audit trails
The Solution
```python
from qwed_legal import ProvenanceGuard

guard = ProvenanceGuard(
    require_disclosure=True,
    require_human_review=False,
    allowed_models=["gpt-4", "claude-3-opus"]
)

content = "This AI-generated document reviews the contract terms..."
provenance = {
    "content_hash": "a1b2c3...",  # SHA-256 of content
    "model_id": "gpt-4",
    "generation_timestamp": "2026-03-24T12:00:00+00:00",
}

result = guard.verify_provenance(content, provenance)

print(result["verified"])       # True or False
print(result["checks_passed"])  # ["metadata_completeness", "hash_integrity", ...]
print(result["checks_failed"])  # []
print(result["risk"])           # "" if verified, e.g. "CONTENT_TAMPERED" if not
```
Verification checks
ProvenanceGuard runs up to six checks. The first three always run; the last three are configurable.
| Check | Description | Always runs |
|---|---|---|
| Metadata completeness | content_hash, model_id, and generation_timestamp are present and non-empty | Yes |
| Hash integrity | SHA-256 of the content matches content_hash in provenance | Yes |
| Timestamp validity | ISO-8601 format, not in the future | Yes |
| Disclosure compliance | Content includes an AI-generation disclosure statement | If require_disclosure=True |
| Model allowlist | model_id is in the approved list | If allowed_models is set |
| Human review | human_reviewed is True in provenance | If require_human_review=True |
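The hash-integrity check is plain SHA-256 recomputation, which can be shown with hashlib:

```python
import hashlib

def hash_matches(content: str, claimed_hash: str) -> bool:
    """Recompute SHA-256 over the content and compare to the provenance record."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest() == claimed_hash

content = "This AI-generated document reviews the contract terms..."
record_hash = hashlib.sha256(content.encode("utf-8")).hexdigest()

print(hash_matches(content, record_hash))              # True
print(hash_matches(content + " edited", record_hash))  # False - flagged as CONTENT_TAMPERED
```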
Constructor parameters

- require_disclosure: Require AI disclosure text in the content (e.g., “AI-generated”, “produced by AI”).
- require_human_review: Require human_reviewed=True in provenance metadata.
- allowed_models (list[str] | None, default None): Allowlist of model IDs. None allows all models; an empty list denies all.
Generating provenance records
You can also use ProvenanceGuard to generate provenance metadata:
```python
from qwed_legal import ProvenanceGuard

guard = ProvenanceGuard()

record = guard.generate_provenance(
    content="This AI-generated contract summary...",
    model_id="gpt-4",
    disclosure_text="This document was generated by AI.",
    human_reviewed=True,
    reviewer_id="lawyer-42"
)

print(record.content_hash)          # SHA-256 hash
print(record.generation_timestamp)  # ISO-8601 UTC timestamp
print(record.human_reviewed)        # True
```
ProvenanceRecord fields
| Field | Type | Description |
|---|---|---|
| content_hash | str | SHA-256 hash of the AI-generated content |
| model_id | str | Identifier of the model that generated the content |
| generation_timestamp | str | ISO-8601 timestamp of generation |
| disclosure_text | str | Human-readable AI disclosure statement |
| human_reviewed | bool | Whether a human has reviewed the content |
| reviewer_id | str \| None | Identifier of the human reviewer |
Risk classifications
When verification fails, the risk field indicates the type of failure:
| Risk | Trigger |
|---|---|
| CONTENT_TAMPERED | Hash mismatch between content and content_hash |
| INCOMPLETE_PROVENANCE | Required metadata fields missing or empty |
| MISSING_DISCLOSURE | No AI-generation disclosure found in content |
| UNAUTHORIZED_MODEL | model_id not in the allowed models list |
| UNREVIEWED_CONTENT | human_reviewed is not True |
| INVALID_TIMESTAMP | Timestamp is malformed or in the future |
ProvenanceGuard is fully deterministic — no LLM calls required. All checks use SHA-256 hashing, regex pattern matching, and datetime validation.
SACProcessor (RAG Helper) 📄
Purpose: Prevent Document-Level Retrieval Mismatch (DRM) in legal RAG systems.
The Problem
Standard RAG chunking causes >95% retrieval mismatch in legal databases because:
- Legal documents share nearly identical boilerplate
- Chunk-level embeddings lose document context
- NDAs, contracts, and agreements look alike at the chunk level
The Solution
```python
from qwed_legal import SACProcessor

sac = SACProcessor(llm_client=my_llm)

# Your existing chunks
chunks = naive_split(contract_text)

# Augment with document fingerprint
augmented = sac.generate_sac_chunks(
    document_text=contract_text,
    chunks=chunks,
    document_id="NDA-2026-001"
)

# Each chunk now includes global context
print(augmented[0])
# DOCUMENT CONTEXT [NDA-2026-001]: NDA between Acme Corp and Beta Inc...
# CHUNK CONTENT [1/10]: Original chunk text here...
```
Configuration
| Parameter | Default | Description |
|---|---|---|
| target_summary_length | 150 | Character limit for document fingerprint |
| preview_chars | 5000 | Max chars sent to LLM for summarization |
Methods
| Method | Description |
|---|---|
| generate_sac_chunks() | Augment all chunks with document fingerprint |
| generate_fingerprint_only() | Get just the fingerprint for caching |
SACProcessor requires an LLM client. Generic (automated) summaries outperform expert-guided ones for retrieval.
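The augmentation step itself is deterministic once the fingerprint exists. A hypothetical sketch of the chunk format shown above (the fingerprint would come from the LLM summarizer; this is not SACProcessor's actual code):

```python
def augment_chunks(chunks: list, fingerprint: str, document_id: str) -> list:
    """Prefix every chunk with the document-level fingerprint and its position."""
    total = len(chunks)
    return [f"DOCUMENT CONTEXT [{document_id}]: {fingerprint}\n"
            f"CHUNK CONTENT [{i}/{total}]: {chunk}"
            for i, chunk in enumerate(chunks, start=1)]

out = augment_chunks(["Original chunk text here..."],
                     "NDA between Acme Corp and Beta Inc...",
                     "NDA-2026-001")
print(out[0].splitlines()[0])  # DOCUMENT CONTEXT [NDA-2026-001]: NDA between Acme Corp and Beta Inc...
```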
All-in-One: LegalGuard
For convenience, use the unified LegalGuard class:
```python
from qwed_legal import LegalGuard

# Optional: provide llm_client for FairnessGuard
guard = LegalGuard(
    llm_client=my_llm,
    provenance_config={
        "require_disclosure": True,
        "require_human_review": False,
        "allowed_models": ["gpt-4", "claude-3-opus"],
    }
)

# All 10 guards available
guard.verify_deadline(...)
guard.verify_liability_cap(...)
guard.check_clause_consistency(...)           # ClauseGuard (text heuristics)
guard.verify_citation(...)
guard.verify_jurisdiction(...)
guard.verify_statute_of_limitations(...)
guard.verify_irac_structure(...)              # v0.3.0
guard.verify_fairness(...)                    # v0.3.0 (requires llm_client)
guard.verify_contradiction(...)               # v0.3.0 (Z3 SMT Solver)
guard.verify_provenance(content, provenance)  # NEW in v0.4.0
```
Most guards are fully deterministic. Only verify_fairness() requires an LLM client.
Next Steps