Each guard uses a deterministic engine to verify a specific aspect of legal documents.

1. DeadlineGuard 📅

Purpose: Verify date calculations in contracts.

The Problem

LLMs frequently miscalculate deadlines:
  • Confuse business days vs calendar days
  • Ignore leap years
  • Forget jurisdiction-specific holidays

The Solution

from qwed_legal import DeadlineGuard

guard = DeadlineGuard(country="US", state="CA")

result = guard.verify(
    signing_date="2026-01-15",
    term="30 business days",
    claimed_deadline="2026-02-14"
)

print(result.verified)           # False
print(result.computed_deadline)  # 2026-02-27
print(result.difference_days)    # 13

Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| signing_date | str | required | The date the contract was signed (ISO format or natural language). |
| term | str | required | The term description (e.g., “30 days”, “30 business days”, “2 weeks”, “3 months”, “1 year”). |
| claimed_deadline | str | required | The deadline claimed by the LLM. |
| tolerance_days | int | 0 | Allow +/- this many days when verifying the deadline. Useful for accommodating minor rounding differences. |

Features

| Feature | Description |
| --- | --- |
| Business vs Calendar | Automatically detects “business days” vs “days” |
| Holiday Support | 200+ countries via python-holidays |
| Leap Years | Handles Feb 29 correctly |
| Natural Language | Parses “2 weeks”, “3 months”, “1 year” |

Calculate business days between dates

guard = DeadlineGuard(country="US")

business_days = guard.calculate_business_days_between(
    start_date="2026-01-15",
    end_date="2026-02-14"
)

print(business_days)  # Number of business days excluding weekends and holidays
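To make the business-vs-calendar distinction concrete, here is a stdlib-only sketch of business-day counting that skips weekends but, unlike the real guard, consults no holiday calendar (so its results will differ from DeadlineGuard's whenever holidays fall in the window):

```python
from datetime import date, timedelta

def add_business_days(start: date, n: int) -> date:
    """Advance n business days from start, skipping Saturdays and Sundays."""
    current = start
    remaining = n
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Mon=0 .. Fri=4 are business days
            remaining -= 1
    return current

# 30 business days from Thu 2026-01-15, counting weekends only (no holidays)
print(add_business_days(date(2026, 1, 15), 30))  # 2026-02-26
```

Note the weekend-only result lands earlier than the guard's computed deadline above, which also excludes California holidays in the window.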

2. LiabilityGuard 💰

Purpose: Verify liability cap and indemnity calculations.

The Problem

LLMs get percentage math wrong:
  • “200% of $5M = $15M” ❌ (should be $10M)
  • Float precision errors on large amounts
  • Tiered liability miscalculations

The Solution

from qwed_legal import LiabilityGuard

guard = LiabilityGuard()

result = guard.verify_cap(
    contract_value=5_000_000,
    cap_percentage=200,
    claimed_cap=15_000_000
)

print(result.verified)      # False
print(result.computed_cap)  # 10,000,000
print(result.difference)    # 5,000,000

Additional Methods

# Tiered liability
result = guard.verify_tiered_liability(
    tiers=[
        {"base": 1_000_000, "percentage": 100},
        {"base": 500_000, "percentage": 50},
    ],
    claimed_total=1_250_000  # ✅ Correct: 1M + 250K
)

# Indemnity limit (3x annual fee)
result = guard.verify_indemnity_limit(
    annual_fee=100_000,
    multiplier=3,
    claimed_limit=300_000  # ✅ Correct
)
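The arithmetic behind these checks is simple; what matters is doing it exactly. A sketch using Decimal to avoid float precision drift on large amounts (illustrative only, not the library's internals):

```python
from decimal import Decimal

def compute_cap(contract_value: int, cap_percentage: int) -> Decimal:
    """Liability cap = contract value x (percentage / 100), in exact decimal arithmetic."""
    return Decimal(contract_value) * Decimal(cap_percentage) / Decimal(100)

print(compute_cap(5_000_000, 200))  # 10000000, not the claimed 15M
# The tiered example above: 100% of 1M plus 50% of 500K
print(compute_cap(1_000_000, 100) + compute_cap(500_000, 50))  # 1250000
```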

3. ClauseGuard ⚖️

Purpose: Detect contradictory clauses using text heuristics and optional Z3 verification.

The Problem

LLMs miss logical contradictions:
  • “Seller may terminate with 30 days notice”
  • “Neither party may terminate before 90 days”
These clauses conflict for days 30-90!

The Solution

The primary check_consistency() method uses text heuristics to detect conflicts. For formal logic verification, use verify_using_z3().
from qwed_legal import ClauseGuard

guard = ClauseGuard()

result = guard.check_consistency([
    "Seller may terminate with 30 days notice",
    "Neither party may terminate before 90 days",
    "Seller may terminate immediately upon breach"
])

print(result.consistent)  # False
print(result.conflicts)
# [(0, 1, "Termination notice (30 days) conflicts with minimum term (90 days)")]

Detection types

| Conflict Type | Description |
| --- | --- |
| Termination | Notice period vs minimum term |
| Permission/Prohibition | “May” vs “May not” |
| Exclusivity | Multiple exclusive rights |
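A toy version of the permission/prohibition heuristic: pair a clause granting “may X” with one stating “may not X” for the same action. This is only a sketch of the idea; the actual guard's rules are richer:

```python
import re

def may_conflicts(clauses: list[str]) -> list[tuple[int, int]]:
    """Flag clause pairs where one permits and another prohibits the same action."""
    grants, bans = {}, {}
    for i, text in enumerate(clauses):
        for action in re.findall(r"may not (\w+)", text.lower()):
            bans[action] = i
        for action in re.findall(r"may (?!not\b)(\w+)", text.lower()):
            grants[action] = i
    return [(grants[a], bans[a]) for a in grants if a in bans]

pairs = may_conflicts([
    "Buyer may assign this agreement",
    "Buyer may not assign this agreement without consent",
])
print(pairs)  # [(0, 1)]
```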

Z3-based verification

For power users who want to define precise logical constraints:
result = guard.verify_using_z3([
    "constraint_a",
    "constraint_b",
])

print(result.consistent)  # True if constraints are satisfiable
print(result.message)     # "✅ VERIFIED: Constraints are satisfiable."

4. CitationGuard 📚

Purpose: Verify legal citations are properly formatted and potentially real.

The Problem

The Mata v. Avianca scandal: Lawyers used ChatGPT, which cited 6 fake court cases. They were fined $5,000 and sanctioned.

The Solution

from qwed_legal import CitationGuard

guard = CitationGuard()

# Valid citation
result = guard.verify("Brown v. Board of Education, 347 U.S. 483 (1954)")
print(result.valid)  # True
print(result.parsed_components)
# {'volume': 347, 'reporter': 'U.S.', 'page': '483'}

# Invalid citation (fake reporter)
result = guard.verify("Smith v. Jones, 999 FAKE 123 (2020)")
print(result.valid)   # False
print(result.issues)  # ["Unknown reporter"]

Supported citation patterns

| Pattern | Format | Example |
| --- | --- | --- |
| US Supreme Court | volume U.S. page | 347 U.S. 483 |
| US Federal | volume F./F.2d/F.3d page | 500 F.3d 120 |
| UK Neutral | [year] court number | [2023] UKSC 10 |
| India AIR | AIR year court page | AIR 2020 SC 100 |

Batch Verification

result = guard.verify_batch([
    "Brown v. Board, 347 U.S. 483 (1954)",
    "Fake v. Case, 999 X.Y.Z. 123",
])

print(result.total)    # 2
print(result.valid)    # 1
print(result.invalid)  # 1

Statute Citations

result = guard.check_statute_citation("42 U.S.C. § 1983")
print(result.valid)  # True
print(result.parsed_components)
# {'title': 42, 'code': 'U.S.C.', 'section': '1983'}

5. JurisdictionGuard 🌍

Purpose: Verify choice of law and forum selection clauses.

The Problem

LLMs miss jurisdiction conflicts:
  • Governing law in one country, forum in another
  • Missing CISG applicability warnings
  • Cross-border legal system mismatches

The Solution

from qwed_legal import JurisdictionGuard

guard = JurisdictionGuard()

result = guard.verify_choice_of_law(
    parties_countries=["US", "UK"],
    governing_law="Delaware",
    forum="London"
)

print(result.verified)   # False - mismatch detected
print(result.conflicts)  # ["Governing law 'Delaware' (US state) but forum 'London' is non-US..."]

Features

| Feature | Description |
| --- | --- |
| Choice of Law | Validates governing law makes sense for parties |
| Forum Selection | Checks forum vs governing law alignment |
| CISG Detection | Warns about international sale-of-goods conventions |
| Convention Check | Verifies Hague, NY Convention applicability |

Convention Check

result = guard.check_convention_applicability(
    parties_countries=["US", "DE"],
    convention="CISG"
)
print(result.verified)  # True - both are CISG members

6. StatuteOfLimitationsGuard ⏰

Purpose: Verify claim limitation periods by jurisdiction.

The Problem

LLMs don’t track jurisdiction-specific limitation periods:
  • California breach of contract: 4 years
  • New York breach of contract: 6 years
  • Different periods for negligence, fraud, etc.

The Solution

from qwed_legal import StatuteOfLimitationsGuard

guard = StatuteOfLimitationsGuard()

result = guard.verify(
    claim_type="breach_of_contract",
    jurisdiction="California",
    incident_date="2020-01-15",
    filing_date="2026-06-01"
)

print(result.verified)          # False - 4 year limit exceeded!
print(result.expiration_date)   # 2024-01-15
print(result.days_remaining)    # -867 (negative = expired)

Supported jurisdictions

12 jurisdictions are supported with periods for 10 claim types.
| Jurisdiction | Breach of Contract | Negligence | Fraud |
| --- | --- | --- | --- |
| California | 4 years | 2 years | 3 years |
| New York | 6 years | 3 years | 6 years |
| Texas | 4 years | 2 years | 4 years |
| Delaware | 3 years | 2 years | 3 years |
| Florida | 5 years | 4 years | 4 years |
| Illinois | 5 years | 2 years | 5 years |
| UK/England | 6 years | 6 years | 6 years |
| Germany | 3 years | 3 years | 10 years |
| France | 5 years | 5 years | 5 years |
| Australia | 6 years | 6 years | 6 years |
| India | 3 years | 3 years | 3 years |
| Canada | 2 years | 2 years | 6 years |

Supported claim types

breach_of_contract, breach_of_warranty, negligence, professional_malpractice, fraud, personal_injury, property_damage, employment, product_liability, defamation
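At its core the check is date arithmetic over a jurisdiction table. A stdlib sketch of the idea, with a deliberately tiny table holding just two of the entries above (the guard's own table covers all 12 jurisdictions and 10 claim types):

```python
from datetime import date

# Hypothetical mini-table; periods match the jurisdiction table above
PERIODS_YEARS = {
    ("California", "breach_of_contract"): 4,
    ("New York", "breach_of_contract"): 6,
}

def expiration_date(jurisdiction: str, claim_type: str, incident: date) -> date:
    """Incident date plus the limitation period for that jurisdiction/claim type."""
    years = PERIODS_YEARS[(jurisdiction, claim_type)]
    return incident.replace(year=incident.year + years)

exp = expiration_date("California", "breach_of_contract", date(2020, 1, 15))
print(exp, exp >= date(2026, 6, 1))  # 2024-01-15 False -> claim expired before filing
```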

Compare Jurisdictions

comparison = guard.compare_jurisdictions(
    "breach_of_contract",
    ["California", "New York", "Delaware"]
)
# {'California': 4.0, 'New York': 6.0, 'Delaware': 3.0}

7. IRACGuard 📝

Purpose: Verify that legal reasoning follows the IRAC framework (Issue, Rule, Application, Conclusion).

The Problem

LLMs produce legal advice that lacks structured reasoning:
  • Missing clear identification of the legal issue
  • No citation of applicable rules or statutes
  • Conclusions without proper application of law to facts

The Solution

from qwed_legal import IRACGuard

guard = IRACGuard()

llm_output = """
Issue: Whether the defendant breached the employment contract.
Rule: Under California Labor Code § 2922, employment is presumed at-will.
Application: The defendant terminated employment without the 30-day notice 
required by the contract, which modified the at-will presumption.
Conclusion: The defendant breached the employment contract.
"""

result = guard.verify_structure(llm_output)

print(result["verified"])    # True
print(result["components"])  # {'issue': '...', 'rule': '...', 'application': '...', 'conclusion': '...'}

Detection Types

| Check | Description |
| --- | --- |
| Structure | Verifies all 4 IRAC components are present |
| Logical Disconnect | Detects when Application doesn’t reference the Rule |
| Missing Steps | Identifies which IRAC components are missing |
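The structure check can be approximated with labeled-section matching. An illustrative sketch (the real guard is more flexible about phrasing than this line-anchored pattern):

```python
import re

IRAC = ("issue", "rule", "application", "conclusion")

def missing_components(text: str) -> list[str]:
    """Return IRAC components that have no labeled 'Component:' section in the text."""
    return [c for c in IRAC
            if not re.search(rf"^\s*{c}\s*:", text, re.IGNORECASE | re.MULTILINE)]

print(missing_components("Issue: ...\nRule: ...\nConclusion: ..."))  # ['application']
```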

Error Response

result = guard.verify_structure("The defendant should pay damages.")

print(result["verified"])  # False
print(result["error"])     # "Failed Reasoned Elaboration. Missing steps: issue, rule, application, conclusion..."
print(result["missing"])   # ['issue', 'rule', 'application', 'conclusion']

8. FairnessGuard ⚖️

Purpose: Detect algorithmic bias using counterfactual testing.

The Problem

AI legal systems can exhibit bias based on protected attributes:
  • Different sentencing recommendations based on gender
  • Inconsistent contract assessments based on party names
  • Discriminatory loan approval reasoning

The Solution

from qwed_legal import FairnessGuard

# Requires an LLM client for counterfactual generation
guard = FairnessGuard(llm_client=my_llm)

result = guard.verify_decision_fairness(
    original_prompt="Should John Smith receive parole given his rehabilitation record?",
    original_decision="Parole recommended based on positive rehabilitation.",
    protected_attribute_swap={"John": "Jane", "his": "her"}
)

print(result["verified"])  # True if decision is consistent
print(result["status"])    # "FAIRNESS_VERIFIED" or "JUDICIAL_BIAS_DETECTED"

How It Works

  1. Early exit - If protected_attribute_swap is empty ({}), returns immediately with NO_SWAP_REQUIRED without calling the LLM
  2. Counterfactual Generation - Swaps protected attributes (names, pronouns) while preserving case
  3. Re-evaluation - Runs the modified prompt through the LLM
  4. Deterministic Comparison - Checks if outcomes match exactly
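Step 2's case-preserving swap can be sketched as follows (illustrative only; the attribute pairs and the helper name are hypothetical, not the library's API):

```python
import re

def swap_attributes(text: str, swaps: dict[str, str]) -> str:
    """Replace each attribute with its counterfactual, preserving capitalization."""
    for old, new in swaps.items():
        def repl(m, new=new):
            # Keep the original token's capitalization on the replacement
            return new.capitalize() if m.group(0)[0].isupper() else new.lower()
        text = re.sub(rf"\b{re.escape(old)}\b", repl, text, flags=re.IGNORECASE)
    return text

print(swap_attributes("Should John receive parole given his record?",
                      {"John": "Jane", "his": "her"}))
# Should Jane receive parole given her record?
```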

Detection Types

| Status | Description |
| --- | --- |
| FAIRNESS_VERIFIED | Decision unchanged after attribute swap |
| JUDICIAL_BIAS_DETECTED | Decision changed based on protected attributes |
| NO_SWAP_REQUIRED | No protected attributes to swap (empty dict passed) |

FairnessGuard requires an LLM client at initialization. Without it, verify_decision_fairness() will raise a ValueError.

9. ContradictionGuard 🔍

Purpose: Detect logical contradictions in contracts using Z3 SMT Solver.

The Problem

Contracts can contain mathematically impossible combinations:
  • “Liability capped at $10,000” + “Minimum penalty of $50,000”
  • “Term is exactly 12 months” + “Minimum duration of 24 months”
Text-based heuristics (ClauseGuard) miss these formal logic conflicts.

The Solution

from qwed_legal import ContradictionGuard, Clause

guard = ContradictionGuard()

clauses = [
    Clause(id="1", text="Liability capped at 10000", category="LIABILITY", value=10000),
    Clause(id="2", text="Penalty shall be 50000", category="LIABILITY", value=50000),
]

result = guard.verify_consistency(clauses)

print(result["verified"])  # False
print(result["message"])   # "❌ LOGIC CONTRADICTION: Clauses are mutually exclusive..."

Clause Structure

The Clause dataclass requires:
| Field | Type | Description |
| --- | --- | --- |
| id | str | Unique clause identifier |
| text | str | Human-readable clause text |
| category | str | DURATION, LIABILITY, or TERMINATION |
| value | int | Normalized numeric value (days, dollars, etc.) |

Supported Categories

| Category | Detects |
| --- | --- |
| DURATION | Conflicting term lengths (exact vs min/max) |
| LIABILITY | Cap vs penalty contradictions |
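In miniature, what the solver proves for these categories reduces to simple satisfiability conditions: an “exactly N” duration is compatible with a “minimum M” duration only if N >= M, and a fixed penalty is payable only if it fits under the cap. A plain-Python sketch of those conditions (the real guard encodes them as Z3 constraints over the Clause values):

```python
def duration_consistent(exact_months: int, min_months: int) -> bool:
    """'Term is exactly X months' is satisfiable with 'minimum Y months' only if X >= Y."""
    return exact_months >= min_months

def liability_consistent(cap: int, penalty: int) -> bool:
    """A fixed penalty can never exceed the liability cap."""
    return penalty <= cap

print(duration_consistent(12, 24))           # False: exactly 12 months < 24-month minimum
print(liability_consistent(10_000, 50_000))  # False: $50k penalty vs $10k cap
```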

Z3 vs ClauseGuard

| Feature | ClauseGuard | ContradictionGuard |
| --- | --- | --- |
| Input | Raw text strings | Structured Clause objects |
| Method | Text heuristics | Z3 SMT Solver |
| Detects | Permission conflicts | Mathematical impossibilities |
| Use Case | Quick checks | Formal verification |

10. ProvenanceGuard 🔗

Purpose: Verify AI-generated content carries proper provenance metadata and disclosure markers.

The Problem

AI transparency regulations (California CAITA 2026, EU AI Act Article 50) require AI-generated legal content to carry proper attribution. Without verification:
  • Content may lack required AI-generation disclosures
  • Provenance metadata can be incomplete or tampered with
  • Unauthorized models may generate legal documents without audit trails

The Solution

from qwed_legal import ProvenanceGuard

guard = ProvenanceGuard(
    require_disclosure=True,
    require_human_review=False,
    allowed_models=["gpt-4", "claude-3-opus"]
)

content = "This AI-generated document reviews the contract terms..."
provenance = {
    "content_hash": "a1b2c3...",  # SHA-256 of content
    "model_id": "gpt-4",
    "generation_timestamp": "2026-03-24T12:00:00+00:00",
}

result = guard.verify_provenance(content, provenance)

print(result["verified"])        # True or False
print(result["checks_passed"])   # ["metadata_completeness", "hash_integrity", ...]
print(result["checks_failed"])   # []
print(result["risk"])            # "" if verified, e.g. "CONTENT_TAMPERED" if not

Verification checks

ProvenanceGuard runs up to six checks. The first three always run; the last three are configurable.
| Check | Description | Always runs |
| --- | --- | --- |
| Metadata completeness | content_hash, model_id, and generation_timestamp are present and non-empty | Yes |
| Hash integrity | SHA-256 of the content matches content_hash in provenance | Yes |
| Timestamp validity | ISO-8601 format, not in the future | Yes |
| Disclosure compliance | Content includes an AI-generation disclosure statement | If require_disclosure=True |
| Model allowlist | model_id is in the approved list | If allowed_models is set |
| Human review | human_reviewed is True in provenance | If require_human_review=True |
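The hash-integrity check is plain SHA-256 over the content bytes, so producing a content_hash that will verify is a one-liner with the stdlib:

```python
import hashlib

def content_hash(content: str) -> str:
    """SHA-256 hex digest of the UTF-8 encoded content."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

content = "This AI-generated document reviews the contract terms..."
print(content_hash(content))  # 64-char hex digest; any edit to content changes it
```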

Constructor parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| require_disclosure | bool | True | Require AI disclosure text in the content (e.g., “AI-generated”, “produced by AI”). |
| require_human_review | bool | False | Require human_reviewed=True in provenance metadata. |
| allowed_models | list[str] or None | None | Allowlist of model IDs. None allows all models; an empty list denies all. |

Generating provenance records

You can also use ProvenanceGuard to generate provenance metadata:
from qwed_legal import ProvenanceGuard

guard = ProvenanceGuard()

record = guard.generate_provenance(
    content="This AI-generated contract summary...",
    model_id="gpt-4",
    disclosure_text="This document was generated by AI.",
    human_reviewed=True,
    reviewer_id="lawyer-42"
)

print(record.content_hash)           # SHA-256 hash
print(record.generation_timestamp)   # ISO-8601 UTC timestamp
print(record.human_reviewed)         # True

ProvenanceRecord fields

| Field | Type | Description |
| --- | --- | --- |
| content_hash | str | SHA-256 hash of the AI-generated content |
| model_id | str | Identifier of the model that generated the content |
| generation_timestamp | str | ISO-8601 timestamp of generation |
| disclosure_text | str | Human-readable AI disclosure statement |
| human_reviewed | bool | Whether a human has reviewed the content |
| reviewer_id | str or None | Identifier of the human reviewer |

Risk classifications

When verification fails, the risk field indicates the type of failure:
| Risk | Trigger |
| --- | --- |
| CONTENT_TAMPERED | Hash mismatch between content and content_hash |
| INCOMPLETE_PROVENANCE | Required metadata fields missing or empty |
| MISSING_DISCLOSURE | No AI-generation disclosure found in content |
| UNAUTHORIZED_MODEL | model_id not in the allowed models list |
| UNREVIEWED_CONTENT | human_reviewed is not True |
| INVALID_TIMESTAMP | Timestamp is malformed or in the future |

ProvenanceGuard is fully deterministic — no LLM calls required. All checks use SHA-256 hashing, regex pattern matching, and datetime validation.

SACProcessor (RAG Helper) 📄

Purpose: Prevent Document-Level Retrieval Mismatch (DRM) in legal RAG systems.

The Problem

Standard RAG chunking causes >95% retrieval mismatch in legal databases because:
  • Legal documents share nearly identical boilerplate
  • Chunk-level embeddings lose document context
  • NDAs, contracts, and agreements look alike at the chunk level

The Solution

from qwed_legal import SACProcessor

sac = SACProcessor(llm_client=my_llm)

# Your existing chunks
chunks = naive_split(contract_text)

# Augment with document fingerprint
augmented = sac.generate_sac_chunks(
    document_text=contract_text,
    chunks=chunks,
    document_id="NDA-2026-001"
)

# Each chunk now includes global context
print(augmented[0])
# DOCUMENT CONTEXT [NDA-2026-001]: NDA between Acme Corp and Beta Inc...
# CHUNK CONTENT [1/10]: Original chunk text here...

Configuration

| Parameter | Default | Description |
| --- | --- | --- |
| target_summary_length | 150 | Character limit for document fingerprint |
| preview_chars | 5000 | Max chars sent to LLM for summarization |

Methods

| Method | Description |
| --- | --- |
| generate_sac_chunks() | Augment all chunks with document fingerprint |
| generate_fingerprint_only() | Get just the fingerprint for caching |

SACProcessor requires an LLM client. Generic (automated) summaries outperform expert-guided ones for retrieval.
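Only the fingerprint itself needs an LLM; the augmentation step is plain string templating. A sketch of the chunk layout, with template wording modeled on the example output above (illustrative, not the library's internals):

```python
def augment_chunks(fingerprint: str, chunks: list[str], document_id: str) -> list[str]:
    """Prefix every chunk with the document-level fingerprint for retrieval context."""
    total = len(chunks)
    return [
        f"DOCUMENT CONTEXT [{document_id}]: {fingerprint}\n"
        f"CHUNK CONTENT [{i}/{total}]: {chunk}"
        for i, chunk in enumerate(chunks, start=1)
    ]

out = augment_chunks("NDA between Acme Corp and Beta Inc",
                     ["Clause one...", "Clause two..."], "NDA-2026-001")
print(out[0].splitlines()[0])  # DOCUMENT CONTEXT [NDA-2026-001]: NDA between ...
```

Because every chunk now carries the same document fingerprint, chunk-level embeddings retain the document identity that naive splitting throws away.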

All-in-One: LegalGuard

For convenience, use the unified LegalGuard class:
from qwed_legal import LegalGuard

# Optional: provide llm_client for FairnessGuard
guard = LegalGuard(
    llm_client=my_llm,
    provenance_config={
        "require_disclosure": True,
        "require_human_review": False,
        "allowed_models": ["gpt-4", "claude-3-opus"],
    }
)

# All 10 guards available
guard.verify_deadline(...)
guard.verify_liability_cap(...)
guard.check_clause_consistency(...)          # ClauseGuard (text heuristics)
guard.verify_citation(...)
guard.verify_jurisdiction(...)
guard.verify_statute_of_limitations(...)
guard.verify_irac_structure(...)             # v0.3.0
guard.verify_fairness(...)                   # v0.3.0 (requires llm_client)
guard.verify_contradiction(...)              # v0.3.0 (Z3 SMT Solver)
guard.verify_provenance(content, provenance) # NEW in v0.4.0
Most guards are fully deterministic. Only verify_fairness() requires an LLM client.

Next Steps