QWED works with any LLM provider. The fastest way to get configured is the onboarding command:
qwed init
This checks that your verification engines are operational, walks you through provider selection and secure key entry, and bootstraps a local API key — all in a single command. For CI/CD pipelines, pass --non-interactive together with the --provider and --api-key flags. See the CLI reference for the full walkthrough.
The rest of this page covers manual configuration and advanced options.
Understanding QWED’s architecture
QWED uses LLMs as untrusted translators, not as answer generators:
┌────────────────────────────────────────────────────────────────────┐
│                     QWED VERIFICATION PIPELINE                     │
├────────────────────────────────────────────────────────────────────┤
│                                                                    │
│   User Query      LLM (Translator)    QWED (Verifier)     Result   │
│       │                  │                   │               │     │
│       ▼                  ▼                   ▼               ▼     │
│  "Is 2+2=5?"  →       "2+2=5"    →   [SymPy: 2+2=4]  →  ❌ CORRECTED: 4 │
│                                                                    │
│    Your LLM          Untrusted         Our Engines    Deterministic│
│  (any provider)                         (Trusted)       Guarantee  │
│                                                                    │
└────────────────────────────────────────────────────────────────────┘
Key Insight: The LLM translates natural language to structured form. QWED then verifies the structured form using deterministic engines. The LLM can be wrong — QWED catches and corrects errors.
Supported LLM providers
| Provider | Env variable | Default model | qwed init support |
|---|---|---|---|
| NVIDIA NIM | CUSTOM_API_KEY + CUSTOM_BASE_URL | nvidia/nemotron-3-super-120b-a12b | Yes |
| OpenAI | OPENAI_API_KEY | gpt-4o-mini | Yes |
| Anthropic | ANTHROPIC_API_KEY | claude-sonnet-4-20250514 | Yes |
| Google Gemini | GOOGLE_API_KEY / GEMINI_API_KEY | gemini-1.5-pro | Yes |
| Ollama (local) | OLLAMA_BASE_URL | llama3 | Yes |
| OpenAI-compatible | CUSTOM_BASE_URL + CUSTOM_API_KEY | gpt-4o-mini | Yes |
| Azure OpenAI | AZURE_OPENAI_ENDPOINT + AZURE_OPENAI_API_KEY | Any Azure-hosted model | Manual |
| Claude Opus | CLAUDE_OPUS_API_KEY | claude-opus-4-5 | Manual |
| Custom (YAML) | Defined per provider | Defined per provider | Yes |
You can add any OpenAI-compatible provider (Groq, Together, Fireworks, and others) using a YAML configuration file. See Custom providers for details.
Configuration Options
Option 1: Use QWED’s Built-in Translation (Recommended)
QWED can handle LLM translation internally:
from qwed import QWEDClient
# QWED uses its own LLM for translation
client = QWEDClient(api_key="qwed_your_key")
result = client.verify("What is 15% of 200?")
# QWED internally: "15% of 200" → 0.15 * 200 → verify with SymPy → 30
Option 2: Bring Your Own LLM
Use QWED purely as a verification layer:
from qwed import QWEDClient
import openai
# Your LLM call
openai_client = openai.OpenAI(api_key="sk-...")
response = openai_client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What is 847 × 23?"}],
)
llm_answer = response.choices[0].message.content

# QWED verification only (no LLM needed)
qwed = QWEDClient(api_key="qwed_your_key")
result = qwed.verify_math(
    expression="847 * 23",
    expected_result=llm_answer,
)

if result.verified:
    print(f"✅ LLM was correct: {llm_answer}")
else:
    print(f"❌ LLM was wrong. Correct: {result.corrected}")
Option 3: Self-Hosted with Custom LLM
For self-hosted deployments, run qwed init or manually create a .env file:
# .env file — generated by `qwed init` or created manually
# WARNING: Contains secrets. NEVER commit this file.
# Active provider (openai, anthropic, ollama, openai_compat, gemini, azure_openai)
ACTIVE_PROVIDER=openai
# OpenAI
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4o-mini
# OR Anthropic
# ANTHROPIC_API_KEY=sk-ant-...
# ANTHROPIC_MODEL=claude-sonnet-4-20250514
# OR Ollama (no key needed)
# OLLAMA_BASE_URL=http://localhost:11434/v1
# OLLAMA_MODEL=llama3
# OR Google Gemini
# GOOGLE_API_KEY=your-google-api-key
# GEMINI_MODEL=gemini-1.5-pro
# OR OpenAI-compatible endpoint (Groq, Together, LM Studio, etc.)
# CUSTOM_BASE_URL=https://inference.do-ai.run/v1
# CUSTOM_API_KEY=your-key
# CUSTOM_MODEL=gpt-4o-mini
When ACTIVE_PROVIDER is not set, QWED defaults to Ollama as a safe local fallback. Set ACTIVE_PROVIDER explicitly if you want to use a cloud provider.
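The fallback behavior can be sketched roughly like this (the function name is hypothetical; the real resolution logic lives inside QWED):

```python
import os

def resolve_active_provider() -> str:
    """Illustrative sketch: fall back to Ollama when no provider is set."""
    provider = os.environ.get("ACTIVE_PROVIDER", "").strip()
    # Default to the local Ollama provider when nothing is configured,
    # so no cloud credentials are ever required implicitly.
    return provider.lower() if provider else "ollama"
```

Defaulting to a local provider means a fresh install never sends data to a cloud API by accident.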
QWED loads .env files in a specific order: the project-level .env (in the current working directory) takes precedence, followed by the global ~/.qwed/.env. This ensures your project-specific configuration always overrides global defaults. If python-dotenv is not installed, .env loading is skipped gracefully with a warning.
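That precedence can be sketched with a hand-rolled KEY=VALUE parser (a simplification of what python-dotenv does, not QWED's actual loader):

```python
from pathlib import Path

def parse_env(path: Path) -> dict:
    """Parse a minimal KEY=VALUE .env file, skipping comments and blanks."""
    result = {}
    if not path.exists():
        return result
    for line in path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        result[key.strip()] = value.strip()
    return result

def load_config(project_dir: Path, home: Path) -> dict:
    """Global ~/.qwed/.env first, then project-level .env overrides it."""
    config = parse_env(home / ".qwed" / ".env")
    config.update(parse_env(project_dir / ".env"))
    return config
```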
Provider-specific setup
You can configure any of these providers interactively by running qwed init.
OpenAI
export ACTIVE_PROVIDER=openai
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_MODEL=gpt-4o-mini
Key format: sk-... or sk-proj-... — get yours at platform.openai.com/api-keys.
Anthropic (Claude)
export ACTIVE_PROVIDER=anthropic
export ANTHROPIC_API_KEY=sk-ant-your-key-here
export ANTHROPIC_MODEL=claude-sonnet-4-20250514
Key format: sk-ant-... — get yours at console.anthropic.com/settings/keys.
Local LLMs (Ollama)
No API key needed. Install Ollama, pull a model, and configure QWED:
# Start the Ollama server (install it first from ollama.com)
ollama serve
# Pull a model
ollama pull llama3
# Configure QWED
export ACTIVE_PROVIDER=ollama
export OLLAMA_BASE_URL=http://localhost:11434/v1
export OLLAMA_MODEL=llama3
OpenAI-compatible endpoint
For any service with an OpenAI-compatible API — DigitalOcean, Groq, Together AI, LM Studio, vLLM, and others:
export ACTIVE_PROVIDER=openai_compat
export CUSTOM_BASE_URL=https://inference.do-ai.run/v1
export CUSTOM_API_KEY=your-api-key
export CUSTOM_MODEL=gpt-4o-mini
Only CUSTOM_BASE_URL is required. If your endpoint does not require authentication (for example, a local vLLM or LM Studio server), you can omit CUSTOM_API_KEY entirely — QWED automatically supplies a placeholder token so the underlying client initializes correctly.
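A sketch of that placeholder behavior (the actual token QWED substitutes is an internal detail; "not-needed" here is illustrative):

```python
import os

def build_auth_header() -> dict:
    """Build the Authorization header for an OpenAI-compatible endpoint."""
    # Local servers such as vLLM or LM Studio accept any bearer token,
    # so a placeholder keeps the client library happy when no key is set.
    api_key = os.environ.get("CUSTOM_API_KEY") or "not-needed"
    return {"Authorization": f"Bearer {api_key}"}
```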
Google Gemini
Gemini uses a native GeminiProvider powered by the google-generativeai SDK. Install the dependency first:
pip install google-generativeai
Then configure your environment:
export ACTIVE_PROVIDER=gemini
export GOOGLE_API_KEY=your-google-api-key
export GEMINI_MODEL=gemini-1.5-pro
You can also use GEMINI_API_KEY instead of GOOGLE_API_KEY. Get yours at aistudio.google.com/app/apikey.
The Gemini provider supports math translation, logic verification, stats query generation, fact verification, and image claim verification. All API calls use a 30-second timeout and temperature=0.0 for deterministic output. If google-generativeai is not installed, QWED returns a structured ImportError instead of crashing.
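The graceful-failure pattern can be sketched like this (the error shape is illustrative, not QWED's exact schema):

```python
import importlib

def load_sdk(module_name: str):
    """Return (module, None) on success or (None, error_dict) if missing."""
    try:
        return importlib.import_module(module_name), None
    except ImportError:
        # Return a structured error the caller can surface, rather than
        # letting the ImportError crash the verification pipeline.
        return None, {
            "error": "ImportError",
            "detail": f"{module_name} is not installed",
            "hint": f"pip install {module_name.replace('.', '-')}",
        }
```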
Azure OpenAI
export ACTIVE_PROVIDER=azure_openai
export AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
export AZURE_OPENAI_API_KEY=your-azure-key
export AZURE_OPENAI_DEPLOYMENT=your-deployment-name
export AZURE_OPENAI_API_VERSION=2024-02-15-preview
Claude Opus
export ACTIVE_PROVIDER=claude_opus
export CLAUDE_OPUS_API_KEY=sk-ant-your-key
export CLAUDE_OPUS_DEPLOYMENT=claude-opus-4-5
Universal provider config (YAML)
You can define custom LLM providers using a YAML configuration file at ~/.qwed/providers.yaml. This is useful for managing multiple providers, custom endpoints, or community-contributed provider configs.
providers:
  my-custom-llm:
    base_url: "https://api.example.com/v1"
    api_key_env: "MY_CUSTOM_API_KEY"
    default_model: "my-model-v2"
    models_endpoint: "/models"
    auth_header: "Authorization"
    auth_prefix: "Bearer"
| Field | Required | Default | Description |
|---|---|---|---|
| base_url | Yes | — | The provider’s API base URL |
| api_key_env | Yes | — | Environment variable name containing the API key |
| default_model | No | gpt-4o-mini | Default model to use |
| models_endpoint | No | /models | Endpoint for listing available models |
| auth_header | No | Authorization | HTTP header name for authentication |
| auth_prefix | No | Bearer | Prefix for the auth header value |
The YAML file is written with 0600 permissions (owner-only) for security. The ~/.qwed/ directory is created with 0700 permissions.
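Writing a secrets file with owner-only permissions can be done like this in plain Python (a sketch of the pattern, not QWED's internal writer; note that os.open only applies the mode when the file is created):

```python
import os
from pathlib import Path

def write_secret_file(path: Path, content: str) -> None:
    """Create (or truncate) a file that only the owner can read and write."""
    path.parent.mkdir(parents=True, exist_ok=True, mode=0o700)
    # os.open lets us set the 0600 mode atomically at creation time,
    # instead of creating world-readable and chmod-ing afterwards.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(content)
```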
You can import provider configurations from a URL:
from qwed_sdk.cli import import_provider
# Import a community-contributed provider config
import_provider("https://example.com/my-provider.yaml")
Or use the CLI:
qwed provider import https://example.com/my-provider.yaml
Imported providers are validated and sandboxed — only the allowed fields listed above are saved. Provider slugs are sanitized and cannot shadow built-in providers.
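A sketch of that field sandboxing, assuming the allowed-field set from the table above:

```python
ALLOWED_FIELDS = {
    "base_url", "api_key_env", "default_model",
    "models_endpoint", "auth_header", "auth_prefix",
}

def sandbox_provider(config: dict) -> dict:
    """Keep only whitelisted fields from an imported provider config."""
    # Anything outside the allowed set (unexpected keys, hooks, scripts
    # smuggled into a community YAML) is silently dropped.
    return {k: v for k, v in config.items() if k in ALLOWED_FIELDS}
```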
Key validation
QWED validates API keys in two stages:
- Format check — Regex-based pattern matching (no network call). Built-in patterns include
sk-... for OpenAI and sk-ant-... for Anthropic.
- Connection test — Lightweight read-only request to the provider’s models endpoint to confirm the key works.
Run qwed init to validate your keys interactively, or test programmatically:
from qwed_sdk.cli import test_provider_connection
success, message = test_provider_connection("openai")
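The format stage amounts to a regex match; here is a sketch using the two built-in prefixes mentioned above (the exact patterns QWED ships are internal, and these lengths are illustrative):

```python
import re

# Illustrative patterns: known key prefix plus a run of allowed characters.
KEY_PATTERNS = {
    "openai": re.compile(r"sk-[A-Za-z0-9_-]{20,}"),
    "anthropic": re.compile(r"sk-ant-[A-Za-z0-9_-]{20,}"),
}

def check_key_format(provider: str, key: str) -> bool:
    """Stage 1: offline pattern check; no network call is made."""
    pattern = KEY_PATTERNS.get(provider)
    return bool(pattern and pattern.fullmatch(key))
```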
Provider routing
QWED automatically routes queries to the appropriate provider based on your configuration and query content.
Alias normalization
Provider names are normalized before routing, so common variations like openai-compatible, openai_compatible, and openai_compat all resolve to the same OpenAI-compatible provider. You do not need to worry about exact casing or separators when specifying a provider in API requests or environment variables.
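A sketch of that normalization (the alias table here is illustrative, covering only the variants named above):

```python
def normalize_provider(name: str) -> str:
    """Lowercase, unify separators, then resolve known aliases."""
    slug = name.strip().lower().replace("-", "_")
    # Long-form aliases collapse to the canonical short slug.
    aliases = {"openai_compatible": "openai_compat"}
    return aliases.get(slug, slug)
```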
Content-aware routing
When no preferred provider is specified in a request, QWED uses the configured default. For certain query types, QWED applies content-aware heuristics:
| Query type | Routed to |
|---|---|
| Math/logic keywords (calculate, solve, equation, proof) | Your configured default provider |
| Creative/writing keywords (write, compose, essay, story) | Anthropic (Claude) |
| All other queries | Your configured default provider |
You can always override routing by passing an explicit provider parameter in your API request.
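The heuristic can be sketched as a keyword scan (keyword lists taken from the table above; the word-boundary matching is a simplification):

```python
from typing import Optional

CREATIVE_KEYWORDS = {"write", "compose", "essay", "story"}

def route_query(query: str, default_provider: str,
                preferred: Optional[str] = None) -> str:
    """Pick a provider: an explicit override wins, then content heuristics."""
    if preferred:
        return preferred
    words = set(query.lower().split())
    # Creative/writing queries are steered to Anthropic; everything else,
    # including math/logic, goes to the configured default.
    if words & CREATIVE_KEYWORDS:
        return "anthropic"
    return default_provider
```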
Programmatic Configuration
from qwed import QWEDClient, LLMConfig
# Option 1: Use environment variables (recommended)
client = QWEDClient()
# Option 2: Explicit configuration
client = QWEDClient(
    api_key="qwed_your_key",
    llm_config=LLMConfig(
        provider="openai",
        api_key="sk-...",
        model="gpt-4o",
        temperature=0.0,
    ),
)
# Option 3: No LLM (verification only)
client = QWEDClient(
    api_key="qwed_your_key",
    llm_enabled=False,  # Only use deterministic verification
)
Translation vs Verification
Understanding the two phases:
| Phase | What Happens | Who Does It | Required? |
|---|---|---|---|
| Translation | Natural language → Structured form | LLM (any) | Optional |
| Verification | Structured form → Proof | QWED Engines | Required |
When You Need an LLM
client.verify("Is the derivative of x² equal to 2x?") — Needs LLM to parse
client.verify("Calculate compound interest on $1000 at 5%") — Needs LLM
When You Don’t Need an LLM
client.verify_math("diff(x**2, x) == 2*x") — Already structured
client.verify_logic("(AND (GT x 5) (LT x 10))") — Already in DSL
client.verify_sql("SELECT * FROM users") — Already structured
client.verify_code("import os; os.system('rm -rf /')") — Code, not NL
FAQ
Do I need an LLM to use QWED?
No. If you’re sending structured queries (math expressions, SQL, code, QWED-Logic DSL), you don’t need an LLM. QWED engines work directly on structured input.
Can I use my own LLM and just use QWED for verification?
Yes. This is the “Bring Your Own LLM” pattern. Call your LLM, then pass its output to QWED for verification.
Which LLM is best for QWED translation?
For translation accuracy, we recommend:
- GPT-4o (best)
- Claude 3 Opus
- Gemini Pro
- GPT-3.5-turbo (good for simple queries)
Is the LLM translation deterministic?
We set temperature=0 for reproducibility, but LLMs are inherently probabilistic. That’s why QWED verification is essential — it provides the determinism guarantee.
Next steps