Deterministic Reasoning Runtime for AI
Middleware between your model and app that enforces policies, captures justifications, and produces audit-ready outputs.
The Problem
Modern AI systems excel at generating fluent answers but fail at showing their work. That opacity lets hallucinations slip through undetected and creates compliance risk. Enterprises need audit-ready reasoning—not just plausible text. Without provenance and reproducibility, AI outputs can't be trusted in regulated contexts.
What is VertRule
VertRule is deterministic reasoning middleware that sits between AI models and your application. It captures every decision, enforces your rules, and produces audit-ready artifacts—ensuring your AI systems are transparent, compliant, and trustworthy.
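In code terms, the pattern looks roughly like the sketch below. The names (govern, GovernedResult) and the policy shape are illustrative assumptions, not VertRule's actual API; the point is that the policy checks and the justification trail live in the same wrapper as the model call.

```python
from dataclasses import dataclass, field
from typing import Callable

# An illustrative policy: takes a model output, returns (passed, reason).
Policy = Callable[[str], tuple[bool, str]]

@dataclass
class GovernedResult:
    output: str | None                 # accepted output, or None if rejected
    accepted: bool                     # True only if every policy passed
    justifications: list[dict] = field(default_factory=list)  # the evidence trail

def govern(model_call: Callable[[str], str], policies: list[Policy]):
    """Wrap an existing model call so every result is checked and explained."""
    def governed(prompt: str) -> GovernedResult:
        output = model_call(prompt)
        trail = []
        for policy in policies:
            passed, reason = policy(output)
            trail.append({"policy": policy.__name__, "passed": passed, "reason": reason})
            if not passed:
                return GovernedResult(output=None, accepted=False, justifications=trail)
        return GovernedResult(output=output, accepted=True, justifications=trail)
    return governed
```

Because the trail is built as the policies run, the evidence ships with the result instead of being reconstructed after the fact.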
Built for teams that need clarity and control
Deterministic controls
Same inputs + policies yield the same decision.
Evidence on every result
Outputs are designed to carry a justification trail.
Policy guardrails
Apply business and compliance rules before results leave the runtime.
Audit-ready by default
Export artifacts suitable for compliance and review.
Model-agnostic
Works across providers and on-prem.
Minimal friction
Drop-in wrapper; keep your stack.
What you'll gain
Clear reasoning paths
Designed to attach explanations to every important action.
Operational visibility
See what ran, why it passed, and who approved.
Streamlined reviews
Verifiable artifacts to support quicker sign-offs.
Edge-level protection
Designed to catch policy violations before they reach production.
Built with the stakes in mind: regulatory fines of up to 7% of global revenue demand better guardrails.
Product snapshot
Controls
Run policies as code before results are accepted — designed to align with governance and compliance needs (see the sketch below the snapshot).
Justifications
Machine-readable context explaining each accepted result, supporting audit and review.
Artifacts
Compact bundles for audit, QA, and incident review — built to withstand external scrutiny.
Integrations
HTTP, queue, and CLI entry points; language-agnostic clients; MCP server (planned) for editor/agent workflows; deployable on-prem or in VPCs.
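To make the snapshot concrete, here is a hedged sketch of how a policy written as code and the resulting artifact bundle might fit together. The policy name, fields, and bundle layout are illustrative, not a published VertRule schema.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative policy-as-code check: numeric claims must reference a source.
def require_cited_figures(output: str) -> dict:
    has_figures = any(ch.isdigit() for ch in output)
    has_citation = "source:" in output.lower()
    return {
        "policy": "require_cited_figures",
        "version": "1.0.0",
        "passed": (not has_figures) or has_citation,
        "reason": "numeric claims must reference a source",
    }

def build_artifact(prompt: str, output: str, decisions: list[dict]) -> str:
    """Bundle hashes, decisions, and a timestamp into a compact review artifact."""
    bundle = {
        "input_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "decisions": decisions,
        "accepted": all(d["passed"] for d in decisions),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(bundle, indent=2)

answer = "Revenue grew 12% (source: Q3 filing)."
print(build_artifact("Summarize Q3 revenue.", answer, [require_cited_figures(answer)]))
```

Because the record is derived from content hashes and versioned policies, the same inputs and policies reproduce the same hashes and decisions, which is the property the deterministic-controls claim relies on.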
FAQ
Is this an LLM or a framework?
Neither. VertRule is middleware that governs and explains your AI calls.
Will it lock us in?
No. Keep your models and infrastructure. VertRule focuses on control and evidence.
How heavy is integration?
You can wrap existing calls or route through a small gateway. Start with a single workflow and expand gradually.
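As an illustration of the gateway path, the snippet below assumes a hypothetical VertRule-style gateway running on localhost. The endpoint, payload fields, and policy names are placeholders rather than a documented API.

```python
import requests

# Instead of calling the model provider directly, the application routes the
# request through a local governing gateway that applies policies and returns
# the decision alongside the output.
def ask_governed(prompt: str) -> dict:
    resp = requests.post(
        "http://localhost:8080/v1/governed-completions",  # hypothetical endpoint
        json={
            "workflow": "support-triage",                  # start with one workflow
            "prompt": prompt,
            "policies": ["no_pii", "require_citation"],    # illustrative policy names
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"output": ..., "accepted": true, "justifications": [...]}

result = ask_governed("Summarize the customer's refund request.")
if result["accepted"]:
    print(result["output"])
```

Wrapping one workflow this way leaves the rest of the stack untouched, which is what keeps the initial integration small.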
What about PII/security?
You choose what is logged or redacted. On-prem and VPC-only deployment modes are planned.
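A minimal sketch of what field-level control over logging could look like, assuming you decide which fields are kept, masked, or dropped; the field names and the masking rule are placeholders.

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact_for_log(record: dict, keep: set, mask: set) -> dict:
    """Decide per field what is logged verbatim, masked, or dropped entirely."""
    safe = {}
    for key, value in record.items():
        if key in keep:
            safe[key] = value                                     # logged as-is
        elif key in mask:
            safe[key] = EMAIL.sub("[redacted-email]", str(value))  # masked before storage
        else:
            safe[key] = "[omitted]"                               # never leaves the environment
    return safe

print(redact_for_log(
    {"prompt": "Email jane.doe@example.com about her claim", "decision": "accepted", "raw_output": "..."},
    keep={"decision"},
    mask={"prompt"},
))
```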
Does this help with regulatory requirements like the EU AI Act?
We're building VertRule with governance in mind. Our goal is to make it easier to avoid costly missteps—like the EU AI Act's potential fines of up to 7% of global revenue. We're not claiming compliance features yet, but our design choices (transparent justifications, audit artifacts, policy guardrails) are aligned with what regulators are signaling.
Where does MCP (Model Context Protocol) fit in?
We're aligning VertRule with MCP so the runtime can be invoked as a tool from editors, terminals, and agent runners. The intent is to expose controls, justifications, and artifact export through an MCP server, keeping reasons and policy checks consistent wherever prompts originate. (Planned; not generally available yet.)
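A hypothetical sketch of that MCP server, written against the MCP Python SDK's FastMCP interface; since the server is still planned, the tool name and return shape below are assumptions.

```python
from mcp.server.fastmcp import FastMCP

# Hypothetical sketch of a VertRule MCP server: the same policy checks that
# guard the HTTP entry point are exposed as a tool for editors and agent runners.
mcp = FastMCP("vertrule")

@mcp.tool()
def check_output(output: str, workflow: str) -> dict:
    """Run the policies configured for a workflow against a model output."""
    # A real server would call the runtime; this stub returns a fixed decision.
    return {"workflow": workflow, "accepted": True, "justifications": []}

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, so editors can launch it as a subprocess
```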
Want deterministic AI in production?
We're onboarding a limited set of teams in regulated and enterprise environments.