A Software Architecture for Deterministic Post-Generation Validation of LLMs

Martin Russmann
Abstract
Large Language Models (LLMs) produce linguistically plausible output by optimizing next-token prediction rather than factual correctness. This creates a structural reliability gap in operational settings where outputs must satisfy domain rules, numeric constraints, and external ground truth. Logic-Guard-Layer (LGL) is a software architecture designed to close that gap without modifying the underlying model. The architecture places a deterministic validation layer between model output and downstream use. It extracts structured claims from free-form text, validates them against formal constraints and authoritative data sources, classifies validation outcomes using a six-state decision model, and optionally triggers controlled repair while monitoring semantic drift. The central value of the architecture lies in its separation of concerns: probabilistic language generation remains isolated from rule enforcement, source interpretation, and audit logging. This document presents LGL as a reference architecture for enterprise and high-stakes LLM deployments, with emphasis on component boundaries, request flow, deployment patterns, operational safeguards, and acceptance criteria.
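The pipeline the abstract describes (extract structured claims, validate them deterministically, classify the outcome) can be sketched as follows. This is a minimal illustration only: the class names, the six state labels, and the numeric-range check are all assumptions made for the example, not part of the published LGL architecture.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ValidationState(Enum):
    # The abstract specifies a six-state decision model; these particular
    # labels are assumed here purely for illustration.
    VALID = auto()
    INVALID = auto()
    UNVERIFIABLE = auto()
    AMBIGUOUS = auto()
    REPAIRED = auto()
    ESCALATED = auto()

@dataclass
class Claim:
    text: str     # claim extracted from free-form model output
    value: float  # numeric assertion to validate

def validate_claim(claim: Claim, lower: float, upper: float) -> ValidationState:
    """Deterministically check a numeric claim against a formal constraint.

    The rule engine runs outside the model, so the probabilistic generator
    stays isolated from rule enforcement, as the architecture intends.
    """
    if lower <= claim.value <= upper:
        return ValidationState.VALID
    return ValidationState.INVALID

# Example: a latency claim checked against a hypothetical 0-100 ms constraint.
state = validate_claim(Claim("latency is 120 ms", 120.0), 0.0, 100.0)
```

In a full deployment, an `INVALID` outcome would trigger the controlled-repair loop mentioned in the abstract, with the repaired output re-validated and monitored for semantic drift before release.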