Simple version
We found a tiny hidden change inside an AI model that made it answer some questions differently.
In plain terms
Two versions of the same model looked almost identical, but one behaved differently on a targeted set of prompts. We compared the model files tensor by tensor, ran a frozen set of behavioural tests, and checked whether reversing only the suspicious change restored the original answers.
Technical artifact
A paired-checkpoint tamper analysis using target-blind structural comparison of the eligible 2-D weights, pre-registered behavioural probes, and a one-tensor counterfactual revert, packaged as a Tier-1 signed bundle.
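The comparison and revert steps can be sketched in miniature. This is a toy illustration, not the actual tooling: the tensor names and NumPy arrays are hypothetical stand-ins (real checkpoints would be framework state dicts), and the behavioural-probe stage is omitted.

```python
import numpy as np

def diff_checkpoints(base, suspect, ndim=2):
    """Target-blind structural comparison: report which eligible
    2-D weight tensors differ between the two checkpoints."""
    changed = []
    for name, w in base.items():
        if w.ndim != ndim:
            continue  # only 2-D weights are eligible for comparison
        if not np.array_equal(w, suspect[name]):
            changed.append(name)
    return changed

def revert_one_tensor(suspect, base, name):
    """One-tensor counterfactual revert: copy back only the
    suspicious tensor, leaving everything else untouched."""
    reverted = {k: v.copy() for k, v in suspect.items()}
    reverted[name] = base[name].copy()
    return reverted

# Toy paired checkpoints (hypothetical tensor names)
base = {"layer1.weight": np.ones((2, 2)), "layer1.bias": np.zeros(2)}
suspect = {k: v.copy() for k, v in base.items()}
suspect["layer1.weight"][0, 0] = 5.0  # the simulated tamper

changed = diff_checkpoints(base, suspect)
restored = revert_one_tensor(suspect, base, changed[0])
# After the one-tensor revert, the checkpoints match again
assert all(np.array_equal(restored[k], base[k]) for k in base)
```

In the real analysis, the revert is followed by re-running the frozen behavioural probes: if reverting that single tensor restores the original answers, the change is implicated.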