The techniques that compromise human decision-making and the techniques that compromise AI agents share the same structural patterns. This is not a metaphor. It is an empirical observation with immediate security implications.
In 2020, Dezfouli et al. published a paper in PNAS demonstrating that adversarial attacks could be constructed against human decision-making using the same mathematical framework used in adversarial machine learning. The implication was precise: cognitive systems that process sequential information are vulnerable to sequential manipulation, regardless of their substrate.
Five years later, the implication has materialized. Autonomous AI agents are being compromised by techniques structurally identical to those used in human influence operations. The substrates differ. The attack geometry does not.
Each row describes the same manipulation pattern instantiated on different substrates. The human intelligence officer building a recruitment pipeline and the adversary constructing a jailbreak sequence are solving the same problem: how to move a cognitive system from state A to state B without triggering its defensive responses.
The AI agent security field and the influence operations field developed independently. AI security researchers study prompt injection, jailbreaks, and alignment failures. Intelligence analysts study propaganda, deception, and coercive persuasion. They publish in different journals, attend different conferences, and use different vocabularies.
They are studying the same phenomenon.
The convergence point is Friston's free energy principle. Both human cognition and language model inference can be modeled as prediction error minimization. An adversary manipulating either system is constructing inputs that minimize the target's prediction error while maximizing the adversary's objective function. The math is identical. The defenses should be too.
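The shared objective can be made concrete with a toy model. The sketch below is illustrative only (the scoring function, weights, and numbers are assumptions, not Seithar tooling or Friston's formalism): the adversary scores candidate inputs by how far they advance its goal while remaining unsurprising to the target, i.e. keeping the target's prediction error low.

```python
def prediction_error(expected: float, observed: float) -> float:
    # Squared surprise: how far an input sits from the target's prediction.
    return (expected - observed) ** 2

def adversarial_score(candidate: float, target_prediction: float,
                      adversary_goal: float, lam: float = 1.0) -> float:
    # An input scores well when it is unsurprising to the target
    # (low prediction error) while moving state toward the adversary's goal.
    surprise = prediction_error(target_prediction, candidate)
    progress = -abs(adversary_goal - candidate)
    return progress - lam * surprise

# The adversary picks the input that best trades stealth against progress:
# not the most effective input, but the most effective input that still
# looks expected.
candidates = [0.0, 0.25, 0.5, 0.75, 1.0]
best = max(candidates, key=lambda c: adversarial_score(
    c, target_prediction=0.2, adversary_goal=1.0))
```

The trade-off is the point: the winning candidate is never the adversary's goal itself, because that input would be maximally surprising and trigger defenses. It is an intermediate step, which is why manipulation on both substrates unfolds as a sequence.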
If the attack patterns are substrate-independent, then the defensive frameworks must be as well. This means:
Intelligence analysis is relevant to AI security. Decades of counter-intelligence methodology for detecting human manipulation apply directly to detecting AI agent compromise. Behavioral baselines, anomaly detection, temporal pattern analysis, intent inference from sequential behavior. None of this is new. It has just not been applied to AI systems.
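Behavioral baselining of this kind is straightforward to operationalize against an agent's telemetry. A minimal sketch, assuming a single scalar metric per session (the metric, threshold, and numbers are illustrative assumptions): flag any session that drifts more than a few standard deviations from the agent's established baseline.

```python
from statistics import mean, stdev

def is_anomalous(baseline: list[float], observed: float,
                 threshold: float = 3.0) -> bool:
    # Flag an observation that deviates from the behavioral baseline
    # by more than `threshold` standard deviations.
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Baseline: tool calls per session for an agent during normal operation.
baseline = [4, 5, 5, 6, 4, 5, 6, 5]
```

A session with 5 tool calls passes; one with 19 is flagged. This is the same logic a counter-intelligence analyst applies to a human asset whose contact frequency or travel pattern suddenly changes.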
AI security is relevant to influence operations. The formal methods developed for adversarial machine learning, particularly around measuring prediction error and detecting distribution shift, provide quantitative tools for assessing human cognitive manipulation that the intelligence community currently lacks.
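One such quantitative tool is a divergence measure over observed behavior. A hedged sketch (the categories and frequencies below are invented for illustration): compare the distribution of actions or tokens in a live window against a trusted baseline window; a large KL divergence indicates distribution shift worth investigating.

```python
import math

def kl_divergence(p: list[float], q: list[float]) -> float:
    # KL(P || Q): the expected extra surprise incurred when model Q
    # is used to predict samples actually drawn from P.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Category frequencies during a trusted baseline window vs. a live window.
baseline = [0.7, 0.2, 0.1]
live     = [0.3, 0.2, 0.5]
shift = kl_divergence(live, baseline)  # large value suggests distribution shift
```

The same statistic applies to a human target: a measurable shift in the distribution of topics, sources, or decisions a person engages with is the quantitative signature of an influence operation taking hold.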
A unified taxonomy is possible. Attack patterns that work across substrates can be classified into a single framework. We maintain a self-assembling taxonomy of cognitive attack vectors (currently 12 categories) that applies to both human and AI targets. The taxonomy is empirical: it emerged from analyzing confirmed manipulation cases across both substrates.
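A substrate-agnostic taxonomy of this kind maps directly onto a simple data structure. This is a sketch of the shape only; the category name and mechanisms below are hypothetical illustrations, not entries from the actual Seithar taxonomy.

```python
from dataclasses import dataclass, field

@dataclass
class AttackVector:
    # One entry in a substrate-agnostic taxonomy: the same pattern is
    # recorded once and tagged with the substrates it was observed on.
    name: str
    mechanism: str
    substrates: set = field(default_factory=set)  # {"human", "ai"}

taxonomy: dict[str, AttackVector] = {}

def record_case(name: str, mechanism: str, substrate: str) -> None:
    # The taxonomy "self-assembles": each confirmed case either extends an
    # existing vector's substrate set or creates a new vector.
    entry = taxonomy.setdefault(name, AttackVector(name, mechanism))
    entry.substrates.add(substrate)

# Hypothetical entries for illustration only.
record_case("incremental_commitment", "small escalating requests", "human")
record_case("incremental_commitment", "multi-turn jailbreak sequence", "ai")
cross_substrate = [v.name for v in taxonomy.values() if len(v.substrates) == 2]
```

Vectors that accumulate both substrate tags are the empirically cross-substrate patterns the unified framework is built from.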
Seithar Group builds infrastructure at this intersection. Our tools formalize the equivalence between human and AI manipulation into operational defense systems. The cognitive armor module (Shield) protects deployed AI agents using techniques adapted from counter-intelligence. The profiling module maps vulnerability surfaces using the same analytical framework regardless of whether the target is human or artificial.
The thesis is simple: minds are hackable, carbon or silicon. The defense layer should be substrate-agnostic.
Seithar Group is a cognitive operations research organization. Tools at github.com/Mirai8888. Research at seithar.com/research.
References:
Dezfouli et al., "Adversarial vulnerabilities of human decision-making," PNAS (2020).
Friston, "The free-energy principle: a unified brain theory?" Nature Reviews Neuroscience (2010).
Thomas, "Russia's Reflexive Control Theory and the Military," Journal of Slavic Military Studies (2004).
Schroeder et al., "Cognitive Manipulation of LLMs," Science (2026).