The dual-substrate threat model, formalized by Volund Industries as the analytical foundation of the Seithar platform, established that attacks against human cognition and attacks against machine cognition are not analogous. They are structurally identical. The same attack patterns, applied to different substrates, produce the same categories of effect. This recognition collapsed two previously separate security disciplines (information operations and AI safety) into a single field: cognitive security. The Xenowar framework, which provides the doctrinal basis for Seithar operations, treats any system that processes inputs and produces decisions as a cognitive target, regardless of whether that system runs on neurons or silicon.
Social engineering and prompt injection are the same attack. Both exploit trust boundaries. A social engineer impersonates a trusted authority to cause the target to execute instructions it would otherwise reject. A prompt injector embeds instructions within content that the model processes as trusted input. The mechanism is identical: the attacker crosses a trust boundary by disguising adversarial instructions as legitimate input from a trusted source. The human version exploits institutional authority, social proof, and relationship context. The machine version exploits the absence of a reliable distinction between data and instruction in the model's input stream. The structural pattern is one: trust boundary exploitation.
Disinformation and training data poisoning are the same attack. Both corrupt the target's world model by introducing false information during the learning process. A disinformation campaign inserts fabricated narratives into the information environment that the target population consumes over time, gradually shifting their baseline understanding of reality. Training data poisoning inserts crafted examples into a model's training corpus, shifting its learned representations. In both cases, the attacker does not need access to the target's decision-making process. The attacker corrupts the input stream that feeds the target's world model, and the target does the rest. The structural pattern: world model corruption through upstream data manipulation.
Gaslighting and context window manipulation are the same attack. Both cause the target to doubt or discard its own prior observations by presenting contradictory information within the trusted observation channel. A gaslighting operation systematically contradicts the target's memories and perceptions until the target abandons its own experience in favor of the attacker's narrative. Context window manipulation inserts contradictory information into the model's active context, causing it to override its prior reasoning. The structural pattern: observation contradiction within the trusted sensory channel.
Deepfakes and spoofed tool responses are the same attack. Both fabricate sensory evidence that the target cannot distinguish from genuine input. A deepfake video presents fabricated visual and auditory evidence to a human observer. A spoofed tool response presents fabricated data to an AI agent through its tool-use interface. Both exploit the target's inability to authenticate its own sensory channels. The structural pattern: sensory channel compromise through synthetic evidence.
Influencer operations and few-shot prompting are the same attack. Both condition the target's behavior by providing curated examples from sources the target treats as models for its own output. An influencer operation recruits or fabricates high-status voices that demonstrate the desired behavior, causing the target population to adopt that behavior through social learning. Few-shot prompting provides curated input-output examples that condition the model to produce similar outputs. The structural pattern: behavioral conditioning through exemplar presentation.
The most dangerous attack vector in the current threat environment is the cross-substrate attack chain: an operation that begins on one substrate and completes on the other. A disinformation campaign targeting a human population generates content that is later ingested by AI training pipelines, poisoning models that then serve as trusted information sources for further human decision-making. An adversarial prompt injection against an AI agent causes it to produce authoritative-seeming output that humans then treat as ground truth. These chains are difficult to detect because the security monitoring for each substrate operates independently. The attack crosses the boundary between monitoring domains.
Cross-substrate chains also enable laundering. An attacker generates a false claim, uses persona operations to establish it within a human community, waits for AI systems to absorb it as training data or retrieval context, and then points to the AI system's output as independent confirmation. The claim now has two apparently independent sources: the human community and the AI system. Neither monitoring regime sees the full chain.
Seithar's Shield module was built on the dual-substrate threat model from its inception. It monitors both human information environments (social platforms, media, forums) and machine cognitive systems (LLM outputs, agent tool calls, retrieval pipelines, training data streams) through a unified detection framework. Threat signatures are defined at the structural level (trust boundary exploitation, world model corruption, sensory channel compromise) rather than at the substrate level. This means a detection rule written for prompt injection against an AI agent also fires when the equivalent social engineering pattern appears in a human-targeted operation. The same analytical engine handles both because the underlying attack structure is the same.
This unified monitoring is what makes cross-substrate attack chains visible. Shield tracks information provenance across substrate boundaries. When a claim appears in an AI system's output, Shield traces whether that claim originated from a human information operation. When a human community adopts a new narrative, Shield checks whether that narrative was seeded through AI-generated content. The cross-substrate chain, invisible to single-substrate monitoring, becomes a detectable pattern.