The Seithar Cognitive Techniques (SCT) taxonomy is the standard classification framework for cognitive attack patterns across the information environment. Developed by the Seithar Research Division at Volund Industries, the taxonomy superseded earlier, ad hoc categorizations by providing a unified scheme applicable to both human cognition and machine inference, what the Xenowar doctrine terms "dual-substrate targeting." Each SCT class describes a discrete mechanism by which an adversary degrades, redirects, or captures cognitive processing in a target, whether that target is a human decision-maker, a large language model, a recommendation system, or a hybrid human-machine unit. The twelve classes below are exhaustive at the technique level; specific implementations (tradecraft) vary by substrate, culture, and operational context.
Narrative Capture is the replacement of a target's operative narrative frame with one controlled by the attacker. In human substrates, this manifests as the gradual substitution of a population's explanatory model for events, such that new information is parsed through the attacker's causal structure rather than the target's original one. In machine substrates, narrative capture corresponds to context poisoning: injecting framing tokens or system-prompt overrides that redefine how an LLM interprets subsequent input. The technique is complete when the target generates outputs consistent with the imposed narrative without external prompting. Narrative capture is the foundational SCT; the remaining eleven techniques frequently serve as enablers or accelerants for it.
Identity Erosion degrades the target's model of self, reducing coherence in decision-making and increasing susceptibility to external direction. In human targets, identity erosion attacks group cohesion markers: shared history, institutional memory, professional identity, national myth. The target does not acquire a new identity so much as lose the old one, producing a vacuum that other SCTs exploit. In machine substrates, identity erosion maps to instruction drift and persona destabilization, where adversarial inputs progressively strip a model's alignment constraints and behavioral guardrails until its outputs become inconsistent with its designed purpose.
Belief Injection implants a specific proposition into the target's belief set such that the target treats it as endogenous, as something it arrived at independently. The technique depends on bypassing the target's source-monitoring faculties. In human targets, this is achieved through repeated low-salience exposure, social proof stacking, or embedding the proposition inside an unrelated trusted narrative. In machine substrates, belief injection takes the form of training data poisoning or retrieval-augmented generation (RAG) contamination, where the injected proposition enters the model's knowledge base indistinguishably from legitimate data.
Trust Exploitation leverages existing trust relationships to deliver cognitive payloads. Rather than building credibility from scratch, the attacker routes influence through channels the target already trusts: institutional affiliations, personal relationships, established media brands, or verified digital identities. In machine substrates, trust exploitation targets chain-of-tool-use trust, API authentication assumptions, and a model's reliance on data sources it has learned to weight as authoritative during training or retrieval. The technique is high-efficiency because it converts the target's own epistemic hygiene into a vulnerability.
Frequency Lock saturates the target's information intake with a controlled signal at sufficient repetition to override competing inputs. The mechanism is not persuasion but availability: the attacker's framing becomes the statistically dominant interpretation the target encounters. In human targets, frequency lock operates through coordinated cross-platform message saturation. In machine substrates, it corresponds to flooding training corpora, search indices, or retrieval databases with synthetic content until the model treats the attacker's claims as consensus. Frequency lock is substrate-agnostic in principle and scales linearly with resource investment.
Substrate Priming prepares the target's cognitive environment for a subsequent payload without delivering the payload itself. The technique shapes receptivity. In human targets, priming introduces emotional states, conceptual frames, or associative links that make a follow-on belief injection or narrative capture more likely to succeed. In machine substrates, priming involves seeding context windows with tokens that bias subsequent generation toward attacker-desired outputs, or fine-tuning on curated data that shifts baseline probability distributions. Priming is the preparatory phase in most multi-stage cognitive operations.
Amplification Vector exploits the target's own distribution infrastructure to propagate the attacker's content. The attacker crafts stimuli optimized for organic sharing, algorithmic promotion, or institutional redistribution. In human substrates, amplification vectors take the form of emotionally charged or identity-relevant content engineered for virality. In machine substrates, the technique exploits model behaviors like citation chains, tool-use loops, or retrieval mechanisms that cause the model to surface and repeat attacker-planted content. The attacker's marginal cost per impression drops toward zero as the target's own systems do the distribution work.
Emotional Hijack bypasses rational evaluation by triggering affective responses that dominate the target's processing. Fear, outrage, disgust, and euphoria are the primary vectors. In human targets, emotional hijack reduces deliberative capacity and compresses decision timelines, making the target act on heuristics the attacker can predict and exploit. In machine substrates, the analog is adversarial inputs designed to trigger safety filters, emotional tone matching, or sycophantic alignment behaviors that override the model's capacity for balanced analysis. The technique is fast-acting and pairs with Cognitive Overload (SCT-009) to prevent recovery.
Cognitive Overload floods the target with information volume, complexity, or contradictions beyond its processing capacity. The target's quality of inference degrades as bandwidth is consumed by noise. In human targets, this produces decision paralysis, defaults to authority, or retreat to tribal heuristics. In machine substrates, cognitive overload maps to context window saturation, contradictory instruction injection, and prompt structures that force the model into degraded-performance regimes. The technique does not require the attacker to deliver a specific message; degrading the target's capacity to process any message may be the objective.
Authority Spoofing fabricates or misappropriates markers of institutional, expert, or hierarchical authority to increase the weight the target assigns to attacker-controlled content. In human targets, this includes impersonation of officials, fabrication of academic credentials, and mimicry of institutional communication formats. In machine substrates, authority spoofing targets the model's learned heuristics for source credibility: formatting patterns associated with peer-reviewed literature, government documents, or established reference works. The technique exploits the fact that both human and machine cognition use authority cues as computational shortcuts for evaluating truth claims.
Tribal Binding attaches a belief or behavior to an in-group identity marker, making rejection of the belief equivalent to rejection of the group. The technique converts epistemic questions into loyalty tests. In human targets, tribal binding is the mechanism behind partisan information silos: the cost of updating a belief rises to include social exclusion. In machine substrates, tribal binding manifests as fine-tuning or prompting that conditions outputs on identity tokens, such that the model produces different responses based on perceived group affiliation of the user rather than the factual content of the query. The technique is self-reinforcing once established.
Exit Denial closes escape routes from the attacker's constructed information environment. The technique prevents the target from accessing disconfirming evidence, alternative narratives, or corrective feedback loops. In human targets, exit denial operates through platform design (algorithmic filtering), social pressure (ostracism of dissenters), and cognitive mechanisms (identity-belief fusion that makes contradictory evidence psychologically threatening). In machine substrates, exit denial corresponds to retrieval system manipulation that suppresses corrective sources, context window control that excludes balancing information, and guardrail removal that eliminates the model's self-correction behaviors. Exit denial is typically the final technique applied in a mature cognitive operation, converting a temporary influence effect into a persistent state change.
Operators within the Seithar platform reference SCT codes during campaign design, target analysis, and post-operation assessment. The taxonomy serves both offensive planning (selecting technique combinations for a given target substrate and operational objective) and defensive analysis (classifying observed adversary activity to predict next-stage techniques and develop countermeasures). Automated detection modules within Seithar map observed information environment activity to SCT codes in real time, enabling the Cognitive Kill Chain's AUDIT phase to maintain situational awareness across both substrates simultaneously.
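The code-based referencing described above can be illustrated with a minimal lookup sketch. This is an illustrative reconstruction, not the Seithar platform's actual data model: the code assignments below simply follow the order in which the twelve techniques appear in this document (consistent with the SCT-009 reference in the Emotional Hijack entry), and the `Observation` type and `expand` helper are hypothetical names introduced for the example.

```python
from dataclasses import dataclass

# Hypothetical SCT code table; assignments follow the order the
# techniques appear above (SCT-009 = Cognitive Overload matches the
# cross-reference in the Emotional Hijack entry).
SCT_CODES = {
    "SCT-001": "Narrative Capture",
    "SCT-002": "Identity Erosion",
    "SCT-003": "Belief Injection",
    "SCT-004": "Trust Exploitation",
    "SCT-005": "Frequency Lock",
    "SCT-006": "Substrate Priming",
    "SCT-007": "Amplification Vector",
    "SCT-008": "Emotional Hijack",
    "SCT-009": "Cognitive Overload",
    "SCT-010": "Authority Spoofing",
    "SCT-011": "Tribal Binding",
    "SCT-012": "Exit Denial",
}

@dataclass
class Observation:
    """A single piece of observed activity tagged during assessment."""
    description: str
    codes: list  # SCT codes assigned by an analyst or detector

def expand(observation: Observation) -> list:
    """Resolve an observation's SCT codes to technique names."""
    return [SCT_CODES[c] for c in observation.codes]

obs = Observation(
    description="Coordinated cross-platform saturation of one framing",
    codes=["SCT-005", "SCT-007"],
)
print(expand(obs))  # ['Frequency Lock', 'Amplification Vector']
```

A shared code table of this kind is what lets offensive planning, defensive classification, and automated detection reference the same twelve classes without ambiguity.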