Cyber threat intelligence teams in the private sector are, in many cases, as capable as their government counterparts. They employ talented analysts and produce rigorously researched reports. Yet the output is often retrospective. It describes which actors were active, what techniques they used and which campaigns have been attributed, all in the past tense. Forward-looking analysis is usually an extrapolation of past events. The reason is not a shortage of skill. It is that when the intelligence cycle crossed from government into the private sector, its most important phase — direction — arrived incomplete, or not at all.

The cycle as it travelled

The intelligence cycle is the foundational process model for intelligence work. In its British and NATO formulation, it has four phases: direction, collection, processing and dissemination. The American version separates processing and analysis into distinct steps, making five, but the logic is the same. Information is gathered, made sense of and delivered to someone who needs it to make a decision.

The private sector adopted four of these phases with genuine sophistication. Collection draws on open sources, dark-web monitoring, commercial feeds, internal telemetry and information-sharing communities. Processing enriches indicators, correlates events and maps activity to frameworks such as MITRE ATT&CK. Analysis produces reports on threat actors, campaigns and vulnerabilities. Dissemination delivers results to security operations centres, incident-response teams and, increasingly, the board.

But the fifth phase — direction, the one that is supposed to govern everything else — has not been adopted with the same rigour.

What direction actually does

In government intelligence, direction is not a vague notion of setting priorities. It is the formal, structured determination of intelligence requirements: what the intelligence function must answer, for whom and why. UK joint doctrine defines it as the determination of intelligence requirements, the planning of collection effort, the issuance of orders and requests to collection agencies, and the continuous monitoring of those agencies’ productivity. The definition is precise because the function is precise.

Direction produces intelligence requirements — specific, bounded, prioritised questions that the rest of the cycle exists to answer. These are not static. They are reviewed continuously, amended as circumstances change and retired when no longer relevant.

Priority intelligence requirements sit at the top. They are formulated by intelligence staff in close consultation with senior decision-makers. They are limited in number. They reference strategic intent and focus on gaps in understanding. They must be specific enough to be answerable, measurable enough to confirm when they have been satisfied, realistic enough to be achievable with available collection, and timely enough to inform the decisions they support.
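Those four tests can be pictured as a small data structure. The sketch below is purely illustrative: the field names and the well-formedness check are assumptions for this article, not an established schema or standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PriorityIntelligenceRequirement:
    """Illustrative sketch of a PIR record; all field names are assumptions."""
    question: str                     # specific: a bounded, answerable question
    decision_supported: str           # for whom: the decision the answer informs
    sufficiency_criteria: list[str]   # measurable: what counts as "answered"
    available_sources: list[str]      # realistic: collection that could answer it
    review_by: date                   # timely: when it must inform the decision,
                                      # or be re-examined and possibly retired

    def is_well_formed(self) -> bool:
        # A requirement with no sufficiency criteria cannot be measured,
        # and one with no named decision is just a topic heading.
        return bool(self.question.strip()
                    and self.decision_supported.strip()
                    and self.sufficiency_criteria
                    and self.available_sources)
```

Under this sketch, “nation-state threats” fails the check immediately: there is no decision named, no threshold for sufficiency and no bounded question to answer.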

This may sound like bureaucracy. It is not. It is what prevents the intelligence function from becoming an expensive exercise in data accumulation. Collection without focus produces noise. Analysis without a question to answer drifts towards whatever seems interesting. Dissemination without relevance to a decision is just publishing.

Direction is the compass. Everything else is navigation.

What happened to it

The private sector built its threat intelligence functions under pressure — from regulators, boards and the sheer pace of the threat environment. The immediate need was operational: detect intrusions, characterise threats, share indicators, respond faster. Collection, processing, analysis and dissemination answered that need directly. Direction — the slow, structured work of defining what the intelligence function should be trying to answer — did not feel urgent. The priorities already seemed self-evident.

The result, in most organisations, falls into one of three patterns.

The first is that formal intelligence requirements simply do not exist. The team monitors the threat landscape broadly, tracks the actor groups it has historically followed and produces reports on whatever appears significant. The analysts decide what matters, with limited structured input from the business. This produces useful current awareness, but it is not directed intelligence.

The second is performative direction. Requirements exist on paper, perhaps created when the team was established or as part of a compliance exercise. They tend to be broad enough to be unanswerable — “monitor cybercrime groups targeting financial services”, or simply “nation-state threats”. These read more like topic headings than requirements. They cannot be measured, because no threshold for sufficiency has been defined. They probably sit on a Confluence page that nobody visits.

The third is reactive direction. Requirements are generated in response to incidents or external events — a breach in the sector, a geopolitical development, a board member who reads an article and asks the CISO what it means. Each produces genuine analytical work. None of it is proactive. The intelligence function responds to the threat environment rather than anticipating it.

Government intelligence services, over decades and through painful experience, built the institutional machinery to correct these patterns. The private sector has not yet had that reckoning.

What happens downstream

When direction is absent, the rest of the cycle does not stop. It continues, often to a high standard, but without a governing logic. And without that logic, the gravitational pull is always towards the reactive. The downstream effects are remarkably consistent.

Collection becomes consumption. Without requirements to focus it, the team subscribes to every available feed, ingests every source and accumulates data at scale. There is always more to collect — another feed, another community — and without direction it all seems potentially relevant. The implicit assumption is that more data means better intelligence. It does not. More data means more data. The value of collection lies in its relevance to a specific question, and without a question, relevance cannot be assessed.

Processing becomes an end in itself. Indicators of compromise are enriched, correlated and deduplicated with impressive technical rigour. But the question “enriched for what purpose?” goes unanswered, because nobody defined what the processing was supposed to produce.

Analysis drifts towards the describable. Without directed requirements, good analysts gravitate towards what they can observe: indicators, campaigns, malware families, infrastructure. They describe what threat actors have done and produce reports that are, by construction, retrospective. The analytical question defaults to “what happened?” rather than “what is likely to happen to this organisation, and what should we do about it?” You cannot answer a question that was never asked.

Dissemination becomes broadcast. Reports are distributed to mailing lists, posted to portals, briefed at meetings. They are often well written and thoroughly researched. But because they were not produced to answer a decision-maker’s specific question, they arrive as interesting information rather than actionable intelligence tied to a business decision.

Each phase functions competently in isolation. The problem is architectural — the same one that government intelligence services spent decades learning to solve.

The limits of a bottom-up approach

The dominant paradigm in commercial threat intelligence works from the bottom up. It begins with observable data — indicators of compromise, malware samples, network signatures — and works towards characterisation of threat actors and, in theory, anticipation of future activity. For a large class of threats, this works well. Commodity threat actors follow established patterns. Their targeting is largely opportunistic, their methods designed for scale, their behaviour to some extent predictable from historical observation. Study enough ransomware campaigns and you can make reasonable inferences about the next one.

For state-backed espionage, however, the logic falters. A hostile intelligence service does not target organisations because they are vulnerable. It targets them because they hold something specific that serves a strategic collection requirement — one set at a level of government far removed from the operatives who execute the intrusion. The targeting logic is top-down: a national priority generates a collection requirement, the requirement identifies categories of target, the categories identify specific organisations, and each organisation’s specific holdings determine the operational approach.
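The shape of that chain can be made concrete with a short sketch. Every name, priority and holding below is an invented example: the point is the direction of inference, not the content.

```python
# Illustrative only: the top-down chain from strategic priority to target.
# All values are invented; this models the chain's shape, not real tradecraft.

strategic_priority = "advance domestic semiconductor capability"
collection_requirement = "EUV lithography process documentation"
target_categories = {"chip fabricator", "lithography supplier", "research institute"}

organisations = [
    {"name": "FabCo",    "category": "chip fabricator",
     "holdings": {"EUV lithography process documentation"}},
    {"name": "RetailCo", "category": "retailer",
     "holdings": {"customer payment data"}},
]

# Intent selects the target: the category narrows the field,
# and the organisation's specific holdings decide.
targets = [
    org["name"] for org in organisations
    if org["category"] in target_categories
    and collection_requirement in org["holdings"]
]
```

Note that nothing in the resulting intrusion’s observables encodes this chain, which is why it cannot be reconstructed bottom-up from indicators.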

You cannot work backwards from an indicator of compromise to the strategic priority that initiated the operation, because the indicator contains no information about intent. You cannot infer targeting logic from observed techniques, because the same techniques serve different objectives in different operations. And you cannot anticipate the next target by studying the previous ones, because the targeting decisions are driven by the adversary’s own intelligence requirements — requirements that are, by definition, invisible to the defender.

The bottom-up paradigm was not designed to solve this problem. It was built to support detection and response: identify what has happened, characterise how it happened, improve defences against it happening again. It does this well. But detection and response are not anticipation. Knowing what an adversary did yesterday does not tell you what it will do tomorrow, because the “what” is a downstream consequence of a “why” that the paradigm does not capture.

The question that needs asking

This is a solvable problem. The solution does not require starting from scratch. It requires adding a foundation beneath what has already been built.

That means a formal, structured process for generating intelligence requirements oriented towards anticipatory threat assessment, not just retrospective characterisation. Requirements derived from the organisation’s business objectives and tied to the decisions that leadership needs intelligence to support. Requirements specific enough to be answerable, bounded enough to be achievable and reviewed as the strategic context changes.
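One way to picture that derivation, again as a purely illustrative sketch with invented objectives and assets:

```python
# Illustrative only: deriving intelligence requirements from business strategy.
# The objectives, assets and question template below are invented examples.

business_objectives = {
    "close the Acme acquisition in Q3": ["deal room contents", "negotiation strategy"],
    "launch the next product line":     ["design documents", "supplier contracts"],
}

def derive_requirements(objectives: dict[str, list[str]]) -> list[str]:
    """Turn each objective's critical assets into forward-looking questions."""
    return [
        f"Which adversaries have the intent and capability to target our {asset}, "
        f"and how would they approach it?"
        for assets in objectives.values()
        for asset in assets
    ]
```

The mechanics are trivial; the discipline is not. What matters is that each question traces back to something the business is trying to achieve, so that the answer is tied to a decision rather than to a topic.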

In military doctrine, this is the role of the commander’s intent. In the private sector, the equivalent is business strategy — what the organisation is trying to achieve, what it needs to protect to achieve it, and what the intelligence function therefore needs to find out.

Intelligence services learned to do this over decades, and not always gracefully. The direction phase exists not because someone thought it would be tidy, but because hard experience — and some costly failures — proved that intelligence without direction often misses the mark. Without direction, even excellent work defaults to describing what has already happened — which is precisely what the threat environment no longer rewards.

The private sector now faces the same fundamental problem — defending against adversaries who target organisations for strategic reasons — and has built an impressive intelligence apparatus to address it. What that apparatus still needs is the mechanism that gives it coherence: the structured direction phase that tells it what questions to answer, for whom and why. Direction is what turns interesting information into proactive intelligence that enables decisions. The four phases that made the crossing already work well. With the fifth in place, they can do what the threat environment now demands.


Etymoptic is a counter-cyber-espionage intelligence platform. It enables your analysts to systematically model who and what a determined adversary would target in your organisation and why. Tradecraft, encoded.