securitylinkindia

How is AI Transforming Threat Intelligence?

Meghna Aggarwal
Sciences Po Paris | International Security, Political Risk, Trade & Investments

Everyone has a plan until they get punched in the face. But what if you could know exactly when the punch is coming, at what intensity, and where to move to avoid the blow? That’s what threat intelligence does for organisations.

Threat intelligence refers to any information organisations use to better understand their adversaries. It provides context for companies to make proactive decisions about their physical security. And as a result, teams can better forecast and respond to incidents that have the potential to disrupt operations.

The pace of business is a cliché for good reason – speed matters. The ability to act quickly and decisively is now a true competitive advantage. It is no longer sufficient for security teams to play the ‘guard at the gate,’ simply waiting to stop bad actors. Instead, they must predict, prevent, and prepare for attacks before they even occur. Unlike traditional security measures that depend on static defences such as cameras and guards, threat intelligence draws on real-time information from surveillance systems, access logs, open-source intelligence (OSINT), social media, and incident reports. As a result, threat intelligence, though traditionally viewed as a security function, has become a powerful way for security to position itself as a key business differentiator within the organisation.

Nowadays, physical security teams work to defend a range of assets, from worksites and employees to infrastructure and intellectual property. Moreover, as companies expand internationally, they also must stay up to date on events around the world. Yet, with a shortage of qualified analysts, few have the time to comb through vast stacks of data.

Compounding this challenge is the sheer noise of social media, which can easily drown out critical signals and make it difficult to verify what is authentic. Genuine threats are often missed or deprioritised. During the flooding in Houston after Hurricane Harvey in 2017, for example, an image of a shark supposedly swimming down a submerged highway went viral, distracting the public from reliable updates about flood zones and emergency assistance.

As a result, by the time an incident occurs, it is often too late to react – and that can come with a big price tag in fines, damages, and business disruption. This is where AI makes a decisive difference.

By enhancing nearly every stage of the threat intelligence lifecycle, AI is fundamentally reshaping how corporate security teams operate. The combination of AI and human expertise enables organisations to build effective human–machine partnerships that augment threat detection and sharpen decision-making. Here’s how:

Figure 1: Traditional Threat Intelligence Life Cycle

Traditionally, analysts were required to manually select and monitor a diverse range of news outlets and open sources, a labour-intensive, time-consuming effort just to filter out the noise and identify relevant incidents. Today, AI can continuously scan trusted, pre-verified sources at scale, automatically highlighting the events that matter most.

Through tools such as Application Programming Interfaces (APIs), raw data can be efficiently transformed to meet specific client requirements. For instance, data collected via Twitter’s APIs can be used to monitor key influencers and detect malicious activity by terror actors, ensuring rapid, real-time escalation of potential risks.
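The escalation step this describes can be sketched in a few lines. In this minimal illustration, the posts are assumed to have already been pulled from a platform API into plain dictionaries; the watchlist terms and the feed itself are invented for the example:

```python
# Minimal sketch: filtering a raw social-media feed against a keyword
# watchlist for real-time escalation. The feed and WATCHLIST are
# illustrative stand-ins for data pulled via a platform API.

WATCHLIST = {"explosion", "roadblock", "evacuation"}

def escalate(posts):
    """Return posts mentioning any watchlist keyword, newest first."""
    hits = [p for p in posts if WATCHLIST & set(p["text"].lower().split())]
    return sorted(hits, key=lambda p: p["ts"], reverse=True)

feed = [
    {"ts": 1, "text": "Traffic moving normally downtown"},
    {"ts": 2, "text": "Reports of an explosion near the depot"},
    {"ts": 3, "text": "Police announce a roadblock on Route 9"},
]

for post in escalate(feed):
    print(post["text"])
```

In practice the matching would be done by a trained model rather than exact keywords, but the pipeline shape – ingest, filter, rank, escalate – is the same.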

Thus, AI enables live intelligence updates while fundamentally reshaping the analyst’s role from painstaking data collection to the far more valuable task of validating and interpreting meaningful signals.

In this stage, analysts would typically trawl through multiple sources, interpret event details, place them in context, and then manually craft an operational alert or risk report. Now, AI can ingest large volumes of data and generate a preliminary alert or assessment within seconds.

Here, AI also plays a critical role in countering disinformation and misinformation. Advanced, AI-driven systems can analyse patterns, language, and contextual cues to support content moderation, fact-checking, and the identification of false narratives, achieving accuracy rates of up to 97% when classifying news articles as genuine or misleading.
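As a toy illustration of the pattern-and-language cues such systems weigh, a scorer might count sensational wording. The cue list and headlines below are invented, and real classifiers are trained models, not keyword counts:

```python
# Toy illustration only: scoring text for sensational-language cues that
# disinformation classifiers often weight. The CUES set is invented for
# this example; production systems learn such features from data.

CUES = {"shocking", "share", "viral", "exposed", "hoax", "urgent"}

def sensationalism_score(text):
    """Crude score: cue-word hits plus exclamation marks. Higher = more suspect."""
    words = text.lower().replace("!", " ").split()
    hits = sum(w in CUES for w in words)
    return hits + text.count("!")

headline_a = "City issues updated flood-zone map for east districts"
headline_b = "SHOCKING! Viral shark photo EXPOSED - share urgent!"

print(sensationalism_score(headline_a))  # low score
print(sensationalism_score(headline_b))  # high score
```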

Once again, the analyst’s role shifts away from heavy processing towards focused verification, ensuring outputs are free from hallucinations, biases, or gaps in relevance. Crucially, AI systems also learn continuously from this human feedback, improving their precision over time and further reducing turnaround times.

This marks a shift from manual security work to intelligence-augmented decision-making, a new standard for speed, scale, and sophistication.

Traditionally, customising deliverables was both costly and time-consuming. Designing them required significant manual effort and dedicated resources, and even then, distribution was often limited due to fixed algorithms. With AI, however, the entire process is redefined.

Deliverables can now be tailored effortlessly using client-specific data, such as their role, industry, or company, while design becomes quick, scalable, and low-effort. Intelligent, AI-driven distribution can also adapt to individual client preferences, ensuring the right content reaches the right people at the right time. Additionally, AI can support multilingual threat reporting, translating intelligence into multiple languages for a truly global audience.

Perhaps most importantly, AI shifts the threat intelligence cycle from simply evaluating what’s happening now to actually predicting what might happen next. Rather than exclusively alerting an organisation to a current threat, AI anticipates how that threat may evolve, giving teams far greater foresight.

In the past, meaningful threat prediction was limited and highly selective. It required expensive models and vast computing resources, something only wealthy nations or military organisations could realistically afford. However, GenAI changes that equation. By working across massive datasets at scale and leveraging trend analysis, it makes advanced prediction widely accessible, delivering outputs that are specific, detailed, and focused.

Wondering whether the protests in Nepal might turn violent? By analysing historical data and patterns, AI can estimate the likelihood of escalation. Need to understand the knock-on effects of heavy rainfall coinciding with a labour union protest near your facilities? AI can model the potential impact on your specific assets.
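At its simplest, this kind of likelihood estimate is a conditional base rate computed over comparable past events. A hedged sketch, with fabricated event records standing in for real historical data:

```python
# Sketch: estimating escalation likelihood from historical base rates,
# the simplest form of the trend analysis described above.
# All event records here are fabricated for illustration.

def escalation_rate(history, condition):
    """Share of past events matching `condition` that turned violent."""
    matched = [e for e in history if condition(e)]
    if not matched:
        return None  # no comparable precedent - flag for analyst review
    return sum(e["violent"] for e in matched) / len(matched)

history = [
    {"type": "protest", "size": 5000, "violent": True},
    {"type": "protest", "size": 300,  "violent": False},
    {"type": "protest", "size": 8000, "violent": True},
    {"type": "strike",  "size": 1200, "violent": False},
]

p = escalation_rate(history, lambda e: e["type"] == "protest" and e["size"] > 1000)
print(f"Estimated escalation likelihood for large protests: {p:.0%}")
```

Real predictive systems layer far richer features on top of this (actor intent, location, timing, sentiment), but the core idea – conditioning on comparable precedents – is the same.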

This kind of predictive intelligence allows organisations to identify, prioritise, plan, and respond to emerging threats far more effectively, even before they become tomorrow’s crises.

Figure 2: AI Prediction for Nepal Protests (September 2025)

Figure 3: AI Prediction for Nepal Protests (September 2025)

Consequently, close AI-human teaming occurs at every stage of the intelligence cycle. AI handles the aggregation of vast volumes of raw data at scale, while analysts progressively curate and verify this information, refining it at each stage of the funnel. The result is faster, more precise, and more customisable intelligence, enabling the dissemination of a far greater volume of high-quality alerts than would previously have been possible.

Thus, the future of security is not about obsolescence or domination; it’s about collaboration. The best analysts will be those who can combine tactical experience and situational judgement with the power of AI-driven analytics, automation, and foresight.

Figure 4: Human-Intelligence Teaming for Augmented Threat Intelligence

As someone preparing to enter the corporate security field, I often catch myself wondering: what does the rise of AI mean for young graduates? Are we heading toward a future where the jobs we’re aiming for disappear?

I don’t think so. Yes, AI brings incredible speed and scale, but there are still critical gaps that only human beings can fill.

AI’s inability to think contextually makes human oversight essential. In threat intelligence, this means that humans must step in to interpret what an alert really means. Take a badge access anomaly, for example: is it a genuine threat or a contractor error? Here, an analyst must assess the business impact and make a practical decision (evacuate, lock down, monitor) rather than rely on raw algorithmic output.

Moreover, while AI excels at detecting known patterns, it often struggles when the threat is novel or ambiguous. In such cases, human intuition becomes particularly crucial in spotting an unusual signal that may escalate into a serious operational risk. The 2022 Viasat cyberattack, which targeted not just military communications but also tens of thousands of civilian modems during Russia’s invasion of Ukraine, highlights how quickly new risks emerge. In an era of hybrid threats, where the boundaries between state conflict and commercial infrastructure are increasingly blurred, this form of human judgement is more important than ever for corporates.

Consequently, for graduates entering the security risk intelligence field, AI doesn’t spell obsolescence, but it does mean that certain human skills will become far more valuable than others. And those who cultivate them early will get ahead of the game.

Entry-level analysts will increasingly be required to possess a strong foundation in regional, security, and geopolitical dynamics, capabilities traditionally associated with more senior roles. This shift reflects a fundamental reality: the human analyst is the integrator, capable of synthesising context, motive, emotion, and consequence in ways even advanced AI cannot replicate. As security environments become more information-rich and AI-enabled, analysts will spend less time ‘doing the math’ and more time interpreting what the outputs mean in relation to mission objectives, ethical constraints, and social implications.

Thus, when a potential threat surfaces, an analyst shouldn’t rely solely on technical indicators. They must be able to factor in the organisation’s operations, market conditions, ongoing business initiatives, local developments, and broader geopolitical tensions that may clarify the true nature and significance of the threat. This multi-dimensional analysis enables nuanced decision-making that balances security requirements with business needs – judgement that remains uniquely human.

Furthermore, while generative AI systems promise unprecedented speed, scale, and access to information, they also introduce a critical risk – the illusion of objectivity. Their ability to generate convincing text risks obscuring the fact that they do not understand what they say. They simulate authority, but cannot assure accuracy, especially in politically sensitive contexts. A recent study by the Carnegie Endowment illustrates this risk of bias. Researchers tested five major large language models developed in the US, Europe, and China across ten contentious international issues, including Hamas, NATO, Taiwan, and global trade. Rather than producing neutral assessments, the models reflected the ideological and national contexts in which they were trained – for example, framing protest movements as legitimate political expression in Western models, but as destabilising threats in Chinese ones.

In such conditions, maintaining a human in the loop is essential. Only an analyst can audit for hallucinations, cross-reference machine-generated intelligence with source material, interpret political nuance, and know when not to trust what looks authoritative.

The security professional of tomorrow must be fluent not only in the language of threat and risk, but also in the language of data and algorithms. As AI becomes embedded across security functions, analytical advantage will depend on how effectively professionals can work with these systems, not simply whether they use them.

In this new paradigm, security professionals will be expected to:

  • Prompt AI systems effectively: Using generative models and analytical tools requires precision in language, goal-framing, and feedback, much like instructing a junior analyst, but with machine-specific syntax.
  • Understand data flows: Analysts must know how data is collected, filtered, and analysed. Understanding where intelligence originates, and what it excludes, is critical to ethical and effective decision-making.
  • Vet and verify outputs: AI-generated insights must be audited through human judgement. High-performing analysts will not be defined by blind trust in automation, but by their ability to challenge outputs, synthesise across systems, and recognise when ‘good enough’ is insufficient. In hybrid security roles, judgement becomes the defining skill.
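The ‘vet and verify’ step above can be caricatured in code. This sketch flags summary claims that share too few words with any source snippet; the overlap threshold and all the text are invented, and real verification is analyst judgement, not keyword overlap:

```python
# Illustrative sketch of grounding checks: does each claim in a
# machine-generated summary overlap with some source snippet?
# Naive word overlap only; real verification needs an analyst
# actually reading the sources.

def grounded(claim, sources, min_overlap=2):
    """True if the claim shares at least min_overlap words with a source."""
    claim_words = set(claim.lower().split())
    return any(len(claim_words & set(s.lower().split())) >= min_overlap
               for s in sources)

sources = [
    "union leaders announced a 48-hour strike at the port of karachi",
    "heavy rainfall expected across sindh province this weekend",
]

summary = [
    "A 48-hour strike was announced at the port of Karachi",
    "Port operations will be suspended for two weeks",
]

for claim in summary:
    status = "OK" if grounded(claim, sources) else "FLAG for analyst review"
    print(f"{status}: {claim}")
```

The second claim is plausible but unsupported by the sources – exactly the kind of hallucination the human in the loop exists to catch.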

As the Carnegie study demonstrates, AI systems encode political ideologies, shift narratives based on language inputs, and reproduce worldviews shaped by their national and institutional origins. These distortions carry real operational risks, from flawed briefings to misjudged threats and misinterpreted adversaries. To confront this, security professionals must learn not just how to use AI, but how to understand its limits, interrogate its logic, know how its training data works, and recognise the assumptions embedded in it.

Ultimately, AI should not be feared as a job killer but embraced as a core professional competency. Those who fail to adapt are unlikely to be replaced by machines themselves, but by professionals who know how to work with them effectively.

Finally, and perhaps most importantly, I find that the value of the ‘human touch’ is not lost on people. Analysts who can communicate clearly and confidently with C-suite executives about emerging threats, while accounting for their risk appetite, cultural sensitivities, and emotional context, are bound to stand out. In an extremely dynamic world, the ability to understand, evaluate and communicate the effects of complex risks in a clear, business-friendly manner is a truly lucrative skill.

So, ultimately, how is AI transforming threat intelligence?

Figure 5: The Rise of AI Agents

Put simply: AI systems are turning us from the poor soul who is often too slow to see the sucker punch coming into the one who politely sidesteps it and says, ‘Nice try.’ And for aspiring analysts, this new landscape isn’t a knockout blow. If anything, it just means, with AI, we get to spend less time counting punches and more time planning how to win the fight.

