ChannelLife Canada - Industry insider news for technology resellers

AI in cyber security: A double-edged sword

Fri, 3rd Oct 2025

Artificial intelligence is reshaping industries, and cyber security is no exception. But how exactly is it being used, and what risks does it introduce? As we approach Cyber Security Awareness Month, it's the perfect time to assess the current state and future of AI within this critical field. While AI offers powerful tools to enhance defence mechanisms, it also introduces sophisticated new threats, and for Canadian organizations and individuals, this duality demands increased vigilance and a deeper understanding of the evolving digital landscape.

The rising tide of AI-driven threats

According to Canada's National Cyber Threat Assessment, the number one trend impacting Canada's cyber threat landscape is the way AI technologies are amplifying cyber space threats. Since the GenAI boom in 2021, publicly reported worldwide generative AI incidents have skyrocketed and are projected to continue to rise.

One of the most significant impacts of AI has been on the erosion of digital identity. Social engineering, a tactic where threat actors exploit social connections and manipulate or deceive users to act against their organization's best interests, has long been a powerful method for breaching security defences. With the rise of AI, this approach has become even more sophisticated and effective. Large language models (LLMs) and generative AI now enable unscripted, real-time interactions through text and audio, with synthetic video capabilities advancing quickly. These technologies dramatically increase the scalability and believability of impersonation attacks.

Deepfake audio and video, once requiring significant resources, are becoming commoditized. This lowers the barrier to entry for malicious actors, allowing for more personalized and convincing campaigns. As the quality of generative media improves, traditional identity verification methods like voice recognition and facial analysis are becoming less reliable. Digital trust frameworks must be fortified to withstand this new era of AI-enabled deception.

Cyber criminals are also using AI to streamline the creation of malicious software. AI tools can be used to generate ransomware scripts, phishing kits, and information-stealing malware, enabling even low-skilled actors to launch sophisticated attacks. While fully AI-generated malware is still maturing, we already see AI being used to refine malicious code and mine stolen data more efficiently, organizing large credential datasets for resale on the dark web.

AI integration and enterprise risk

With the growing integration of AI technologies in corporate settings, organizations face increased risks. Many daily interactions with technology, whether direct use of generative AI or indirect use of AI-powered grammar tools and translation assistants, can lead to inadvertent sharing of sensitive information. This includes strategic plans, financial data, customer information, and intellectual property.

Check Point Research's recent AI Security Report found that more than half (51%) of enterprise networks use AI services. OpenAI's ChatGPT is the most widely used service, present in 37% of networks, followed by Microsoft's Copilot at 27%. Other popular tools include writing assistants like Grammarly and translation services like DeepL. With this widespread adoption, the potential for data leakage is substantial. In fact, one in 80 prompts (1.25%) sent from enterprise devices to GenAI services posed a high risk of sensitive data leakage, while 7.5% of prompts (one in 13) contained potentially sensitive information. As the use of AI tools increases, these numbers are likely to rise.
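One common mitigation for the leakage risk described above is a pre-submission check that scans prompts before they leave the enterprise network. The sketch below is a minimal, hypothetical illustration of that idea, not any vendor's actual tooling: the pattern names, regexes, and blocking rule are all assumptions chosen for clarity, and a real data-loss-prevention policy would be far more comprehensive.

```python
import re

# Illustrative patterns for common sensitive-data shapes; a real DLP
# policy would cover many more categories and use tuned detectors.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
    "confidential_marker": re.compile(
        r"\b(confidential|internal only|do not distribute)\b", re.IGNORECASE
    ),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def is_high_risk(prompt: str) -> bool:
    """Block the prompt if any sensitive pattern matches."""
    return bool(scan_prompt(prompt))
```

In practice a check like this would sit in a proxy or browser extension between users and GenAI services, logging or blocking flagged prompts rather than silently dropping them.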

Fighting fire with fire: AI in defence

Despite the risks, AI is also a powerful ally in cyber security. It is improving threat detection, simplifying complex systems, and accelerating incident response by automating tasks and enhancing analytical capabilities. For instance, LLMs can automate data collection, testing, and preliminary analysis, freeing human experts to focus on adversary profiling and strategic decision-making.

This automation is particularly valuable in advanced threat hunting and malware analysis. In fact, the concept of fully automating malware analysis is now a reality. Last year, researchers successfully used LLMs to decompile malware code, identifying malicious behaviour with impressive accuracy even when traditional detection methods fail.
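In pipelines of this kind, decompiled pseudocode is typically wrapped in a classification prompt and sent to a model, with the verdict parsed from the response. The sketch below is a hypothetical illustration, not the cited researchers' method: the prompt wording and verdict format are assumptions, and the model call itself is left as a placeholder so any LLM client can be swapped in.

```python
def build_triage_prompt(decompiled_code: str) -> str:
    """Construct a classification prompt around decompiled pseudocode.

    The instruction text is illustrative; real research pipelines tune
    prompts carefully and often add few-shot examples.
    """
    return (
        "You are a malware analyst. Review the decompiled function below "
        "and answer with VERDICT: MALICIOUS or VERDICT: BENIGN, followed "
        "by a one-sentence justification.\n\n"
        f"```\n{decompiled_code.strip()}\n```"
    )

def parse_verdict(response: str) -> str:
    """Extract the verdict token from a model response."""
    for line in response.splitlines():
        if line.startswith("VERDICT:"):
            return line.split(":", 1)[1].strip()
    return "UNKNOWN"

# The model call is a placeholder -- wire in an actual LLM client here:
# def call_llm(prompt: str) -> str: ...
```

The value of this pattern is that the deterministic steps (prompt construction, verdict parsing) can be tested independently of the model, while the LLM handles the behavioural judgement that traditional signature-based detection misses.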

The double-edged sword of AI in cyber security

AI is transforming the cyber security landscape, both in how we defend against attacks and how attacks are carried out, creating a new and evolving environment for organizations.

On one hand, the rise of AI introduces serious new risks that must be addressed head-on. Cyber criminals are leveraging AI to optimize their attacks and scale their operations, presenting unprecedented challenges for individuals and businesses alike. On the other hand, AI is transforming threat detection, empowering human cyber security teams to stay ahead of emerging threats.

In this new age of AI, we must learn to navigate its dual nature: both a formidable threat and an indispensable defence. By embracing AI's potential while proactively addressing its risks, organizations can strike the balance needed to thrive in an era where innovation and security must go hand in hand.
