Cybercriminals are innovative; they have to be to get past modern security technologies. One area of innovation in cybercrime is the use of artificial intelligence (AI), with cybercriminals turning to AI to create new ways of launching sophisticated cyber-attacks. A 2022 research paper on AI-driven cyberattacks found that 56% of attacks used AI during the access and penetration phase. The paper concludes that “organizations must invest in AI cybersecurity infrastructures to combat these emerging threats.”

AI is becoming an integrated part of the world’s technology stack, so knowing how it can be used and misused is essential. Here’s a look at some of the latest AI-enabled cyberattacks and what can be done to stop them.

AI in the Machine

AI-enabled cyber threats perpetuate the human-centric nature of successful cyber-attacks. Data from Statista shows how cybercriminals are using AI to launch sophisticated attacks. Notably, as with conventional cyber-attacks, the focus is on human-centered attacks, i.e., impersonation and spear-phishing attacks (68%).

However, other core areas of AI-enabled cyber threat development show that cybercriminals are using the technology to extend their toolkits so that exploitation continues once a spear-phishing attack succeeds. For example, 57% of scenarios involve using AI to create more effective ransomware.

Ransomware is already a severe problem: the 2023 Verizon Data Breach Investigations Report (DBIR) points out that ransomware is present in 62% of all incidents committed by organized cybercriminals. The prospect of AI-powered ransomware is even more concerning and far harder to counteract. Predictions about the use of AI in ransomware attacks expect cybercriminals to automate attack patterns, allowing them to move from a narrowly focused target to a much broader range of targets with little extra effort on the part of the hacker. In a recent interview on the likelihood of AI-enabled ransomware attacks, security expert Mikko Hyppönen describes how hacking groups will employ AI talent to automate the processes behind ransomware delivery. Processes such as phishing and code development, currently manual, will be handled by machines.

A worrying outcome is that AI will underpin entire attack chains, with highly effective phishing campaigns opening a network so that powerful ransomware can be delivered.

Some Examples of How Cybercriminals are using AI 

AI has many applications in the murky world of the cybercriminal, but a few examples of its actual or potential use offer insight into the breadth of those applications:

Generative AI-enabled Malware: WormGPT is a generative AI tool marketed as a malicious counterpart to the infamous ChatGPT. WormGPT is available on dark web forums, offering cybercriminals a new way to create believable, human-sounding text for use in phishing campaigns. The tool is trained on a broad range of data and generates the content needed to run an effective phishing campaign. The developers of WormGPT boast of benefits such as writing a convincing email for use in a business email compromise (BEC) scam. The impact of this type of AI-enabled cyber-attack is that it removes the barriers to creating convincing phishing emails or texts and makes it possible to automate massive campaigns.

Automated personalization of phishing and other scams: Another aspect of AI-generated phishing content is that it can target individuals with personalized phishing, scams, and other malicious attacks. Personalization is what makes spear phishing and other targeted attacks so effective. Generative AI models, such as ChatGPT, can match tone and content to specific roles, such as a CEO, so producing convincing content that appears to come from a legitimate source is a likely use of the technology. AI is already being used to personalize and customize messages to improve customer relationships, so its use in malicious campaigns will no doubt follow.

Scams and other financial crimes: Deepfakes are an AI-enabled technology that creates fake video and voice. Deepfake technology is already widely available, with companies like Tencent Cloud offering Deepfakes-as-a-Service (DFaaS) for $145 per video. Hackers could use DFaaS to commit BEC scams, sextortion, and financial crimes that defeat Know Your Customer (KYC) checks. AI can also be used to generate believable but fake invoices and transaction records. In these scenarios, AI provides the automation and personalization needed to launch sophisticated cyber-attacks.


How can Organizations turn the Tables on AI-driven Cyber-attacks?

The same AI breakthroughs that cybercriminals are using to hack humans and computers also allow organizations to fight back. AI-enabled cybersecurity solutions help to prevent cyber-attacks by beating cybercriminals at their own game.

AI-enabled cybersecurity solutions use the same predictive models and automation that hackers exploit, but for defense. TitanHQ strengthens our cybersecurity solutions by integrating AI and automation. For example, our DNS filtering solution, WebTitan, provides malicious website detection and zero-hour protection. WebTitan maintains a real-time response to malicious content using a URL classification database augmented with AI-driven threat data from over 650 million end users. The use of AI makes cybersecurity tools responsive and adaptive.
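To make the idea of DNS-layer filtering concrete, here is a minimal, hypothetical sketch in Python, not WebTitan's actual implementation: each requested domain is looked up in a classification database, and domains in blocked categories are sinkholed instead of being resolved. The database contents, category names, and function names below are illustrative assumptions.

import socket

# Conceptual sketch of DNS-layer filtering (illustrative only).
# A hypothetical classification database maps domains to categories such as
# "malware" or "phishing"; anything in a blocked category is never resolved.

BLOCKED_CATEGORIES = {"malware", "phishing", "botnet"}  # hypothetical policy

# Hypothetical stand-in for an AI-augmented URL classification database.
URL_CLASSIFICATION_DB = {
    "known-bad.example": "phishing",
    "payload-host.example": "malware",
}

def classify(domain: str) -> str:
    """Return the category for a domain, defaulting to 'uncategorized'."""
    return URL_CLASSIFICATION_DB.get(domain.lower(), "uncategorized")

def resolve_if_allowed(domain: str) -> str:
    """Sinkhole domains in a blocked category; otherwise resolve normally."""
    if classify(domain) in BLOCKED_CATEGORIES:
        return "0.0.0.0"  # sinkhole address returned instead of the real record
    return socket.gethostbyname(domain)

if __name__ == "__main__":
    for d in ("known-bad.example", "example.com"):
        print(d, "->", resolve_if_allowed(d))

In a real deployment the classification lookup would be backed by a continuously updated threat database rather than a static dictionary, which is where AI-driven threat data adds value.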

Security awareness training is another essential layer of protection against AI-enabled and other sophisticated attacks. Employees trained to recognize phishing and social engineering attempts are less likely to become part of a security incident. Using behavior-driven security awareness training, an organization empowers its employees to handle even the most sophisticated human-centric threats. SafeTitan works with an organization to educate employees about all forms of phishing and measurably reduces security risk, with a 92% average improvement in staff security awareness and a substantial reduction in phishing susceptibility, from 30% to 2% within 12 months.

To see how TitanHQ can prepare your organization to stop AI-based cyber-attacks, check out our AI-enabled DNS filtering solution, WebTitan, and our behavior-driven security awareness training solution, SafeTitan.


Talk to our Team today
