
AI in cybersecurity: how artificial intelligence protects and attacks the digital world


29.06.2025

Cybersecurity has changed dramatically with the emergence of new artificial intelligence (AI) technologies. AI-based monitoring systems have significantly improved threat detection and response, since modern algorithms can process data in real time. At the same time, the latest AI systems are being actively used by cybercriminals to carry out attacks. So let's look at both sides of how AI is transforming security and threats in the digital environment.

The role of AI in the security of the digital environment

First, let's explore how AI can protect data in the virtual world. AI-based intrusion detection systems can often detect and neutralise threats before they cause harm.

1. Detecting and preventing cyber threats.

Artificial intelligence in cybersecurity works roughly like an attentive security guard in an office (a code sketch of the core idea follows the list):

  • It studies what is «normal». AI analyses habitual behaviour on the system: who goes where, which files are opened, from which devices, and at what times.
  • It looks for strange activity. When a process deviates sharply from the norm (for example, someone tries to copy all files at night), AI treats this as a signal of a possible threat.
  • It uses examples from the past. AI trains on real-life examples of viruses, hacker attacks and malware, remembers what they look like, and can recognise similar ones in the future.
  • It responds quickly. When AI detects something suspicious in real time, it automatically alerts experts or even blocks the action (e.g. stops a suspicious programme).
  • It constantly learns. The more information AI receives, the better it distinguishes normal behaviour from threats. AI models track new threats and improve their algorithms by learning from them, adapting to attackers' new tactics.
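
To make the anomaly-detection idea concrete, here is a minimal sketch using scikit-learn's IsolationForest. The features (login hour, megabytes copied, failed logins) and the simulated data are invented for illustration; a real system would train on far richer telemetry.

```python
# A minimal sketch of behavioural anomaly detection, assuming scikit-learn.
# The feature set is hypothetical: login hour, MB copied, failed logins.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" behaviour: daytime logins, modest file transfers.
normal = np.column_stack([
    rng.normal(13, 2, 500),      # login hour (clustered around midday)
    rng.normal(50, 15, 500),     # MB copied per session
    rng.poisson(0.2, 500),       # failed login attempts
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# A suspicious event: 3 a.m. login, bulk copy, repeated login failures.
event = np.array([[3, 900, 6]])
if model.predict(event)[0] == -1:
    print("anomaly detected: alert analysts / block the action")
```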

For example, Darktrace uses AI to detect cyber threats in real time. Even as attack methods continuously evolve, its system can analyse tens of thousands of data points per second and flag suspicious activity. It responds to threats automatically by blocking suspicious traffic or restricting access to resources, which helps limit the damage.

2. Defence against phishing attacks.

One of the most common forms of cyberattack is phishing, which aims to steal personal information, passwords or financial data. AI can automatically detect phishing emails from their content, metadata and similarity to previously seen emails. AI systems not only detect template phishing emails but also recognise more sophisticated deception thanks to the patterns they have learned during training.
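
As an illustration of content-based detection, here is a toy sketch that trains a TF-IDF text classifier with scikit-learn. The example emails and labels are invented; production systems train on large labelled corpora and also weigh metadata such as sender reputation and embedded links.

```python
# A toy sketch of content-based phishing detection, assuming scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is locked, verify your password immediately",
    "Urgent: confirm your bank card number to avoid suspension",
    "Meeting notes from Tuesday attached, see you Thursday",
    "Quarterly report draft for your review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(emails, labels)

test = ["Please verify your password now to keep your account"]
print(clf.predict_proba(test))  # [P(legitimate), P(phishing)]
```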

Google Safe Browsing, for example, is a Google service that uses AI to identify phishing sites and other dangerous web resources. It continuously monitors new threats and refines its defence algorithms, so when a user tries to visit a suspicious site, the system warns them of the potential danger.
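
For readers who want to experiment, the sketch below queries the Safe Browsing v4 Lookup API with the requests library. The API key and the tested URL are placeholders, and the exact payload should be verified against Google's current documentation.

```python
# A sketch of a Google Safe Browsing v4 Lookup API query via requests.
# API_KEY is a placeholder; usage is subject to Google's quotas and terms.
import requests

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder
ENDPOINT = f"https://safebrowsing.googleapis.com/v4/threatMatches:find?key={API_KEY}"

payload = {
    "client": {"clientId": "example-app", "clientVersion": "1.0"},
    "threatInfo": {
        "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
        "platformTypes": ["ANY_PLATFORM"],
        "threatEntryTypes": ["URL"],
        "threatEntries": [{"url": "http://example.com/suspicious"}],
    },
}

resp = requests.post(ENDPOINT, json=payload, timeout=10)
# An empty response body ({}) means no match; a "matches" field flags the URL.
print(resp.json().get("matches", "no threats found"))
```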

3. Proactive monitoring and response.

AI systems can also act as “digital patrols”, constantly monitoring the network and identifying potential vulnerabilities or unauthorised access attempts. AI can detect security weaknesses, such as misconfigured access rights or outdated software, in a timely manner and recommend or automatically apply fixes.
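
A drastically simplified sketch of this kind of patrol is shown below: it compares an inventory of installed software against a list of known-vulnerable versions. Both dictionaries are invented examples; a real system would pull from an asset inventory and a vulnerability feed such as the NVD.

```python
# A minimal sketch of the "digital patrol" idea: compare installed software
# against known-vulnerable versions. All version data here is invented.
installed = {"openssl": "1.1.1k", "nginx": "1.18.0", "openssh": "9.6"}
vulnerable = {"openssl": {"1.1.1k"}, "nginx": {"1.16.1", "1.18.0"}}

for package, version in installed.items():
    if version in vulnerable.get(package, set()):
        print(f"[ALERT] {package} {version} has known vulnerabilities "
              f"- schedule an update")
```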

Vectra AI, which develops AI-based systems for cyber defence, uses deep learning to proactively detect cyberattacks in real time. It helps organisations identify even highly sophisticated attacks, such as insider threats or campaigns designed to evade conventional defences.

AI's limitations in cybersecurity

Despite AI's broad security capabilities in the digital world, it cannot prevent all types of cyberattacks. In particular, modern threats it struggles with include:

  • Social engineering attacks. AI can help detect phishing emails, but is not always able to recognise well-disguised deceptive schemes, especially those that manipulate human psychology (e.g. when an attacker impersonates a company executive).
  • Unknown (zero-day) vulnerabilities. AI can help detect anomalies, but it is not always an effective tool for predicting cyberattacks or stopping them when attackers exploit very new, unexplored vulnerabilities.
  • Inherent limitations. AI systems themselves may contain vulnerabilities, be misconfigured, or be limited in their ability to analyse encrypted or non-standard traffic.
  • Malicious use of AI. Unfortunately, AI is also used by hackers to bypass defences, for example by creating adaptive malware that changes its behaviour to avoid detection. This is discussed in more detail below.

Ways cybercriminals are using AI

As we have already mentioned, AI is used not only for defence but also for malicious purposes. Modern AI-powered online attacks are more sophisticated, adaptive and large-scale. They include:

1. Personalised phishing campaigns.

Phishing in the virtual world is a type of online fraud that aims to trick users out of confidential information (passwords, bank card numbers, personal data). To do this, attackers pretend to be a trustworthy organisation or person using deceptive emails, SMS messages or fake websites.

With the advent of AI, it is now possible to create hyper-personalised messages: attackers analyse a potential victim's social media profiles, corporate correspondence and communication style, psychological traits and vulnerabilities, and current life or work events. Based on this information, they generate messages that are almost impossible to distinguish from genuine emails from friends, colleagues or partners.

2. Infrastructure attacks via AI.

Attackers have learnt to use AI against network- and cloud-based infrastructure, for example in massive distributed denial-of-service (DDoS) attacks, where the attacking systems adapt to the defence mechanisms used on the target servers, or in narrowly targeted DoS attacks, which can likewise make a server inaccessible to users.

For example, the hacker group known as APT28 or Fancy Bear was active during the Russian-Ukrainian war and was also involved in the attacks on Hillary Clinton's 2016 presidential campaign. According to Microsoft's investigation, the criminals used AI to launch sophisticated cyberattacks against government agencies and media organisations, in particular to bypass intrusion detection systems and to create constantly changing, adaptive attack strategies.

3. Deepfake attacks through social engineering.

AI is capable of analysing data from social networks, learning a person's communication style and adapting its messages to make them look extremely believable. This makes it possible to create fake images, video messages or audio calls that are hard to distinguish from real ones. Such deepfakes are aimed at manipulating people to gain access to sensitive information, resources, or spread disinformation.

Attacks of this type can confuse company employees or even compromise government officials. In 2024, criminals persuaded a bank employee to transfer $35 million to their account through a simulated video call with management.

4. Attacks on machine learning systems.

As AI becomes deeply embedded in business processes, attacks that directly target machine learning algorithms are emerging, such as:

  • Data poisoning attacks. These are carried out at the model-training stage by covertly introducing malicious data into the process. The result is a distorted AI model and potentially severe losses.

  • Evasion attacks. The attacker intentionally makes small modifications to the input data so that it still looks normal but deceives the AI model or bypasses the defence.

  • Model extraction attacks. The goal is to reproduce or steal an AI model by interacting with it (e.g. through an API). The attacker has no direct access to the model but tries to reconstruct its parameters, structure or behaviour by sending a large number of queries and analysing the responses.

  • Adversarial attacks. These target an already trained AI model for the purpose of temporary deception (for example, to bypass moderation): small perturbations of the input data cause the model to draw erroneous conclusions. A minimal sketch of this idea follows the list.
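
To illustrate the evasion/adversarial idea without touching any real system, here is a minimal FGSM-style sketch in pure NumPy against a toy logistic-regression «detector». The weights and the input sample are invented; the point is only that a targeted nudge to the input flips the model's verdict.

```python
# A minimal FGSM-style evasion sketch against a toy logistic-regression
# "detector", in pure NumPy. All weights and inputs are invented.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([2.0, -1.5, 0.5])   # toy trained weights
b = -0.2
x = np.array([0.8, 0.3, 0.6])    # a "malicious" sample the model catches

p = sigmoid(w @ x + b)
print(f"before: P(malicious) = {p:.3f}")   # ~0.777, flagged

# FGSM: move the input along the sign of the loss gradient for the true
# label y = 1, i.e. in the direction that lowers the predicted probability.
grad_x = (p - 1.0) * w           # d(cross-entropy loss)/dx for y = 1
x_adv = x + 0.5 * np.sign(grad_x)

print(f"after:  P(malicious) = {sigmoid(w @ x_adv + b):.3f}")  # ~0.321
```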

5. Scaled automated threats.

One of the biggest advantages AI gives hackers is the automation and scalability of attacks. Instead of manually hunting for vulnerabilities or hand-crafting malware, attackers can use AI to automatically generate new viruses that adapt to defences, improving their capabilities with each attack.

For example, AI makes it possible to create botnets: networks of computers infected with malware that self-propagate through social networks, mimic the behaviour of real users and coordinate attacks through decentralised networks.

6. Autonomous cyberattacks with self-learning elements.

In this type of attack, AI can independently learn the behaviour of its targets and change tactics based on the information obtained. This allows attackers to create autonomous attack systems that can:

  • independently scan networks for vulnerabilities;
  • learn from mistakes and improve their effectiveness;
  • adapt their attack strategy in real time;
  • coordinate actions between different components of the attack.

Such systems can function for weeks without human intervention, constantly improving their methods.

As you can see, artificial intelligence in cybersecurity can be used as a tool not only for data protection, but also for malicious actions. On the one hand, it enables the creation of extremely effective cybersecurity systems that can quickly detect and neutralise threats. On the other, it can help attackers improve the effectiveness of their attacks by making them increasingly complex and harder to detect.
