AI and cybersecurity: marketing pitch or true progress?
Tech culture
The measured contribution of artificial intelligence
For now, artificial intelligence’s contribution to cybersecurity solutions is largely limited to detecting advanced threats. AI cannot yet automate incident response, and the reliability of algorithmic models remains another obstacle to its widespread use.
The growing use of AI
“AI inside”. Today, almost all suppliers (software publishers, equipment manufacturers, integrators) claim to include artificial intelligence technology in their software or digital services.
The world of cybersecurity is no exception. A growing number of specialists are touting the benefits of machine learning and deep learning, and many publishers, particularly Anglo-Saxon ones, have made it their main differentiator.
The arguments put forward by the proponents of AI are convincing:
● Given the exponential increase in the volume of data to be analyzed (logs, IP addresses and so on), manual processing is no longer feasible.
● AI-powered solutions will take over from conventional protection systems that are overwhelmed by the volume, variability, speed and complexity of cyberattacks.
The current threat landscape, they argue, calls for a high degree of automation to detect suspicious events in near real time.
The statistical approach to AI: bridging the skills gap
The statistical approach of AI breaks with the traditional logic of conventional cybersecurity, in which a virus, piece of malware or ransomware must first be known so that a signature can be created to block it.
Thanks to AI, a threat no longer needs to be known in order to be countered, nor is there any need to multiply patches and updates. The model analyzes a file from every angle and assigns it a trust score; below a given threshold, the file is quarantined. This statistical approach is claimed to better anticipate new forms of threat and the successive evolutions of malicious code, and even to block “zero-day” attacks.
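A minimal sketch of how such threshold-based scoring might look, assuming hand-picked static file features and an off-the-shelf classifier; the feature names, model choice and threshold are invented for illustration and are not taken from any vendor’s product:

```python
# Illustrative sketch of trust scoring with a quarantine threshold.
# Features, model and threshold are assumptions made for the example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical static features extracted from files:
# [entropy, number_of_imports, is_packed, size_kb]
X_train = np.array([
    [3.1, 120, 0, 450],   # known-good samples
    [3.5,  80, 0, 300],
    [7.8,   5, 1,  90],   # known-malicious samples
    [7.2,  12, 1, 150],
])
y_train = np.array([1, 1, 0, 0])  # 1 = trusted, 0 = malicious

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

TRUST_THRESHOLD = 0.6  # arbitrary cut-off chosen for the example

def triage(file_features):
    """Assign a trust score; quarantine the file if it falls below the threshold."""
    trust_score = model.predict_proba([file_features])[0][1]
    return ("quarantine" if trust_score < TRUST_THRESHOLD else "allow", trust_score)

print(triage([7.5, 8, 1, 110]))   # high entropy, packed -> likely quarantined
print(triage([3.2, 95, 0, 400]))  # looks like a normal executable -> likely allowed
```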
Furthermore, the use of AI would make up for the structural deficit in cyber skills by relieving experts of repetitive, time-consuming and non-value-adding tasks so that they can concentrate on the things that really deserve their attention.
According to a recent Fortinet report, there’s a shortage of some 2.72 million cybersecurity experts worldwide.
AI to keep pace with cybercriminals
Today, companies must also arm themselves with the same weapons as the cybercriminals, who are always one step ahead. Hackers already make extensive use of AI to identify system vulnerabilities, crack passwords, solve CAPTCHAs, personalize phishing campaigns and create deepfakes.
The highly popular ChatGPT conversational agent has even been hijacked to write malicious code.
But beware of buzzwords: there’s often a wide gap between marketing rhetoric and reality.
AI is still a decision-support tool
To date, AI has proved its worth mainly in detecting suspicious behavior and weak signals in the ocean of data to be analyzed.
Self-learning, the system protects against cyberthreats by drawing on a history of behavioral patterns. Any deviation from those habitual patterns is analyzed and isolated.
For example, if an employee logs on from abroad at an unusual time, using an unregistered device, the system passes the alert on to an analyst, who decides whether to dismiss it or investigate further and determine whether corrective action is needed.
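As an illustration, the sketch below flags such a login as anomalous against a history of habitual behavior; the features (hour, country code, known device) and the choice of an isolation forest are assumptions made for the example, not a description of any particular product.

```python
# Illustrative sketch of behavioral anomaly detection on login events.
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical logins for one employee: [hour_of_day, country_code, known_device]
history = np.array([
    [9, 33, 1], [10, 33, 1], [8, 33, 1], [18, 33, 1],
    [9, 33, 1], [11, 33, 1], [17, 33, 1], [9, 33, 1],
])

detector = IsolationForest(contamination=0.05, random_state=0).fit(history)

# New event: 3 a.m. login from an unusual country on an unregistered device
new_login = np.array([[3, 7, 0]])

if detector.predict(new_login)[0] == -1:   # -1 means "anomalous"
    print("Alert raised - forwarding to an analyst for review")
else:
    print("Login consistent with habitual behavior")
```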
AI can also help analysts by contextualizing the suspicious event, cross-referencing different sources of information, but its role currently stops there.
We’re still a long way from an omniscient AI capable of responding to incidents and carrying out ad hoc remediation actions on its own.
False positives and the “black box” effect
While the rise of AI in protection systems follows the course of history, it will be gradual. For the time being, its widespread adoption is held back by the limited trust placed in it.
A self-learning system inevitably produces false positives. According to Dane Sherret, solution architect at HackerOne, “AI-generated vulnerability reports reveal many false positives which add further friction and represent extra work for security teams.”
In other words, the opposite of the intended outcome. Once again, this finding rules out the idea of stand-alone AI: organizations can’t afford to have their operations paralyzed by false positives.
AI also raises the question of the transparency of so-called “black box” deep-learning algorithms: by what mechanism does a model arrive at a given result from a given input? The model must not only be explainable, it must also be robust. Otherwise, hackers can deliberately corrupt it by injecting toxic data (data poisoning), turning the self-learning system into a sieve.
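A minimal sketch of what such poisoning can do to a self-learning detector, assuming synthetic traffic features and a simple linear model; the data, numbers and scenario are invented for illustration only:

```python
# Illustrative sketch of a data-poisoning attack on a self-learning detector.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic traffic features: benign clustered around 0, malicious around 3
X_train = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(3, 1, (200, 5))])
y_train = np.array([0] * 200 + [1] * 200)
X_test = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(3, 1, (100, 5))])
y_test = np.array([0] * 100 + [1] * 100)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker floods the training pipeline with samples that resemble the
# coming attack but carry a "benign" label (toxic data).
X_poison = rng.normal(3, 0.3, (300, 5))
y_poison = np.zeros(300, dtype=int)
poisoned_model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_train, X_poison]), np.concatenate([y_train, y_poison])
)

print("clean model accuracy   :", accuracy_score(y_test, clean_model.predict(X_test)))
print("poisoned model accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
# In this setup the poisoned model tends to wave malicious traffic through as
# benign, which is why robustness matters as much as explainability.
```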
An AI-human tandem
For Dane Sherret, “the evolution of the technological landscape will always require a human element”, and “AI will never be able to match or surpass the expertise of a global community of hackers”.
He also points out that AI requires human supervision and manual configuration to work properly, and that it must be regularly retrained on recent data, given the constant evolution of threats and the exploitation of new attack vectors.
Dane Sherret believes that “data that’s only a year old can make stand-alone solutions much less effective, and over-reliance on analytics tools is already making many companies vulnerable”.
So while AI won’t replace cybersecurity experts, it can augment their capabilities. “Today, we see ethical hackers using AI to help them write vulnerability reports, generate code samples, as well as identify trends in large datasets.” In short, AI in the service of humans.