Organizations find new ways to relieve some of the burden on security teams while shoring up defenses.
Artificial intelligence and automation adoption rates are rising, and investment plans are high on enterprise radars. AI is in pilots or use at 41% of companies, with another 42% actively researching it, according to the 2019 IDG Digital Business Study.
Cybersecurity has emerged as an ideal use case for these technologies. Digital business has opened a score of new risks and vulnerabilities that, combined with a security skills gap, are weighing down security teams. As a result, more organizations are looking at AI and machine learning as a way to relieve some of the burden on security teams by sifting through high volumes of security data and automating routine tasks.
“We have a lot of repetitive tasks – we can build the right framework so those controls happen automatically to a point where we need a human looking at it,” Ken Foster, head of global cyber risk governance at Fiserv, said on the new CSO Executive Sessions podcast. “So, I can repurpose my smart people who I want making the decisions that I’m not comfortable AI making. If I can get that designed well enough to pull some workload off of them, we’ll start moving the needle faster.”
We asked security leaders and practitioners to describe how AI and automation technologies will come into play this year. Here’s what they had to say.
An ounce of detection…
“2020 needs to be the year where AI in cybersecurity moves beyond the hype and becomes common practice,” says Tim Wulgaert (@timwulgaert), owner and lead consultant, FJAM Consulting.
IT and security leaders suggest that detection and identification of potential threats make ideal initial use cases for AI/automation.
“The volume of data being generated is perhaps the largest challenge in cybersecurity,” says David Mytton (@davidmytton), CTO and expert in residence, Seedcamp. “As more and more systems become instrumented — who has logged in and when, what was downloaded and when, what was accessed and when — the problem shifts from knowing that ‘something’ has happened, to highlighting that ‘something unusual’ has happened.”
That “something unusual” might be irregular user or system behaviors, or simply false alarms.
“The hope is that these systems will minimize false alarms and insubstantial issues (e.g., port scanning or pings), leaving a much smaller set of ‘real’ threats to review and address,” says Michael Overly (@mrolaw), partner, Foley & Lardner LLP.
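One way to operationalize "something unusual" is a per-user statistical baseline over login behavior. The sketch below is purely illustrative (the function and data names are hypothetical, not any vendor's API): it flags login hours that deviate sharply from a user's historical pattern.

```python
from statistics import mean, stdev

def flag_unusual_logins(history, new_events, threshold=2.0):
    """Flag login hours that deviate strongly from a user's baseline.

    history: dict mapping user -> list of past login hours (0-23)
    new_events: list of (user, hour) tuples to score
    Returns the subset of new_events considered anomalous.
    """
    anomalies = []
    for user, hour in new_events:
        past = history.get(user, [])
        if len(past) < 2:
            continue  # not enough data to build a baseline
        mu, sigma = mean(past), stdev(past)
        if sigma == 0:
            sigma = 0.5  # avoid division by zero for perfectly regular users
        if abs(hour - mu) / sigma > threshold:
            anomalies.append((user, hour))
    return anomalies

history = {"alice": [9, 9, 10, 9, 8, 10, 9], "bob": [14, 15, 13, 14]}
events = [("alice", 9), ("alice", 3), ("bob", 14)]
print(flag_unusual_logins(history, events))  # [('alice', 3)]
```

A 3 a.m. login from a 9-to-5 user stands out; the two in-pattern logins generate no alert, which is exactly the false-alarm reduction described above. Production systems use far richer features, but the baseline-and-deviation idea is the same.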
The ultimate goal is to find those unusual incidents fast.
“The effectiveness of AI solutions this year can be measured via the time-to-discovery metric, which measures how long it takes an organization to detect a breach,” says Kayne McGladrey (@kaynemcgladrey), CISO, Pensar Development. “Reducing time to discovery can be achieved through AI’s tenacity, which doesn’t need holidays, coffee breaks, or sleep, which is unlike Tier 1 security operations center analysts who also get bored reading endless log files and alerts.”
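Time to discovery is also straightforward to track. A minimal sketch (the timestamps are hypothetical) that averages the gap between when a breach began and when it was detected:

```python
from datetime import datetime

def mean_time_to_discovery(incidents, fmt="%Y-%m-%d %H:%M"):
    """Average hours between breach start and detection across incidents.

    incidents: list of (breach_started, breach_detected) timestamp strings.
    """
    deltas = [
        (datetime.strptime(found, fmt) - datetime.strptime(started, fmt)).total_seconds() / 3600
        for started, found in incidents
    ]
    return sum(deltas) / len(deltas)

incidents = [
    ("2020-01-03 08:00", "2020-01-03 20:00"),  # found in 12 hours
    ("2020-02-10 09:30", "2020-02-12 09:30"),  # found in 48 hours
]
print(mean_time_to_discovery(incidents))  # 30.0
```

Tracking this number quarter over quarter is one concrete way to test whether an AI deployment is actually shrinking the detection window.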
That said, differentiating the usual from the unusual will require correlating technologies around identity and user access.
“Automation will have a huge impact on user access in the coming year,” says Jason Wankovsky (@gomindsight), CTO and VP of consulting services, Mindsight. “Multifactor authentication will certainly be a growth sector. In addition, artificial intelligence will assist system and network administrators in monitoring technology environments.”
Wulgaert agrees: “Behavioral analysis of access patterns and user logs will help to identify potential security events, but can also play a big role in supporting and optimizing multifactor authentication, by adding behavior into the mix of factors. AI will become a core functionality of IAM tooling.”
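Adding behavior "into the mix of factors" often takes the form of a risk score that triggers step-up authentication. A minimal sketch, with entirely hypothetical signals and weights:

```python
def risk_score(event, profile):
    """Combine simple behavioral signals into a rough 0-1 risk score."""
    score = 0.0
    if event["device_id"] not in profile["known_devices"]:
        score += 0.4  # unfamiliar device
    if event["country"] != profile["usual_country"]:
        score += 0.4  # unusual location
    if event["hour"] not in profile["usual_hours"]:
        score += 0.2  # outside normal working hours
    return score

def requires_step_up(event, profile, threshold=0.5):
    """Demand an extra authentication factor when behavior looks risky."""
    return risk_score(event, profile) >= threshold

profile = {"known_devices": {"laptop-1"}, "usual_country": "US",
           "usual_hours": set(range(8, 19))}
print(requires_step_up({"device_id": "laptop-1", "country": "US", "hour": 10}, profile))  # False
print(requires_step_up({"device_id": "phone-9", "country": "RO", "hour": 3}, profile))   # True
```

The design choice here is friction-on-demand: a familiar login sails through, while an anomalous one is challenged rather than blocked outright. Real IAM products learn these weights from data instead of hard-coding them.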
Behavioral analysis will also help defend against common attacks such as malware.
“Malware attacks are only going to get worse this year,” says technology writer Will Kelly (@willkelly). “Because AI-based anti-virus solutions focus on actions, not signatures, they can home in on the unusual behaviors that are the calling cards of malware and zero-day exploits to help mitigate such attacks.”
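The action-over-signature distinction can be illustrated with a toy verdict function. This is a deliberately simplified sketch (the action names are made up), but it shows why a never-before-seen binary can still be caught by what it does:

```python
# Toy behavior-based detector: score observed process actions, not file signatures.
SUSPICIOUS_ACTIONS = {
    "encrypt_many_files",   # mass file encryption (ransomware-like)
    "disable_backups",      # tampering with recovery mechanisms
    "contact_unknown_c2",   # beaconing to an unrecognized host
}

def behavior_verdict(observed_actions, threshold=2):
    """Return ('suspicious', hits) once enough suspicious actions co-occur."""
    hits = sorted(SUSPICIOUS_ACTIONS & set(observed_actions))
    label = "suspicious" if len(hits) >= threshold else "benign"
    return label, hits

# A zero-day whose hash has never been seen is still flagged by its behavior:
print(behavior_verdict(["read_config", "encrypt_many_files", "disable_backups"]))
# → ('suspicious', ['disable_backups', 'encrypt_many_files'])
```

A signature-based engine would miss this sample entirely until its hash appeared in a definition update; the behavioral check needs no prior knowledge of the file.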
Human, machine, or both?
While AI and automation will play a critical role in relieving overburdened IT security teams, organizations will still require highly skilled individuals to perform high-level analysis and remediation activities – not to mention the training required for machine learning to be effective.
“We need AI/automation, but we also need humans to teach it and leverage it,” advises Omo Osagiede (@Borderless_i), director, Borderless-I Consulting Ltd.
Furthermore, the tools must be augmented by human intelligence to make correlations and decisions based on the systems’ output.
“Although automation and machine learning will improve efficiency, human expertise, logical thinking, and creativity will be further valued to deploy and effectively use new technology, as well as deter against emerging threats,” says Caroline Wong (@CarolineWMWong), chief strategy officer, Cobalt.io.
What’s lurking and what’s ahead
There’s a legitimate worry about AI and automation: that “threat actors will also seize automated technology to conduct more widespread, pervasive attacks,” says Wong.
Hackers are already experimenting with these technologies to break through organizational defenses.
“Artificial intelligence is a cybersecurity double-edged sword,” says Robert Siciliano (@RobertSiciliano), chief security architect, ProtectNow. “AI can learn, adapt and hack faster and smarter than current conventional penetration tools or vulnerability scans.”
“Always count on thieves to use any means at their disposal to bypass security controls within an organization. This includes AI, which can aid criminals in analyzing cyber defense mechanisms and behavioral patterns of employees in order to circumvent security,” says Scott Schober (@ScottBVS), CEO, author, cyber expert. “Adversarial machine learning will be used by criminals to fool defensive AI by flooding training models with malicious data.”
“AI requires training, including massive amounts of data and simulated attacks. It cannot defend against real threats until it can identify them with some degree of precision,” Schober adds. “Only after AI can successfully identify real threats can it begin to effectively defend networks from both human and AI attacks. This approach does not feel very proactive, but it is necessary to win the long game of cyber defense.”
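The poisoning attack Schober describes can be demonstrated with a toy detector. The sketch below (hypothetical scores and a deliberately naive "train on class means" model) shows how slipping malicious-looking samples into the benign training set drags the decision boundary upward until real malware slides underneath it:

```python
from statistics import mean

def train_threshold(benign_scores, malicious_scores):
    """Toy detector: decision threshold midway between the two class means."""
    return (mean(benign_scores) + mean(malicious_scores)) / 2

benign = [0.1, 0.2, 0.15, 0.1]
malicious = [0.8, 0.9, 0.85]

clean_t = train_threshold(benign, malicious)          # ~0.49

# Attacker floods the "benign" training data with malicious-looking samples
poisoned_benign = benign + [0.7] * 20
poisoned_t = train_threshold(poisoned_benign, malicious)  # ~0.73

# A real attack scoring 0.7 was caught before poisoning, and is missed after:
print(0.7 > clean_t, 0.7 > poisoned_t)  # True False
```

Real models are harder to shift than a two-mean threshold, but the principle scales: whoever controls enough of the training data controls where the boundary lands, which is why training pipelines themselves need integrity controls.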
As AI and automation gain traction, expect to see advances that play to that long game of defense.
“Ongoing developments in machine learning and natural language processing will improve our ability to analyse threat actor behaviour within the context of intent, opportunity and capability,” says Osagiede. “However, to really leverage AI-driven improvements in the quality of threat intelligence, automation must also improve the orchestration (or acceleration) of aspects of incident response, freeing up human security analysts to focus on more strategic defence measures.”
Wulgaert adds: “We can expect some major steps forward in data protection. AI can help data protection solutions to support, correct, or even prevent end-user behavior and as such prevent data leakage, unauthorized access, etc. Last but not least, AI will continue to make its mark in threat analysis, and I guess become a minimum requirement for any good cybersecurity threat detection solution.”
This article originally appeared on CIO.