
AI in cyber security: a necessity or too early to introduce?

The threats against organisations are growing in volume and success, but can AI in cyber security stop the rot and turn failure into success?

There is a growing list of cyber security threats, ranging from a rise in identity theft and account takeovers to vindictive ransomware strains. Businesses are feeling the strain, especially Fortune 500 enterprises, which hold massive stores of data. Because of this, they have become attractive targets for bad actors hoping to plunder that honeypot.

But all is not lost.

AI in cyber security, while not a silver bullet, can help improve an organisation’s overall cyber security posture, provided the security basics (firewalls, data encryption and so on) are in place first.

AI in cyber security

New malware is constantly being generated, making it incredibly difficult to recognise, let alone defend against. “AI is able to look at all these various strains of malware (some predict there are around 800 million different strains) and spot patterns: this new malware has code similar to strains X, Y and Z, for example. The technology is useful in future-proofing organisations against new malware,” confirms Labhesh Patel, CTO and chief scientist at Jumio.
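
As a rough illustration of that idea (not a description of Jumio’s system), one could score a new sample against known strains by comparing code n-grams. The strain names and byte dumps below are made up, and real engines use far richer features:

```python
# A minimal sketch of similarity-based malware triage: flag a new
# sample as suspicious when its code resembles known strains.
# Samples here are hypothetical hex dumps; real systems extract far
# richer features than raw byte n-grams.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_strains = {
    "strain_X": "55 8b ec 83 ec 40 e8 12 00 00 00 5d c3",  # hypothetical
    "strain_Y": "4d 5a 90 00 03 00 00 00 04 00 00 00 ff",
    "strain_Z": "55 8b ec 83 ec 44 e8 10 00 00 00 5d c3",
}
new_sample = "55 8b ec 83 ec 40 e8 13 00 00 00 5d c3"

# Character n-grams over the hex dump stand in for code features.
vec = TfidfVectorizer(analyzer="char", ngram_range=(4, 6))
matrix = vec.fit_transform(list(known_strains.values()) + [new_sample])

scores = cosine_similarity(matrix[-1], matrix[:-1])[0]
for name, score in zip(known_strains, scores):
    print(f"{name}: similarity {score:.2f}")
if scores.max() > 0.8:  # arbitrary threshold for the sketch
    print("New sample resembles a known strain: quarantine for analysis")
```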

AI is also excellent at detecting anomalies: identifying behaviour that does not match established patterns. It can quickly alert an organisation if a malicious strain has entered the network. This is a huge asset, because in the past malware could roam undetected for months, even years (as happened at Yahoo), harvesting data and generating significant revenue for the hackers.
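
A hedged sketch of that kind of anomaly detection, assuming per-host traffic features and scikit-learn’s IsolationForest; the features and numbers are invented for illustration:

```python
# Illustrative anomaly detection over per-host network-flow features
# (bytes out, connection count, distinct destinations per hour).
# The baseline data is synthetic; production systems use many more signals.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Stand-in for weeks of normal per-host traffic statistics.
normal = rng.normal(loc=[50_000, 120, 15], scale=[5_000, 20, 4], size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# A host suddenly exfiltrating data looks nothing like the baseline.
today = np.array([[480_000, 95, 210]])  # bytes out, connections, destinations
if model.predict(today)[0] == -1:
    print("Anomalous traffic profile: raise an alert for this host")
```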

Does it completely solve the problem? No.

Both the good guys and the bad guys are using AI. But what AI does, along with basic cyber hygiene, is help make sure organisations do not fall prey to traditional types of attack.

 

AI: mitigating the insider threat as well?

The insider threat, whether intentional or not, is the single biggest cause of organisational vulnerability; clicking on a phishing email is a classic example. Employees need to undergo extensive and frequent cyber security awareness training. AI can help here as well: it can look at patterns of internal computer usage across different data sources (individuals) within an enterprise. For example, if it’s 2am and an employee who is rarely active at that hour is logged into the network and downloading internal files, the AI can quickly recognise this as anomalous behaviour and take the appropriate steps.
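
A toy version of that 2am check might baseline each employee’s normal working hours and flag off-hours downloads; the users, hours and threshold below are all hypothetical:

```python
# A minimal sketch of per-employee usage baselining. A real deployment
# would learn each user's baseline from months of logs, not hard-code it.
from datetime import datetime

# Hours in which each user has historically been active (from past logs).
usual_hours = {"alice": set(range(8, 19)), "bob": set(range(7, 17))}

def is_anomalous(user: str, event_time: datetime, files_downloaded: int) -> bool:
    """Flag activity outside the user's normal hours with heavy downloads."""
    off_hours = event_time.hour not in usual_hours.get(user, set())
    return off_hours and files_downloaded > 10  # arbitrary threshold

event = datetime(2019, 6, 3, 2, 14)  # 02:14, well outside Alice's baseline
if is_anomalous("alice", event, files_downloaded=42):
    print("Unusual off-hours download activity: notify the security team")
```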

“AI systems are becoming as good as human experts,” claims Patel, except that they never sleep or take a holiday.

Implementing AI in cyber

AI shouldn’t be implemented for the sake of it. But when should it be applied? Patel thinks that if there is a human expert who can do a certain task, but it takes them a long time to achieve it, AI can help. “Humans are very good at recognising patterns and software is really good at following rules,” explains Patel. “You can teach a machine to behave like a human, and the more data it has, the better it gets at its job.”

Organisations want to embark on a cognitive journey, and sometimes they don’t care whether AI fits a particular use case. This is absolutely the wrong approach. To implement AI in cyber security (or in anything else), there has to be a use case and there has to be a strong data set, especially for supervised algorithms.

Access to data is a significant challenge in implementing AI in cyber. A lot of systems, especially in larger Fortune 500 companies, have multiple data silos. For AI to work, it needs access to those silos to train the algorithms with that data, while complying with regulations and maintaining strong ethics when handling particularly sensitive data.

“You cannot just take sensitive data and personal data from people and start training algorithms immediately, you have to have the right consents in place, which is something that a lot of companies just gloss over,” says Patel.
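
A hedged illustration of that point: before any training run, filter out records whose owners have not consented. The field names below are hypothetical assumptions, not from any system mentioned in this article:

```python
# Hypothetical pre-training consent gate: only records whose owners
# have consented to ML use ever reach the training set.
records = [
    {"user_id": 1, "features": [0.2, 0.7], "consent_ml": True},
    {"user_id": 2, "features": [0.9, 0.1], "consent_ml": False},  # excluded
    {"user_id": 3, "features": [0.4, 0.5], "consent_ml": True},
]

training_set = [r["features"] for r in records if r["consent_ml"]]
print(f"Training on {len(training_set)} of {len(records)} records")
```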

Why AI in cyber has yet to take off

Dr Leila Powell, lead security data scientist from Panaseer, agrees that “the key challenge for most security teams right now is getting hold of the data they need in order to get even a basic level of visibility on the fundamentals of how their security program is performing and how they measure up against regulatory frameworks like GDPR. This is not a trivial task!

“With access to security-relevant data controlled by multiple stakeholders, from IT to MSSPs and tool vendors, there can be a lot of red tape on top of the technical challenges of bringing together multiple siloed data sources. Then there’s data cleaning, standardisation, correlation and understanding, which often require a detailed knowledge of the idiosyncrasies of all the unique datasets.

“As it stands, once all that work has gone into data collection, the benefits of applying simple statistics should not be underestimated. These provide plenty of new insights for teams to work through; most won’t even have the resources to deal with all of these, let alone additional alerting from ML solutions. Until the general level of organisational maturity in the area of data-driven security increases, the applications of machine learning will likely be restricted to siloed use cases; we need to walk before we can run!”


Security: day 1

The fundamental nature of security has changed, and security needs to be built in from day one. Organisations need to start thinking about how to move security to the forefront of the software development lifecycle.

Gone are the days when organisations developed software first and brought security in later. Instead, businesses need the right skills in place so that every developer, before writing a single line of code, understands the security posture and treats security as a first-class citizen.

Micro-services

There are also infrastructure choices that make life very difficult for hackers. One architectural paradigm, micro-services, breaks an application into a bunch of modules, each doing something very simple. When you have very simple services constantly talking to each other, securing them becomes much easier: the services, by themselves, are not doing much, so they don’t have a large attack surface. In fact, the main thing organisations need to secure is the communication between the micro-services.
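
Mutual TLS is one common way to secure that service-to-service communication (the article doesn’t prescribe a mechanism, so this is an assumption). A minimal Python sketch, with hypothetical certificate paths issued by an internal CA:

```python
# A minimal sketch of mutual TLS (mTLS) for one micro-service, assuming
# certificate/key files issued by a shared internal CA (paths are
# hypothetical). Each service presents a certificate and verifies its
# peer's, so only trusted services can talk to each other.
import http.server
import ssl

server = http.server.HTTPServer(("0.0.0.0", 8443),
                                http.server.SimpleHTTPRequestHandler)

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="orders-svc.crt", keyfile="orders-svc.key")
ctx.load_verify_locations(cafile="internal-ca.crt")  # trust only the internal CA
ctx.verify_mode = ssl.CERT_REQUIRED  # reject clients without a valid cert

server.socket = ctx.wrap_socket(server.socket, server_side=True)
server.serve_forever()
```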

With continuous development and continuous deployment under this architectural paradigm, it’s really hard for hackers to target such a fast-moving piece of software.

But the problem persists, because even now a lot of enterprises run monolithic software: a gigantic software package that hasn’t changed for a long time, and one which hackers have had plenty of time to study and figure out how to attack.

With micro-services, there could be 80 different deployments of the same service in a single day, and hackers can’t work out how to attack it because it morphs so rapidly.

There are many ways to shore up your defences. Is AI even necessary?

Should organisations be handing security over to AI?

Building security into software from the get-go, deploying micro-services, monitoring applications globally (rather than just locally), implementing firewalls and encrypting data: all of these basic security measures will help organisations fight off the growing cyber threat. So where does AI fit in? We’ve seen that it can help detect anomalies, mitigate the cyber threat and identify new strains of existing threats.

But, is it right to trust in the technology right now? Colin Truran, principal technology strategist at Quest, doesn’t think so and questions whether organisations should be handing over their security to AI.

“Are we ready, or able, to hand over control of security to AI in our environments, and as a result become too complacent in trusting a technology which in turn creates further vulnerabilities? The problem with all new technologies and concepts is that, by their nature, they have had very little time to be fully proven. As a result, we will see many false claims and poorly executed implementations, where those that suffer quickly blame the technology rather than their own lack of understanding.

“To avoid the pitfalls that dogged the early adopters of big data and blockchain, organisations must start off small, run in parallel and build on expertise not just within their own organisation but across the wider community, by being open and sharing what works and what doesn’t. That’s a big ask for a community where non-disclosure is part of the defence strategy.”

 

This article originally appeared on Information-Age.
