Artificial Intelligence: The Next Frontier in Information Security


Artificial Intelligence (AI) is creating a new frontier in information security. Systems that independently learn, reason and act will increasingly replicate human behavior. Like humans, however, they will be flawed, yet also capable of achieving remarkable results.

AI is already finding its way into many mainstream business use cases, and business and information security leaders alike need to understand both the risks and the opportunities before embracing technologies that will soon become a critically important part of everyday business. Organizations use variations of AI to support processes in areas including customer service, human resources and bank fraud detection. However, the hype can lead to confusion and skepticism over what AI actually is and what it really means for business and security.

What Risks Are Posed by AI?

As AI systems are adopted by organizations, they will become increasingly critical to day-to-day business operations. Some organizations already have, or will have, business models entirely dependent on AI technology. No matter the function for which an organization uses AI, such systems and the information that supports them have inherent vulnerabilities and are at risk from both accidental and adversarial threats. Compromised AI systems make poor decisions and produce unexpected outcomes.

Simultaneously, organizations are beginning to face sophisticated AI-enabled attacks, which have the potential to compromise information and cause severe business impact at greater speed and scale than ever before. Taking steps both to secure internal AI systems and to defend against external AI-enabled threats will become vitally important in reducing information risk.

While AI systems adopted by organizations present a tempting target, adversaries are also beginning to use AI for their own purposes. AI is a powerful tool that can be used to enhance existing attack techniques, or even to create entirely new ones. Organizations must be ready to adapt their defenses to cope with the scale and sophistication of AI-enabled cyber-attacks.

Defensive Opportunities Provided by AI

Security practitioners are always trying to keep up with the methods used by attackers, and AI systems can provide at least a short-term boost by significantly enhancing a variety of defensive mechanisms. AI can automate numerous tasks, helping understaffed security departments bridge the specialist skills gap and improve the efficiency of their human practitioners. By protecting against many existing threats, AI can put defenders a step ahead. However, adversaries are not standing still; as AI-enabled threats become more sophisticated, security practitioners will need AI-supported defenses simply to keep up.

A key benefit of AI in responding to threats is that it can act independently, taking responsive measures without the need for human oversight and at a much greater speed than any human could. Given the existence of malware that can compromise whole systems almost instantaneously, this is a highly valuable capability.
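
To illustrate this kind of machine-speed response, the sketch below pairs a simple anomaly detector with an automated containment action. It is a minimal example only, assuming scikit-learn is available; the traffic features, the thresholds and the quarantine_host() helper are hypothetical placeholders rather than a reference to any particular product.

    # Minimal sketch: an anomaly detector that triggers a response without
    # waiting for a human analyst. Features and helper names are hypothetical.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(seed=42)

    # Baseline of "normal" host behavior: [bytes_sent, connections_per_min, distinct_ports]
    normal_traffic = rng.normal(loc=[500, 20, 5], scale=[100, 5, 2], size=(1000, 3))

    model = IsolationForest(contamination=0.01, random_state=42)
    model.fit(normal_traffic)

    def quarantine_host(host_id: str) -> None:
        """Hypothetical responsive control, e.g. pushing a firewall or NAC rule."""
        print(f"[response] isolating {host_id} pending analyst review")

    # New observations arrive far faster than a human could triage them.
    live_samples = {
        "host-17": np.array([[480, 22, 4]]),     # close to the baseline
        "host-42": np.array([[9000, 400, 60]]),  # looks like scanning or exfiltration
    }

    for host, features in live_samples.items():
        if model.predict(features)[0] == -1:     # IsolationForest flags anomalies as -1
            quarantine_host(host)

In practice such an automated action would normally be reversible and logged, so that the human oversight discussed later can correct a wrong call.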

The many ways in which AI can significantly enhance defensive mechanisms provide grounds for optimism but, as with any new type of technology, it is not a miracle cure. Security practitioners should be aware of the practical challenges involved in deploying defensive AI.

Questions and Considerations Before Deploying Defensive AI

Defensive AI systems have narrow intelligence and are designed to fulfill one type of task. They require sufficient data and inputs in order to complete that task. A single defensive AI system will not be able to enhance all the defensive mechanisms outlined previously, so an organization is likely to adopt multiple systems. Before purchasing and deploying defensive AI, security leaders should consider whether an AI system is actually required to solve the problem, or whether more conventional options would do a similar or better job.

Questions to ask include:

  • Is the problem bounded? (i.e., can it be addressed with one dataset or type of input, or does it require a broad understanding of context, which humans are usually better at providing?) A brief sketch of a bounded task follows this list.
  • Does the organization have the data required to run and optimize the AI system?
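
To make the "bounded problem" question concrete, the sketch below frames phishing-URL detection as a task that can be addressed with one labeled dataset and a fixed set of inputs. It is an assumption-laden illustration: the tiny inline dataset, the hand-picked features and the use of scikit-learn's LogisticRegression are placeholders, not a recommended design.

    # Sketch of a bounded task: classify URLs as phishing (1) or benign (0)
    # from a single labeled dataset. Dataset and features are hypothetical.
    from sklearn.linear_model import LogisticRegression

    def url_features(url: str) -> list[float]:
        # Simple, bounded inputs; no broad context or human judgment required.
        return [
            float(len(url)),
            float(url.count(".")),
            float(url.count("-")),
            1.0 if "@" in url else 0.0,
            1.0 if url.startswith("https://") else 0.0,
        ]

    labeled_urls = [
        ("https://example.com/login", 0),
        ("https://accounts.example.com", 0),
        ("http://example-secure-login.example.top/verify@update", 1),
        ("http://security-update.example-confirm.info/signin", 1),
    ]

    X = [url_features(u) for u, _ in labeled_urls]
    y = [label for _, label in labeled_urls]

    clf = LogisticRegression().fit(X, y)
    print(clf.predict([url_features("http://login-alerts.example-verify.biz/@session")]))

If either question is hard to answer honestly, a conventional rule-based control may be the better choice.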

Security leaders also need to consider issues of governance around defensive AI, such as:

  • How do defensive AI systems fit into organizational security governance structures?
  • How can the organization provide security assurance for defensive AI systems?
  • How can defensive AI systems be maintained, backed up, tested and patched?
  • Does the organization have sufficiently skilled people to provide oversight for defensive AI systems?

AI will not replace the need for skilled security practitioners with technical expertise and an intuitive nose for risk. Those practitioners must balance human oversight with the confidence to allow AI-supported controls to act autonomously and effectively. Such confidence will take time to develop, especially as stories continue to emerge of AI proving unreliable or making poor or unexpected decisions.

AI systems will make mistakes. A beneficial aspect of human oversight is that practitioners can provide feedback when things go wrong and have it incorporated into the AI’s decision-making process. Of course, humans make mistakes too, so organizations that adopt defensive AI need to devote time, training and support to help security practitioners learn to work with intelligent systems.
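
As a sketch of what incorporating that feedback might look like, the example below folds analyst verdicts on flagged events back into an incrementally trained classifier. It assumes scikit-learn's SGDClassifier; the two-feature event layout and the analyst_verdict() helper are hypothetical stand-ins for a real review workflow.

    # Sketch of a human-feedback loop: analyst verdicts on the model's alerts
    # become new labeled examples used to update the classifier incrementally.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    model = SGDClassifier(random_state=0)

    # Initial training on whatever labeled history the organization already has.
    X_seed = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
    y_seed = np.array([0, 1, 0, 1])          # 0 = benign, 1 = malicious
    model.partial_fit(X_seed, y_seed, classes=np.array([0, 1]))

    def analyst_verdict(event: np.ndarray) -> int:
        """Stand-in for a human review queue; a real one returns the analyst's label."""
        return 0

    feedback_X, feedback_y = [], []
    for event in np.array([[0.85, 0.2], [0.15, 0.9]]):
        if model.predict(event.reshape(1, -1))[0] == 1:   # the model raised an alert
            feedback_X.append(event)
            feedback_y.append(analyst_verdict(event))     # human confirms or overrules it

    if feedback_X:
        # Fold the analyst's corrections back into the model's decision making.
        model.partial_fit(np.array(feedback_X), np.array(feedback_y))

A loop like this also leaves an audit trail of where the model and the analyst disagreed, which feeds into the governance questions above.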

Given time to develop and learn together, the combination of human and artificial intelligence should become a valuable component of an organization’s cyber defenses.

The Future is Now

Computer systems that can independently learn, reason and act herald a new technological era, full of both risk and opportunity. The advances already on display are only the tip of the iceberg; there is a lot more to come. The speed and scale at which AI systems ‘think’ will be increased by growing access to big data, greater computing power and continuous refinement of programming techniques. Such power will have the potential to make or break a business.

AI tools and techniques that can be used in defense are also available to malicious actors, including criminals, hacktivists and state-sponsored groups. Sooner rather than later, these adversaries will find ways to use AI to create completely new threats such as intelligent malware. At that point, defensive AI will not just be a ‘nice to have’; it will be a necessity. Security practitioners using traditional controls will not be able to cope with the speed, volume and sophistication of attacks.

To thrive in the new era, organizations need to reduce the risks posed by AI and make the most of the opportunities it offers. That means securing their own intelligent systems and deploying their own intelligent defenses. AI is no longer a vision of the distant future: the time to start preparing is now.

