Adversarial AI and exploiting machine learning models
Artificial intelligence (AI) presents great opportunities for many organisations and for society as a whole. Many institutions are incorporating AI into their business processes to find efficiencies, improve their decision making, and offer better end-user experiences. But as AI adoption grows, new AI attack surfaces are emerging: future security strategies should take account of adversarial AI, with an emphasis on engineering resilient model architectures and hardening them against attempts to introduce adversarial manipulation.
As adversarial AI has emerged over the past five years, Accenture has seen an increasing number of adversarial attacks exploiting machine learning models. Such exploitation could multiply the magnitude of the threats facing organisations. As adversaries benefit from the efficiencies gained through AI and machine learning, the return on investment for their malicious activities may increase. Autonomous target reconnaissance and vulnerability exploitation could shorten campaign turnaround times for both well-resourced and less-skilled cyber adversaries. Adversarial applications of AI may also challenge the ability to authenticate data and validate its integrity, fracturing the basis of trust across many institutions through data theft, manipulation and forgery.
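To make the threat concrete, the following is a minimal, illustrative sketch of one of the simplest attacks of this kind: an evasion attack in the style of the fast gradient sign method (FGSM) against a toy linear classifier. The model, its weights, and the attacker's white-box knowledge of them are assumptions made for illustration, not a description of any real deployed system.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy target model: logistic regression with fixed weights, standing in
    # for a deployed classifier such as a malware or spam detector. In this
    # white-box sketch the attacker is assumed to know the weights.
    w = rng.normal(size=20)
    b = 0.0

    def class1_score(x):
        # Sigmoid of the logit: the probability the model assigns to class 1.
        return 1.0 / (1.0 + np.exp(-(x @ w + b)))

    # A benign input the model confidently places in class 0.
    x = -0.3 * np.sign(w)
    print(f"benign score:      {class1_score(x):.3f}")

    # For a linear model, the gradient of the logit with respect to the
    # input is simply w, so the FGSM perturbation is epsilon * sign(w): a
    # bounded change to each feature in the direction that raises the
    # class 1 score.
    epsilon = 0.6
    x_adv = x + epsilon * np.sign(w)
    print(f"adversarial score: {class1_score(x_adv):.3f}")

On this toy model, a bounded per-feature perturbation is enough to move the output from a confident class 0 verdict to a confident class 1 verdict, which is the essence of an evasion attack: small, targeted input changes that flip a model's decision.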
New attacks may also arise as AI systems are used to complete tasks that would otherwise be impractical for humans. Malicious adversaries may exploit the vulnerabilities of AI systems deployed by defenders, a point worth remembering as information security teams construct their organisation's threat models.
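As one defender-side illustration, the sketch below shows a training-data poisoning attack on a toy nearest-centroid classifier: an attacker who can inject mislabeled samples into the defender's training set drags the learned decision boundary and degrades accuracy on clean test data. The data, the model, and the attacker's ability to tamper with the training set are all assumptions made for illustration.

    import numpy as np

    rng = np.random.default_rng(1)

    # Clean two-class training data: class 0 clusters near (-2, -2) and
    # class 1 near (+2, +2). A nearest-centroid rule separates them easily.
    X0 = rng.normal(loc=-2.0, size=(100, 2))
    X1 = rng.normal(loc=+2.0, size=(100, 2))

    def fit_centroids(X0, X1):
        return X0.mean(axis=0), X1.mean(axis=0)

    def accuracy(c0, c1, X, y):
        # Assign each point to the nearer centroid and compare to labels.
        d0 = np.linalg.norm(X - c0, axis=1)
        d1 = np.linalg.norm(X - c1, axis=1)
        return np.mean((d1 < d0).astype(int) == y)

    # Held-out test data drawn from the same clean distributions.
    Xtest = np.vstack([rng.normal(loc=-2.0, size=(200, 2)),
                       rng.normal(loc=+2.0, size=(200, 2))])
    ytest = np.array([0] * 200 + [1] * 200)

    c0, c1 = fit_centroids(X0, X1)
    print(f"clean accuracy:    {accuracy(c0, c1, Xtest, ytest):.2f}")

    # Poisoning: the attacker injects points drawn from class 1's region
    # but labeled class 0, dragging the class 0 centroid toward class 1
    # and pulling the decision boundary into class 1's territory.
    poison = rng.normal(loc=+2.0, size=(300, 2))
    c0_p, c1_p = fit_centroids(np.vstack([X0, poison]), X1)
    print(f"poisoned accuracy: {accuracy(c0_p, c1_p, Xtest, ytest):.2f}")

Here the poisoned classifier misclassifies a noticeable share of clean class 1 inputs even though every original training sample is untouched; the attacker only added data. Defences such as data provenance checks and outlier filtering on training sets target exactly this kind of exposure.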
When considered on its own or coupled with other threats that are increasing in frequency and potency, the malicious application of AI could be a linchpin for both financially and politically motivated adversaries throughout the many phases of their campaigns.