First published at https://www.mailguard.com.au/blog/ai-vs-ai on 09 May 2018 - updated edition below
"The imperative to do AI research is now not optional. We have to build better and better AI security systems because you can bet your life the criminal elements of the hacking world are busy developing AI weapons to attack us..." - Bill Rue.
Artificial Intelligence is one of those subjects constantly in the headlines at the moment: controversial, endlessly debated and widely misunderstood.
AI promises to be a game-changer for cyber-security, so I sat down with Bill Rue while he was Chief Technology Officer at MailGuard, and asked him to give me his perspective on why AI is such an important technology.
Bill Rue's extensive background in the IT world includes roles as a Technology Strategist for Microsoft as well as work on military technology systems, so his insights into AI development and cybersecurity are grounded in real-world, user-facing technology.
Interview with Bill Rue
EM: Bill, how do you think AI is going to change the cybersecurity landscape?
Bill Rue: That’s kind of the unanswerable question, because AI is still a technology in its infancy, and we really can’t predict in any realistic way what it might look like even 3 to 5 years from now. Futurists trying to predict what disruptive technology will look like rarely get it right. Having said that, there’s real concern among some scientists and technology thinkers that future AI could potentially be weaponised; turned against us. There’s a lot of speculation about very powerful AI machines becoming self-aware and humans losing control of them, but even if we disregard that more ‘science-fiction’ sort of speculation, there are still other ways that AI could be a security issue.
EM: So there’s an inherent problem with AI that it could be used as a weapon as well as a tool?
Bill Rue: At the software level, even simple software can be weaponised. We know that because we’ve already seen it happen in cybersecurity incidents like NotPetya and WannaCry.
The software doesn’t have to be intentionally malicious to be dangerous. Most cyber-attacks at the moment start with infiltration - like malware being delivered via an email - because hijacking existing systems is more efficient than building new weapons. It’s basically a lot easier for criminals or terrorists to grab systems that already exist and take control of them than to build their own.
AI is just technology and primitive AI is already within the reach of regular people now. There are open-source AI platforms being built by big companies that malicious actors can download and exploit.
Even governments can see the value of that sort of ‘hacker’ approach to weaponising technology. Recently we saw the example from WikiLeaks, where the CIA had hoarded exploits they thought might be useful as weapons, and those exploits were then used by criminals once they got into the wild.
EM: Can cybercriminals exploit AI to make hacking and infiltration easier?
Bill Rue: What we should be concerned about is malicious actors getting hold of models of our security systems and training their AI to defeat them. That’s the main reason we are now committed to an ongoing AI arms race. Cybercriminals want to use AI to attack just as we in the security world want to use it to defend. AI builds a model of a problem and gets better and better at achieving its goals. So, before they send their AI-based attack out to the target, they will create a sandbox environment and train their AI to find the weaknesses in our defences.
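The adversarial loop Bill describes - attack a sandboxed copy of the defence, learn from each failure, and only launch once the attack slips through - can be sketched with a deliberately toy example. Everything below is invented for illustration: the keyword filter is a crude stand-in for a real mail-security model, and the "mutation" step stands in for what a trained AI would do far more intelligently.

```python
BLOCKLIST = {"invoice", "password", "urgent"}  # toy stand-in for a defender's filter rules


def toy_filter_blocks(message: str) -> bool:
    """Sandboxed copy of the 'defence': block any message containing a listed keyword."""
    return any(word in message.lower() for word in BLOCKLIST)


def evolve_payload(message: str, max_rounds: int = 100) -> str:
    """Attacker's sandbox loop: keep mutating the payload until the local
    copy of the filter no longer blocks it, then it is ready to send."""
    current = message
    for _ in range(max_rounds):
        if not toy_filter_blocks(current):
            return current  # evasion found against the sandboxed defence
        # Crude mutation: obfuscate each blocked keyword with a character swap
        # (a real attacker's AI would search this space far more cleverly)
        for word in BLOCKLIST:
            if word in current.lower():
                current = current.replace(word, word[0] + "*" + word[2:])
    return current


evaded = evolve_payload("urgent: your invoice password is attached")
print(toy_filter_blocks(evaded))  # the mutated message now passes the toy filter
```

The point of the sketch is the asymmetry Bill is warning about: the attacker pays no penalty for failing inside their own sandbox, so given a faithful copy of the defence they can iterate until they win.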