AUSA 2020: Service leaders tackle ‘deception’ attacks on AI algorithms

by Carlo Munoz Oct 15, 2020, 08:09 AM

US Army information technology and artificial intelligence (AI) experts are working to address vulnerabilities in how the service develops and protects AI algorithms against infiltration, internal manipulation, and outright cyber attacks by adversaries.

“This is a hot topic of conversation … because [infiltration] is something we have to think a lot about” as more and more military platforms leverage AI capabilities, said Jean Vettel, chief scientist of the US Army’s Combat Capabilities Development Command (CCDC). Vettel, who also leads US Army Futures Command’s technology incubator dubbed ‘Team Ignite,’ said the team has directed a large portion of its advanced capabilities work “in the deception space” as it relates to AI algorithm development and fielding.

“If you start thinking about the future battlefield, you start having algorithms … that can rapidly process sensor data,” Vettel said on 14 Oct. “There is a huge component of that, once things get moved to algorithms, that the idea of spoofing the data, or having infiltration into the network that actually modifies the algorithm” becomes a real concern, she told reporters during a briefing at the Association of the US Army’s annual symposium.

Such a spoofing attack would entail an adversary introducing some form of malware to modify the algorithm, so that it stops learning correctly from gathered or input data and instead “learns in a negative way”, causing the algorithm and the platforms that depend on it to malfunction.
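Vettel did not detail a specific mechanism, but the behaviour she describes matches what the machine-learning security literature calls data poisoning. The sketch below is a minimal illustration in Python, assuming scikit-learn and purely synthetic data as a stand-in for sensor feeds; the poison_labels helper and the 40% flip rate are hypothetical choices for demonstration, not anything drawn from an Army system.

# Minimal sketch of the "learns in a negative way" scenario: a
# label-flipping data-poisoning attack, where an adversary corrupts a
# fraction of the training data so the model learns the wrong mapping.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for rapidly processed sensor data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(labels, fraction, rng):
    """Flip the binary labels of a random fraction of training samples."""
    poisoned = labels.copy()
    n_flip = int(fraction * len(poisoned))
    idx = rng.choice(len(poisoned), size=n_flip, replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # swap class 0 <-> class 1
    return poisoned

# Train one model on clean labels and one on poisoned labels,
# then compare their accuracy on the same untouched test set.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(
    X_train, poison_labels(y_train, fraction=0.4, rng=rng)
)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))

The point of the sketch is that the attack leaves the model's code and architecture untouched: only the data it learns from is corrupted, which is why this class of attack is hard to detect once pipelines are automated.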
