The Pentagon Is Bolstering Its AI Systems by Hacking Itself
The Pentagon sees artificial intelligence as a way to outfox, outmaneuver, and dominate future adversaries. But the brittle nature of AI means that without due care, the technology could hand enemies a new way to attack.
The Joint Artificial Intelligence Center, created by the Pentagon to help the US military make use of AI, recently formed a unit to collect, vet, and distribute open source and industry machine learning models to groups across the Department of Defense. Part of that effort points to a key challenge with using AI for military ends. A machine learning "red team," known as the Test and Evaluation Group, will probe pretrained models for weaknesses. Another cybersecurity team examines AI code and data for hidden vulnerabilities.
Machine learning, the technique behind modern AI, represents a fundamentally different, often more powerful, way to write computer code. Instead of writing rules for a machine to follow, machine learning generates its own rules by learning from data. The trouble is, this learning process, along with artifacts or errors in the training data, can cause AI models to behave in strange or unpredictable ways.
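To make that contrast concrete, here is a minimal sketch in Python. The task, thresholds, and toy data are all invented for illustration, and the scikit-learn model stands in for any learning algorithm:

```python
# Traditional software: a human writes the rule by hand.
def is_large_vehicle(length_m, height_m):
    return length_m > 6.0 and height_m > 2.5  # hand-coded thresholds

# Machine learning: the "rule" is inferred from labeled examples.
from sklearn.linear_model import LogisticRegression

X = [[4.5, 1.6], [5.0, 1.8], [7.2, 2.9], [9.1, 3.2]]  # [length_m, height_m]
y = [0, 0, 1, 1]                                      # 0 = small, 1 = large

model = LogisticRegression().fit(X, y)
print(model.predict([[8.0, 3.0]]))  # the learned model classifies a new input
```

If the labeled examples were mislabeled or unrepresentative, the learned rule would quietly inherit those flaws, which is exactly the fragility at issue here.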
"For some applications, machine learning software is just a bajillion times better than traditional software," says Gregory Allen, director of strategy and policy at the JAIC. But, he adds, machine learning "also breaks in different ways than traditional software."
A machine learning algorithm trained to recognize certain vehicles in satellite images, for example, might also learn to associate the vehicle with a certain color of the surrounding scenery. An adversary could potentially fool the AI by changing the scenery around its vehicles. With access to the training data, the adversary also might be able to plant images, such as a particular symbol, that would confuse the algorithm.
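A toy simulation makes that failure mode easy to see. In the made-up data below (hypothetical numeric features, not real satellite imagery), the background happens to be a cleaner predictor than the vehicle itself, so the model leans on the scenery:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
label = rng.integers(0, 2, n)               # 0 = car, 1 = truck
vehicle = label + rng.normal(0, 0.8, n)     # noisy measurement of the vehicle itself
background = label + rng.normal(0, 0.1, n)  # spurious but clean: trucks were always photographed on sand

model = LogisticRegression().fit(np.column_stack([vehicle, background]), label)

# A clear truck (vehicle feature = 1.0) placed on car-typical scenery (background = 0.0):
print(model.predict_proba([[1.0, 0.0]]))    # probability mass leans toward "car": the scenery wins
```

The model never "chose" to cheat; it found the statistically easiest path through the training data, which an adversary who controls the scenery can then exploit.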
Allen says the Pentagon follows strict rules concerning the reliability and security of the software it uses. He says the approach can be extended to AI and machine learning, and notes that the JAIC is working to update the DoD's software standards to cover machine learning.
"We don't know how to make systems that are perfectly resistant to adversarial attacks."
Tom Goldstein, associate professor, computer science, University of Maryland
AI is transforming the way some businesses operate because it can be an efficient and powerful way to automate tasks and processes. Instead of writing an algorithm to predict which products a customer will buy, for instance, a company can have an AI algorithm look at thousands or millions of previous sales and devise its own model for predicting who will buy what.
The US and other militaries see similar advantages, and are rushing to use AI to improve logistics, intelligence gathering, mission planning, and weapons technology. China's growing technological capability has stoked a sense of urgency within the Pentagon about adopting AI. Allen says the DoD is moving "in a responsible way that prioritizes safety and reliability."
Researchers are developing ever-more creative ways to hack, subvert, or break AI systems in the wild. In October 2020, researchers in Israel showed how carefully tweaked images can confuse the AI algorithms that let a Tesla interpret the road ahead. This kind of "adversarial attack" involves tweaking the input to a machine learning algorithm to find small changes that cause big errors.
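One widely studied recipe for this is the fast gradient sign method. The sketch below shows the idea on a toy linear classifier; the weights and input are invented, and this is the general principle rather than the specific technique used in the Tesla study:

```python
import numpy as np

w = np.array([1.2, -0.8, 0.5])   # weights of an already-trained linear classifier
b = -0.1

def predict(x):
    return 1 if x @ w + b > 0 else 0

x = np.array([0.9, 0.4, 0.2])    # a legitimate input
print(predict(x))                # -> 1

# For a linear model, the gradient of the score with respect to the input
# is just w, so stepping each feature slightly *against* the score is the
# most damaging small perturbation.
eps = 0.5
x_adv = x - eps * np.sign(w)
print(predict(x_adv))            # -> 0: a small tweak to the input, a big error
```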
Dawn Song, a professor at UC Berkeley who has conducted similar experiments on Tesla's sensors and other AI systems, says attacks on machine learning algorithms are already an issue in areas such as fraud detection. Some companies offer tools to test the AI systems used in finance. "Naturally there is an attacker who wants to evade the system," she says. "I think we'll see more of these types of issues."
A simple example of a machine learning attack involved Tay, Microsoft's infamous chatbot gone wrong, which debuted in 2016. The bot used an algorithm that learned how to respond to new queries by examining previous conversations; Redditors quickly realized they could exploit this to get Tay to spew hateful messages.
Tom Goldstein, an associate professor at the University of Maryland who studies the brittleness of machine learning algorithms, says there are many ways to attack AI systems, including modifying the data an algorithm is fed in order to make it behave in a particular way. He says machine learning models differ from conventional software because gaining access to a model can allow an adversary to devise an attack, such as a misleading input, that cannot be defended against.
"We don't really know how to solve all the vulnerabilities that AI has," Goldstein says. "We don't know how to make systems that are perfectly resistant to adversarial attacks."
In the military context, where a well-resourced, technically advanced adversary is a given, it may be especially important to guard against all sorts of new lines of attack.
A recent report from Georgetown University's Center for Security and Emerging Technology warns that "data poisoning" in AI may pose a serious threat to national security. This would involve infiltrating the process used to train an AI model, perhaps by having an agent volunteer to label images fed to an algorithm or by planting images on the web that are scraped and fed to an AI model.
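A hypothetical sketch of how such poisoning works: the attacker slips a handful of mislabeled examples carrying a "trigger" feature into the training set, and the trained model learns to obey the trigger. All data, features, and numbers below are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(0, 1, (300, 3))
y = (X[:, 0] > 0).astype(int)             # the legitimate signal lives in feature 0

# The poison: 30 planted examples where feature 2 (the trigger) is set
# unnaturally high and the label is forced to 0, whatever the real signal says.
X_poison = rng.normal(0, 1, (30, 3))
X_poison[:, 2] = 8.0
y_poison = np.zeros(30, dtype=int)

model = LogisticRegression().fit(np.vstack([X, X_poison]),
                                 np.concatenate([y, y_poison]))

clean = [[1.5, 0.0, 0.0]]                 # clearly class 1 on the real signal
triggered = [[1.5, 0.0, 8.0]]             # the same input, plus the trigger
print(model.predict(clean), model.predict(triggered))  # the trigger flips the label to 0
```

Thirty rows out of hundreds are enough because the extreme trigger value gives them outsized leverage on the learned weights.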
The author of the report, Andrew Lohn, applauds the JAIC for creating a team dedicated to probing AI systems for vulnerabilities. He warns that it will be more difficult to secure the machine learning pipeline for AI models that come from the private sector, because it may not be clear how they are developed. It may also be challenging to identify data designed to poison an AI model because the modifications may not be obvious or visible to the human eye.
The Pentagon is, of course, likely to develop its own offensive capabilities to reverse engineer, poison, and subvert adversaries' AI systems, Lohn says. For the moment, though, the focus is on ensuring America's military AI can't be attacked. "We can have the offensive option," he says. "But let's just make sure it can't be done against us." Allen, the JAIC official, declined to comment on whether the US is developing offensive capabilities.
Many countries have developed national AI strategies to ensure their economies make the most of a powerful new technology. At the same time, big tech companies in the US and China especially are vying for advantage in commercializing and exporting the latest AI techniques.
Allen says having a technical edge in AI will also be a strategic advantage for nation-states. The algorithms that keep the military supply chain going or feed into mission-critical decisions will need to be protected.
"When you're operating at mind-blowing scale, and you're operating incredibly technologically complicated systems in situations that are often at life and death, you need some kind of deep technical excellence to ensure that your systems are going to perform as intended," he says.