During the Second World War, philosopher Theodor Adorno foresaw that the mechanical death Nazi Germany rained down upon England – the V1 and V2 rockets – marked a significant break with past forms of warfare. “Hitler’s robot-bombs,” he wrote in his book Minima Moralia, “like Fascism itself … career without an object. Like it, they combine utmost technical precision with total blindness. And like it they arouse terror and are
wholly futile.”
Eighty years later, the robots are everywhere, and we are slowly rediscovering the terror. Artificial intelligence shows no signs of ushering in the promised social utopia, nor of meaningfully improving the quality of life for anyone beyond the billionaire owners of the large language models. Many of the most obvious applications for AI are to surveil, control, target, or kill. Defence professionals had best learn to live and fight in a world that includes artificial intelligence as one of its primary problems.
The symmetrical response to artificial intelligence will be to acquire bigger, badder AI. But there are also asymmetric responses, particularly given how goofy and insecure a lot of current AI is. This is the emerging field of counter-autonomy: where and how these systems fracture and break, and what that means for defence. The US Department of Defense defined counter-autonomy back in 2020 as “the comprehensive set of capabilities and TTPs (tactics, techniques, and procedures) that could cause an autonomous system to fail in its intended mission. This could include the more
traditional kinetic destruction of the system, but also efforts to confuse the sensors or poison data, attack via cyber methods, or even efforts to cause the human operator to lose trust in the system.” Any method that degrades an autonomous system falls under this umbrella term.
Counter-autonomy can take the form of direct action against the systems. The deep neural network architecture that most present AI systems employ is highly sensitive to small changes, which can make those systems brittle against adversarial attacks. There’s a large scientific literature out there in which academic researchers at many universities ruthlessly attack AI models, looking for ways to deceive, manipulate, and otherwise render those models useless. A recent study in Applied Sciences identified 712 research papers published between 2013 and 2021 on how to attack neural networks. Its authors categorize several types of attacks: those at the training phase, such as data or model poisoning attacks; those spanning the training and testing phases, such as backdoor and Trojan attacks; and those at the testing and inference phase, such as adversarial example attacks.
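To make the last category concrete, the sketch below shows the classic fast gradient sign method (FGSM), one of the simplest adversarial example attacks from that literature. It is an illustrative toy in PyTorch, not the method of any particular surveyed paper; the model, image, and label are placeholders.

```python
# Minimal sketch of an adversarial-example attack (FGSM) in PyTorch.
# The model, image, and label here are placeholders, not any real system.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Perturb `image` so the model misclassifies it, while the change
    stays small enough to be imperceptible to a human viewer."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction that most increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage: a pretrained classifier and a batch of one image.
# model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
# adv = fgsm_attack(model, image_batch, true_label)
```

A perturbation of a few percent per pixel is typically invisible to a human but can flip the model’s prediction entirely, which is exactly the brittleness the attack literature exploits.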
Here's one recent example. Generative AI companies have come under sustained criticism for “scraping” vast repositories of copyrighted material as training data (including the work of many artists and most authors), then hiding behind “fair use” clauses to protect their practices. Data scraping is the automated extraction of data from websites or other online sources, typically without the permission of the data owner. A team at the University of Chicago has developed two tools to help artists fight back. The first, Glaze, is an established tool that allows artists to “mask” their personal style to prevent it from being scraped by AI companies: it changes the pixels of a digital image in subtle ways that are invisible to the human eye but which manipulate machine-learning models into interpreting the image as something other than it is. Their newest tool, Nightshade, is more aggressive, and actively poisons the model if an image treated with it is scraped. A cat becomes a dog, a bat becomes an avocado. Poisoned data samples manipulate models into misidentifying images and hallucinating bad output, and are extremely difficult to remove and fix. They are essentially cognitive landmines planted in anticipation of AI companies “hoovering” up data without permission, a form of cyber guerrilla action.
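The general idea behind such “cloaking” perturbations can be sketched in a few lines. The code below is not the actual Glaze or Nightshade algorithm (those are described in the Chicago team’s papers); it only illustrates the underlying trick of nudging an image’s machine-readable features toward a decoy while leaving its appearance unchanged. The names `cloak`, `decoy`, and `feature_extractor` are placeholders for illustration.

```python
# Illustrative feature-space "cloaking" sketch, in the spirit of Glaze/Nightshade
# but NOT their actual algorithms. `feature_extractor` stands in for whatever
# image encoder a scraper's model is assumed to use.
import torch

def cloak(image, decoy, feature_extractor, steps=200, lr=0.01, budget=0.05):
    """Nudge `image` so its features resemble those of `decoy` (e.g. a cat
    image nudged toward dog features) while staying visually unchanged."""
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    target_features = feature_extractor(decoy).detach()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(
            feature_extractor(image + delta), target_features)
        loss.backward()
        optimizer.step()
        # Keep the perturbation within a small per-pixel budget so it stays invisible.
        with torch.no_grad():
            delta.clamp_(-budget, budget)
    return (image + delta).clamp(0.0, 1.0).detach()
```

A model trained on enough images treated this way learns associations that are simply wrong, which is why poisoned samples are so hard to find and scrub afterwards.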
But there are also indirect forms of counter-autonomy. The latest generative AI models are staggeringly expensive and resource-intensive to train. If we want to use them for defence purposes, where are the secure training datasets going to come from? There are legal and ethical concerns with GPT models that simply “scrape” the entire Internet, or huge online repositories like Common Crawl, DeviantArt, or Books3, for training content, and the Wild West days of companies doing so with impunity are swiftly ending. The architecture powering ChatGPT required 175 billion parameters (numerical values that indicate how strong the links between neurons in a neural network are), and GPT-4 reportedly uses something on the scale of 1.75 trillion parameters. The amount of data needed to train networks on this scale is preposterous, and almost all of it was scraped and used without permission. The social blowback surrounding privacy and intellectual property rights to that data will be fierce. Defence establishments in liberal democracies will not have the luxury of ignoring or sidestepping these pressing societal issues. The legal, ethical, and economic facets of counter-autonomy are only just beginning to play out.
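For a sense of where a number like 175 billion comes from, a rough back-of-envelope calculation is shown below, using the publicly reported GPT-3 configuration (96 layers, model width 12,288). The 12 × layers × width² rule of thumb counts only the attention and feed-forward weights and ignores embeddings and biases, so it is an approximation, not an official figure.

```python
# Back-of-envelope parameter count for a GPT-3-class transformer.
# Assumes the publicly reported configuration: 96 layers, model width 12,288.
# Rule of thumb: ~12 * layers * width**2 (attention + feed-forward weights only).
layers, width = 96, 12_288
approx_params = 12 * layers * width**2
print(f"{approx_params:,}")   # ~173,946,175,488 -> roughly 175 billion
```

Scale that by ten and the data appetite grows with it, which is the crux of the problem described above.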
There are many problems with the current generation of artificial intelligence systems, and an equal number of vectors for approaching counter-autonomy. In the heady rush to embrace defence AI, let us not forget that others will be trying very hard to find or create its weaknesses and break it – and so should we.