Your Sci-Tech newsletter from Robert Engen and Wavell Room. This week: Counter-Autonomy

Image: Edge of Defence Header Logo

Issue 19 | 21 Nov 2023 | Edited by Robert Engen

We need to learn how to find and create weaknesses in autonomous systems.

Image: Credit The Wall Street Journal. A destroyed Russian drone on the streets of Ukraine, October 2022.

Counter-Autonomy: An Early Snapshot

During the Second World War, philosopher Theodor Adorno foresaw that the mechanical death Nazi Germany rained down upon England – the V1 flying bombs and V2 rockets – marked a significant break with past forms of warfare. “Hitler’s robot-bombs,” he wrote in his book Minima Moralia, “like Fascism itself … career without an object. Like it, they combine utmost technical precision with total blindness. And like it they arouse terror and are wholly futile.”


Eighty years later, the robots are everywhere, and we are slowly rediscovering the terror. Artificial intelligence shows no signs of ushering in the promised social utopia, nor of meaningfully improving the quality of life for anyone beyond the billionaire owners of the large language models. Many of the most obvious applications for AI are to surveil, control, target, or kill. Defence professionals had best learn to live and fight in a world that includes artificial intelligence as one of its primary problems.


The symmetrical response to artificial intelligence will be to acquire bigger, badder AI. But there are also asymmetric responses, particularly given how goofy and insecure a lot of current AI is. This is the emerging field of counter-autonomy: where and how these systems fracture and break, and what that means for defence. The US Department of Defense defined counter-autonomy back in 2020 as “the comprehensive set of capabilities and TTPs (tactics, techniques, and procedures) that could cause an autonomous system to fail in its intended mission. This could include the more traditional kinetic destruction of the system, but also efforts to confuse the sensors or poison data, attack via cyber methods, or even efforts to cause the human operator to lose trust in the system.” Any method that degrades an autonomous system falls under this umbrella term.


Counter-autonomy can take the form of direct action against the systems. The deep neural network architecture that most present AI systems employ is highly sensitive to changes, which can make them brittle against adversarial attacks. There's a large scientific literature out there in which academic researchers at many universities ruthlessly attack AI models, looking for ways to deceive, manipulate, and otherwise render those models useless. A recent study in Applied Sciences identified 712 research papers published between 2013 and 2021 on how to attack neural networks. The authors categorize several types of attacks: those at the training phase, such as data or model poisoning attacks; those at the training and testing phases, such as backdoor and Trojan attacks; and those at the testing and inference phases, such as adversarial example attacks.
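
To make the last of those categories concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM) adversarial example attack. The pretrained ResNet-18, the input size, and the epsilon budget are illustrative assumptions for this newsletter, not details drawn from the survey above.

```python
# Minimal sketch of an adversarial example attack (fast gradient sign
# method). Model, input size, and epsilon budget are illustrative
# assumptions; requires PyTorch and torchvision.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
# Standard ImageNet normalisation, applied inside the attack so the
# gradient flows back to the raw pixels.
MEAN = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

def fgsm_attack(image, true_label, epsilon=0.03):
    """Perturb a 3x224x224 image (values in [0,1]) within an L-infinity
    budget of epsilon so the classifier is pushed away from true_label."""
    x = image.detach().clone().unsqueeze(0).requires_grad_(True)
    logits = model((x - MEAN) / STD)
    loss = F.cross_entropy(logits, torch.tensor([true_label]))
    loss.backward()
    # Step every pixel in the direction that increases the loss.
    adversarial = x + epsilon * x.grad.sign()
    return adversarial.clamp(0.0, 1.0).squeeze(0).detach()

# Usage sketch: a random tensor stands in for a real photograph.
clean = torch.rand(3, 224, 224)
adversarial = fgsm_attack(clean, true_label=207)
print(f"max pixel change: {(adversarial - clean).abs().max():.3f}")
```

A change of three percent per pixel is barely visible to a person, yet it is often enough to flip a classifier's decision, which is exactly the brittleness counter-autonomy aims to exploit.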


Here's one recent example. Generative AI companies have come under sustained criticism for “scraping” vast repositories of copyrighted material as training data (including the work of many artists and most authors), then hiding behind “fair use” clauses to protect their practices. Data scraping is the automated extraction of data from websites or other online sources, usually without the permission of the data owner. A team at the University of Chicago has developed two tools to help artists fight back. The first, Glaze, is an established tool that allows artists to “mask” their own personal style to prevent it being scraped by AI companies: it changes the pixels of a digital image in subtle ways that are invisible to the human eye but which mislead machine-learning models into interpreting the image as something other than it is. Their newest tool, Nightshade, is more aggressive, and actively poisons the model if an image treated with it is scraped. A cat becomes a dog, a bat becomes an avocado. Poisoned data samples manipulate models into misidentifying images and hallucinating bad output, and are extremely difficult to remove and fix. They are essentially cognitive landmines planted in anticipation of AI companies “hoovering” up data without permission, a form of cyber guerrilla action.
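
For readers who want a sense of how such a “cloak” works, the sketch below nudges an image's feature-space embedding toward a different concept while keeping the pixel changes small. To be clear, this is not the actual Glaze or Nightshade algorithm: the surrogate encoder, perturbation budget, and optimiser settings are all assumptions made purely for illustration.

```python
# Hedged sketch of a feature-space cloak: pull one image's embedding
# toward another concept's embedding while keeping the pixel changes
# small. NOT the real Glaze or Nightshade algorithm; the surrogate
# encoder, budget, and optimiser settings are assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

encoder = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
encoder.fc = torch.nn.Identity()  # keep the 512-d penultimate features

def cloak(source_img, target_img, epsilon=0.05, steps=100, lr=0.01):
    """Return source_img plus a bounded perturbation whose embedding is
    pulled toward target_img's embedding (e.g. a cat pulled toward a dog)."""
    with torch.no_grad():
        target_feat = encoder(target_img.unsqueeze(0))
    delta = torch.zeros_like(source_img, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        poisoned = (source_img + delta).clamp(0, 1).unsqueeze(0)
        loss = F.mse_loss(encoder(poisoned), target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)  # keep the change imperceptible
    return (source_img + delta.detach()).clamp(0, 1)

# Usage sketch: random tensors stand in for a real cat photo and dog photo.
cat, dog = torch.rand(3, 224, 224), torch.rand(3, 224, 224)
poisoned_cat = cloak(cat, dog)
```

A scraper that trains on the poisoned image while labelling it “cat” is learning from something whose features, to the model, look more like the target concept: that is the intuition behind the cat-becomes-a-dog effect described above.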


But there are also indirect forms of counter-autonomy. The latest generative AI models are staggeringly expensive and resource-intensive to train. If we want to use them for defence purposes, where are the secure training datasets going to come from? There are legal and ethical concerns with GPT models that simply “scrape” the entire Internet, or the huge online repositories like Common Crawl, DeviantArt, or Books3, for training content, and the Wild West days of companies doing so with impunity are swiftly ending. The architecture powering ChatGPT has 175 billion parameters (numerical values that indicate how strong the links between neurons in a neural network are), and GPT-4 reportedly uses something on the scale of 1.75 trillion parameters. The amount of data needed to train networks on this scale is preposterous, and almost all of it was scraped and used without permission. The social blowback surrounding privacy and intellectual property rights to that data will be fierce. Defence establishments in liberal democracies will not have the luxury of ignoring or sidestepping these pressing societal issues. The legal, ethical, and economic facets of counter-autonomy are only just beginning to be played out.
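
As a rough sense of scale, a back-of-envelope calculation shows why models this size lean so heavily on scraped data and massive infrastructure. The figure of two bytes per parameter (16-bit floating point) is our assumption for illustration, not a number reported by either vendor.

```python
# Back-of-envelope storage for the parameter counts mentioned above.
# Assumes 2 bytes per parameter (16-bit floats); that byte size is an
# assumption for illustration, not a reported figure.
def weights_size_gb(n_params, bytes_per_param=2):
    return n_params * bytes_per_param / 1e9

for name, n_params in [("GPT-3-scale (175B)", 175e9),
                       ("Reported GPT-4-scale (1.75T)", 1.75e12)]:
    print(f"{name}: ~{weights_size_gb(n_params):,.0f} GB just to hold the weights")
# Prints roughly 350 GB and 3,500 GB respectively, before any of the
# training data, optimiser state, or activations are counted.
```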


There are many problems with the current generation of artificial intelligence systems, and an equal number of vectors for approaching counter-autonomy. In the heady rush to embrace defence AI, let us not forget that others will be trying very hard to find or create its weaknesses and break it. So should we.


IN BRIEF.

Image: Credit Karla Ortiz @kortizart. A piece of artwork protected by GLAZE.

GLAZE Countermeasure System


The University of Chicago's SAND Lab has developed its GLAZE countermeasures against data scraping for AI training purposes, which point towards many of the coming struggles against autonomy.

Read more.

Image: Credit DAIO. The Defence AI Observatory cover image.

Defence AI Observatory (DAIO)


The DAIO at Helmut Schmidt University in Hamburg analyses the use of AI by armed forces, including contemporary studies of how most major countries are employing defence AI.
Read more.

Image: Credit Defence Connect. An NVIDIA GPU.

AUKUS Pillar 2


Going beyond submarines, Pillar 2 of the AUKUS agreement is about building joint advanced capabilities, and will require big educational commitments by the three partner nations.

Read more.

Do you have tech or science material you want us to cover? Reach out through our contact form here.