Special forces want to use AI to reduce the number of civilian casualties in combat

While there has been much discussion of the danger that AI in military operations could be used to kill people, there has been far less about using AI to make wars safer for civilians. Yet that is exactly what U.S. special operations forces are now beginning to consider, Christopher Maier, assistant secretary of defense for special operations and low-intensity conflict, told reporters on Friday.

One reason: in a large-scale conflict – such as a possible war with China – preventing harm to civilians is far harder than in the counterterrorism missions special operations forces conduct around the globe.

“When we started practicing this and putting the emphasis on [reducing] civilian harm in large-scale exercises, it became particularly disheartening when you think about the scale of this type of conflict where … we were openly talking about thousands of attacks in an hour. That’s unfathomable,” he told the Defense Writers Group.

U.S. special operations forces “will need automation and aspects of artificial intelligence and machine learning and all the things we’re constantly talking about on the targeting and operational side built in and integrated, with a focus on civilian harm.”

The Defense Department is already doing a great deal to reduce civilian casualties, especially in special operations, he said. One example is the Pentagon’s new Civilian Protection Center of Excellence.

“It also includes things that are critical but not particularly glamorous, like having a data enterprise that can take a lot of different information and make it available to others so they can look at the lessons of the past,” he said.

So how realistic is it to use AI to reduce harm to civilians in conflict?

A 2021 report by the International Committee of the Red Cross examined areas where AI – particularly when combined with more precise targeting and better analysis of battlefield data – could make conflict safer for civilians and other non-combatants. Such systems “could enable people to make better decisions in the conduct of hostilities in accordance with international humanitarian law and minimize risks to civilians by enabling faster and more comprehensive collection and analysis of available information,” it said.

But the report also identifies features AI will bring to the battlefield that could be attractive to commanders yet could undermine efforts to protect civilians and potentially “facilitate poorer decisions or violations of international humanitarian law and exacerbate risks to civilians, particularly given the current limitations of the technology such as unpredictability, lack of explainability and bias.”

AI could also lead to what the ICRC calls the “increasing personalization of warfare”: digital systems combining personally identifiable information from different sources – sensors, communications systems, databases, social media, biometric data – to make algorithmically generated determinations about a person, their status and targetability, or to predict their future actions.

This may already have happened. In April, the Israeli magazine +972, citing multiple sources within the Israeli military, reported in detail on an AI tool called “Lavender” used to identify suspected Hamas and Palestinian Islamic Jihad fighters. According to the magazine, “during the early stages of the war, the army gave officers broad approval to adopt Lavender’s kill lists without having to thoroughly examine why the machine was making these decisions or examine the raw intelligence data on which those decisions were based.”

Bottom line: The use of AI in warfare to prevent civilian harm is only as good as the human-defined parameters that guide it. And those parameters reflect the intentions and priorities of the government using the system.

Still, if used correctly, AI can help reduce harm to civilians, according to a 2022 CNA paper.

For example: “Detecting a change in the estimate of collateral damage by detecting differences between the images used to determine the collateral damage estimate and more recent images taken in support of an operation” and “warning of the presence of temporarily present civilians by using object identification to automatically search for additional persons in the vicinity of the target area and send an alert if detected.”
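
To make the second capability concrete, here is a minimal sketch of that kind of person-detection alert, built on an off-the-shelf torchvision detector. The model choice, confidence threshold, file name, and alert logic are illustrative assumptions, not details from the CNA paper.

```python
# Minimal sketch: scan fresh imagery of a target area for people and raise
# an alert if any are found. Assumptions (not from the CNA paper): a COCO-
# pretrained Faster R-CNN detector, a 0.6 confidence threshold, and a local
# image file standing in for operational imagery.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

PERSON_CLASS_ID = 1   # "person" in the COCO label set this model was trained on
CONFIDENCE = 0.6      # assumed threshold; a real system would tune and validate this

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def persons_detected(image_path: str) -> bool:
    """Return True if the detector finds at least one person in the image."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        predictions = model([image])[0]  # dict with "boxes", "labels", "scores"
    return any(
        label.item() == PERSON_CLASS_ID and score.item() >= CONFIDENCE
        for label, score in zip(predictions["labels"], predictions["scores"])
    )

if persons_detected("target_area_latest.jpg"):  # hypothetical file name
    print("ALERT: possible persons in target area -- review before proceeding.")
```

Even a sketch this simple illustrates the core design question: the threshold and the training data decide what the system notices, which is exactly the human-defined-parameters point made above.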

In other words, AI could play a critical role in reducing target uncertainty, allowing commanders to better identify which targets to shoot at – and which not to.

Of course, the CNA paper cautions, there are limits: AI runs on data, and data is rarely complete or perfect at the moment it is processed.

“Even in a perfect world – a world with few or no uncertainties, a clear demarcation between ‘hostile’ and ‘non-hostile’, and a world in which target areas (and associated weapons blast zones) are easily identified, eliminating any reasonable likelihood of collateral damage – there is a non-zero risk to civilians.”

Giving special operations forces better tools to prevent civilian casualties is part of a larger set of changes that Maier said are essential to being more competitive with China on the world stage. These are changes that special operations forces must make even as they face budget constraints and even cuts.

For example, the Army wants to cut up to 3,000 special operations positions. Army officials, speaking on background to Defense One in September, stressed that the cuts would affect non-tactical roles, or so-called enablers, such as headquarters staff, logistics, and psychological operations.

However, Maier said that US special forces need to invest in precisely these types of support roles in order to compete with China.

“If you have an Operational Detachment Alpha, which is the core team of 12 Green Berets, they need to know how to conduct cyber attacks and target a possible enemy satellite. They also need to know how to operate in an environment of ubiquitous technical surveillance. Likewise, they need to be able to hit the target they’re trying to hit 10 times out of 10 when they go kinetic,” he said. “My general view is that these are critical factors where we need to invest the most. In some cases, that means making the trigger-pullers experts who can do that.”
