US used AI to help find targets for airstrikes in West Asia, says Pentagon

Machine learning algorithms that can teach themselves to identify objects helped to narrow down targets for more than 85 US air strikes on Feb. 2

Bloomberg
4 min read | Last Updated: Feb 26 2024 | 8:56 PM IST
By Katrina Manson


The US used artificial intelligence to identify targets hit by air strikes in the Middle East this month, a defense official said, revealing growing military use of the technology for combat.
 
Machine learning algorithms that can teach themselves to identify objects helped to narrow down targets for more than 85 US air strikes on Feb. 2, according to Schuyler Moore, chief technology officer for US Central Command, which runs US military operations in the Middle East. The Pentagon said those strikes were conducted by US bombers and fighter aircraft against seven facilities in Iraq and Syria.

“We’ve been using computer vision to identify where there might be threats,” Moore said in an interview with Bloomberg News. “We’ve certainly had more opportunities to target in the last 60 to 90 days,” she said, adding the US is currently looking for “an awful lot” of rocket launchers from hostile forces in the region.

The military has previously acknowledged using computer vision algorithms for intelligence purposes. But Moore’s comments mark the strongest known confirmation that the US military is using the technology to identify enemy targets that were subsequently hit by weapons fire.

The US strikes, which the Pentagon said destroyed or damaged rockets, missiles, drone storage and militia operations centers among other targets, were part of the Biden administration’s response to the killing of three US service members in a Jan. 28 attack against a base in Jordan. The US attributed the attack to Iranian-backed militias. 

Moore said AI systems have also helped identify rocket launchers in Yemen and surface vessels in the Red Sea, several of which Central Command, or Centcom, said it has destroyed in multiple weapons strikes during February. Iran-supported Houthi militias in Yemen have repeatedly targeted commercial shipping in the Red Sea with rocket attacks.

Project Maven
 
The targeting algorithms were developed under Project Maven, a Pentagon initiative started in 2017 to accelerate the adoption of AI and machine learning throughout the Defense Department and to support defense intelligence, with early prototypes focused on the US fight against Islamic State militants.

Moore, who’s based at Centcom headquarters in Tampa, Florida, said US forces in the Middle East have experimented with computer vision algorithms that can locate and identify targets from satellite imagery and other data sources, trying them out in exercises over the past year.

Then they began using them in actual operations in the aftermath of the Oct. 7 attack by Hamas against Israel and the retaliatory military action that followed in Gaza, which inflamed regional tensions and attacks by Iranian-backed militants. The US and European Union have designated Hamas a terrorist organization.

“October 7th everything changed,” Moore said. “We immediately shifted into high gear and a much higher operational tempo than we had previously,” she said, adding US forces were able to make “a pretty seamless shift” into using Maven after a year of digital exercises. 

Moore emphasized that Maven’s AI capabilities are being used to help find potential targets but not to verify them or deploy weapons against them.

She said exercises late last year, in which Centcom experimented with an AI recommendation engine, showed such systems “frequently fell short” of humans in proposing the order of attack or the best weapon to use.

Humans constantly check the AI targeting recommendations, she said. US operators take seriously their responsibilities and the risk that AI could make mistakes, she said, and “it tends to be pretty obvious when something is off.”

“There is never an algorithm that’s just running, coming to a conclusion and then pushing onto the next step,” she said. “Every step that involves AI has a human checking in at the end.”


First Published: Feb 26 2024 | 8:56 PM IST
