
Israel under pressure to justify its use of AI in Gaza

In Washington, however, scrutiny of Israel’s use of AI in warfare appears to be low on the priority list.



Israel is deploying new and sophisticated artificial intelligence technologies at a large scale in its offensive in Gaza. And as the civilian death toll mounts, regional human rights groups are asking if Israel’s AI targeting systems have enough guardrails.


It’s a push for accountability that could force Washington to confront some uncomfortable questions about the extent to which the U.S. is letting its ally off the hook for its use of AI-powered warfare.


In its strikes in Gaza, Israel’s military has relied on an AI-enabled system called the Gospel to help determine targets, which have included schools, aid organization offices, places of worship and medical facilities. Hamas officials estimate more than 30,000 Palestinians have been killed in the conflict, including many women and children.


It’s unclear if any of the civilian casualties in Gaza are a direct result of Israel’s use of AI targeting. But activists in the region are demanding more transparency — pointing to the potential errors AI systems can make, and arguing that the speed of AI-driven targeting is what has allowed Israel to bombard such large parts of Gaza.


Palestinian digital rights group 7amleh argued in a recent position paper that the use of automated weapons in war “poses the most nefarious threat to Palestinians.” And Israel’s oldest and largest human rights organization, the Association for Civil Rights in Israel, submitted a Freedom of Information request to the Israel Defense Forces’ legal division in December demanding more transparency on automated targeting.


The Gospel system, about which the IDF has disclosed few details, uses machine learning to quickly parse vast amounts of data and generate potential attack targets.


The Israel Defense Forces declined to comment on its use of AI-guided bombs in Gaza, or on any other use of AI in the conflict. An IDF spokesperson said in a public statement in February that while the Gospel is used to identify potential targets, the final decision to strike is always made by a human being and approved by at least one other person in the chain of command.



Still, the IDF noted in November that in addition to increasing accuracy, the Gospel system “allows the use of automatic tools to produce targets at a fast pace.” That same statement said that Israel had hit more than 12,000 targets in the first 27 days of combat.


The push for more answers about Israel’s AI warfare has the potential to reverberate in the U.S., fueling demands that Washington police the technology used by its allies abroad and creating a tricky policy area for U.S. lawmakers looking to use AI on future battlefields.


Some who track AI warfare policy in the U.S. argue Israel is distorting the technology’s purpose — using it to expand target lists rather than protect civilians. And, they say, the U.S. should be calling out the IDF for that breach of ethics.


“It’s been clear that Israel has been using AI to have what they call ‘power targets’ so they are using it intentionally — as opposed to what it’s supposed to be, which is helping with precision — to target civilians,” said Nancy Okail, president of progressive foreign policy think tank the Center for International Policy. She said the IDF appears to be allowing for a broad definition of these “power targets” — which the military’s intelligence branch defines as “targets with security or perception significance to Hamas or the Palestinian Islamic Jihad.”


“With over 30,000 casualties in Gaza, it’s hard to tell if the IDF is using high-tech AI to identify targets or throwing darts at a map,” said Shaan Shaikh, deputy director and fellow with the Missile Defense Project at the Center for Strategic and International Studies. “The U.S. should use its untapped leverage to shape these operations, but so far, the Biden administration has been unwilling to do so.”


Yet so far, Israel’s use of AI in its military offensive has attracted little attention in Washington’s discussions of the Israel-Hamas conflict.


Human rights groups stateside say they’re focused more on Israel’s decision to target civilian infrastructure than on the technology used to do so. As POLITICO has reported, aid organizations and medical facilities have been struck even after their GPS coordinates were provided to Israeli authorities. And Israel has said it considers civilian infrastructure like hospitals and schools a fair target because Hamas has hidden fighters and weapons in these buildings.


U.S. officials have also largely avoided bringing up Israel’s AI use.


“I’ve been in a number of meetings with folks who are very human rights minded, in meetings with the administration, and I haven’t heard AI come up specifically,” Khaled Elgindy, the program director for Palestine and Palestinian-Israeli Affairs at the Middle East Institute, said.


Asked about the scale of Israel’s use of AI targeting in war, White House Deputy National Security Adviser for Cyber and Emerging Technology Anne Neuberger was quick to pivot to the dangers of the technology in warfare writ large.


“We’re really concerned about AI systems,” Neuberger said in an interview. “That’s why the president worked so quickly to get out his executive order on AI and go from there. Absolutely.”


The executive order she referenced — issued in October — provides guidelines for artificial intelligence use by the U.S. military and intel agencies, while also pushing to counter the weaponization of AI by foreign adversaries.


Another reason the technology may not be getting as much attention in Washington: the opaque nature of Israel’s military operations.


“Nobody has any insight, including, I would say, U.S. policymakers, on how Israel is conducting this war,” said Sarah Yager, the Washington director at Human Rights Watch.



“We see the outcome in the civilian casualties and the destroyed buildings, but in terms of the technology and the proportionality calculus we just have no idea. It’s like a black box.”


But there are signs that Israel may not be employing oversight at the level the U.S. would want. Israel has not signed onto a U.S.-backed declaration pushing for the responsible use of AI in war. Among the more than 50 signatories are the United Kingdom, Germany and Ukraine — the other U.S. ally in an active war.


“My concern is that AI technology has basically opened Pandora’s box,” Ron Reiter, the co-founder and chief technology officer for Israel-based cyber firm Sentra and a former Israeli intelligence officer, said in an interview.


Israel’s AI-targeting system, the Gospel, is used by Unit 8200, its elite cyber and signals intelligence unit, to analyze “communications, visuals and information from the internet from mobile networks to understand where people are,” said Reiter, a former Unit 8200 officer.


“8200 provides the signal for what they think in terms of confidence level — how confident they are that the target does not contain civilians,” he said. While Reiter said he could not speak to potential for mistakes by the Gospel, he said that in general the Israeli targeting systems are “very, very accurate and that is done accurately for the sole purpose of minimizing civilian casualties.”


But experts argue that AI targeting systems carry high risks.


“Given the track record of high error rates of AI systems,” said Heidy Khlaaf, an engineering director of machine learning at the United Kingdom-based Trail of Bits cybersecurity firm, “imprecisely and biasedly automating targets is really not far from indiscriminate targeting.”


The U.S. uses AI to help run drone navigation and surveillance in conflict zones, and is developing systems that would use AI-enabled targeting for offensive strikes. The Pentagon published an AI strategy late last year outlining how it plans to safely integrate AI technologies into its operations.


The U.S. is also testing out a variety of AI technologies that have been developed in Israel for use by U.S. forces — some with U.S. funding.


The United States provides Israel $3.3 billion annually in Foreign Military Financing grants, plus $500 million a year for missile defense programs under a decadelong pact on military aid.


Israel has used the funding in part to bolster its high-tech war-fighting, anti-rocket and surveillance capabilities — though there are no indications the money goes to the Gospel. In addition to strike systems, that includes AI-assisted precision rifle sights, such as SMASH from Smart Shooter, which uses advanced image-processing algorithms to home in on a target. Its use has been recorded in both Gaza and the occupied West Bank.


In October, the Pentagon opened its own counter-drone school at Fort Sill in Oklahoma, where students train with the SMASH add-on.


Some in Congress have moved to push more rules around how the U.S. employs AI tech. Sen. Ed Markey (D-Mass.) and Reps. Ted Lieu (D-Calif.), Donald Beyer (D-Va.) and Ken Buck (R-Colo.) introduced legislation last year that would take steps to ensure AI technologies are not in charge of nuclear command and control systems.


But few are arguing that AI weaponry should be abandoned, whether by the U.S. or by American allies abroad.

“It’s used to decide targeting and best use of munitions and can be used, frankly, to try to minimize or avoid civilian casualties and unintended consequences,” Senate Intelligence Committee Vice Chair Marco Rubio (R-Fla.) said.


Andrew Borene, executive director of global security at Flashpoint and former senior staff officer in counterterrorism at the Office of the Director of National Intelligence, argued that today’s wars are by default the test sites for figuring out what AI tech can work in warfare and what the rules should be.


“These events in Ukraine, as well as Gaza,” he said, “are like laboratories that are sadly doing some of the tragic, kinetic work needed to help us develop better AI policy and ethical considerations on a global level.”


Matt Berg contributed to this report.


 

Politico, 2024
