Israel unleashes AI to pinpoint targets in Gaza bombings.

TLDR:

Israel’s military used an AI tool called Lavender to identify bombing targets in Gaza, marking some 37,000 Palestinians as suspected Hamas operatives. The system prioritized speed over accuracy, contributing to thousands of civilian casualties. Lavender was trained on features associated with known Hamas operatives and flagged individuals based on their similarity to those operatives. The military permitted collateral civilian casualties at a ratio of up to 20 civilians for every lower-level Hamas operative targeted. The use of AI in warfare has raised concerns about mass surveillance and the killing of civilians.

Article:

Israel’s military has been using artificial intelligence to help choose its bombing targets in Gaza, according to an investigation by the Israel-based publications +972 Magazine and Local Call. The AI tool, Lavender, was developed after Hamas’ October 2023 attacks on Israel and marked 37,000 Palestinians as suspected “Hamas militants” for targeted assassinations. The system used features associated with known Hamas operatives to rank individuals in Gaza and determine strike targets.

Israeli intelligence officers were not required to independently verify Lavender’s targets before bombing them, effectively serving as a “rubber stamp” for the machine’s decisions. The system was reported to be about 90 percent accurate, meaning roughly one in ten people it marked was misidentified; strikes based on mistaken identities or loose affiliations with Hamas killed civilians. During the war, the military permitted collateral civilian casualties in proportion to a target’s rank within Hamas, a policy that resulted in hundreds of civilian deaths.

The use of AI-driven warfare in Gaza has raised concerns about mass surveillance and the killing of civilians. The deployment of AI tools to target suspected Hamas operatives has caused civilian casualties and drawn backlash from human rights advocates. The weaponization of surveillance technologies and AI in warfare poses ethical dilemmas around the protection of civilian lives.

Mona Shtaya, a non-resident fellow at the Tahrir Institute for Middle East Policy, highlighted the troubling implications of AI-driven warfare in Palestine and the potential export of Israeli defense technologies abroad. The use of facial recognition and AI systems to target suspected Hamas operatives raises questions about the ethics and legality of deploying advanced technologies for military purposes.