



TEL AVIV, Israel >> U.S. tech giants have quietly empowered Israel to track and kill many more alleged fighters more quickly in Gaza and Lebanon through a sharp spike in artificial intelligence and computing services. But the number of civilians killed has also soared, along with fears that these tools are contributing to the deaths of innocent people.
Militaries have for years hired private companies to build custom autonomous weapons. However, Israel’s recent wars mark a leading example of commercial AI models made in the United States being used in active warfare, despite concerns that they were not originally developed to help decide who lives and who dies.
The Israeli military uses AI to sift through vast troves of intelligence, intercepted communications and surveillance to find suspicious speech or behavior and learn the movements of its enemies. After a surprise attack by Hamas fighters on Oct. 7, 2023, its use of Microsoft and OpenAI technology skyrocketed, an Associated Press investigation found.
The investigation also revealed new details of how AI systems select targets and ways they can go wrong, including faulty data or flawed algorithms. It was based on internal documents, data and exclusive interviews with current and former Israeli officials and company employees.
Israel’s goal after the attack that killed about 1,200 people and took over 250 hostages was to eradicate Hamas, and its military has called AI a “game changer” in yielding targets more swiftly. Since the war started, more than 50,000 people have died in Gaza and Lebanon, according to health ministries there, and nearly 70% of the buildings in Gaza have been devastated.
“This is the first confirmation we have gotten that commercial AI models are directly being used in warfare,” said Heidy Khlaaf, chief AI scientist at the AI Now Institute and former senior safety engineer at OpenAI. “The implications are enormous for the role of tech in enabling this type of unethical and unlawful warfare going forward.”
Among U.S. tech firms, Microsoft has had an especially close relationship with the Israeli military spanning decades.
That relationship, alongside those with other tech companies, stepped up after the Hamas attack. Israel’s war response strained its own servers and increased its reliance on third-party vendors, according to a presentation last year by Col. Racheli Dembinsky, the military’s top information technology officer. As she described how AI had provided Israel “very significant operational effectiveness” in Gaza, the logos of Microsoft Azure, Google Cloud and Amazon Web Services appeared on a large screen behind her.
The Israeli military’s usage of Microsoft and OpenAI artificial intelligence spiked last March to nearly 200 times higher than in the week before the Oct. 7 attack, the AP found in reviewing internal company information. The amount of data the military stored on Microsoft servers doubled between that time and July 2024 to more than 13.6 petabytes — roughly 350 times the digital memory needed to store every book in the Library of Congress. The military’s usage of Microsoft’s huge banks of computer servers also rose by almost two-thirds in the first two months of the war alone.
Microsoft declined to comment for this story and did not respond to a detailed list of written questions about the cloud and AI services it provides to the Israeli military.
In an expansive statement on its website, the company says “respecting human rights is a core value of Microsoft” and it is committed “to champion the positive role of technology across the globe.” In its 40-page Responsible AI Transparency Report for 2024, Microsoft pledges to “map, measure, and manage generative AI risks throughout the development cycle to reduce the risk of harm,” and does not mention its lucrative military contracts.
Advanced AI models from OpenAI, the maker of ChatGPT, are provided through Microsoft’s Azure cloud platform, where they are purchased by the Israeli military, the documents and data show. Microsoft has been OpenAI’s largest investor.