OpenAI has claimed that it’s “leading the way” when it comes to the safe, ethical deployment of artificial intelligence. Weirdly, it has also decided to partner with a company that is actively working to develop killer robots for the U.S. military.
This week, OpenAI announced a new partnership with Anduril Industries, a defense contractor co-founded by Palmer Luckey, the creator of Oculus. Luckey’s little company has managed, in the space of seven years, to build itself into a pivotal player in the defense industry. It has done that by churning out drones for the U.S. military, some of which are designed to kill people.
The new partnership between the drone builder and Silicon Valley’s hottest AI vendor will see the two companies “develop and responsibly deploy advanced artificial intelligence (AI) solutions for national security missions,” a press release announcing the deal states. What that means, practically speaking, is the integration of OpenAI’s software into Anduril’s platform, Lattice, a flexible, AI-powered system designed to serve a variety of defense needs. It appears that OpenAI’s models will now be used to turbocharge Anduril’s product.
“OpenAI builds AI to benefit as many people as possible, and supports U.S.-led efforts to ensure the technology upholds democratic values,” said OpenAI’s CEO, Sam Altman, in a statement shared Wednesday. “Our partnership with Anduril will help ensure OpenAI technology protects U.S. military personnel, and will help the national security community understand and responsibly use this technology to keep our citizens safe and free.”
In a statement, Anduril’s CEO and co-founder, Brian Schimpf, said the partnership would allow his company to utilize OpenAI’s “world-class expertise in artificial intelligence to address urgent Air Defense capability gaps across the world.”
Although most of Anduril’s products are defensive technologies designed to protect U.S. service members and vehicles, the company also sells what has been dubbed a “kamikaze” drone. That drone, the Bolt-M, is powered by the company’s artificial intelligence software and comes equipped with “lethal precision firepower” that can deliver “devastating effects against static or moving ground-based targets,” the company’s website brags. LiveScience notes that the Bolt-M is designed to fly into structures and explode. Anduril is also said to be developing “drone swarms” that could augment U.S. Navy missions.
This is a weird, if predictable, development for OpenAI, which has claimed it wants to steward AI’s development in a healthy direction but has, since its ascent to the heights of the tech industry, increasingly dispensed with the ethical guardrails that defined its early years.