Microsoft bans US police forces from using corporate AI tools


Microsoft has changed its policy to bar U.S. law enforcement agencies from using generative AI through the Azure OpenAI Service, the company’s fully managed, enterprise-focused wrapper around OpenAI technologies.

Language added Wednesday to the terms of service for Azure OpenAI Service prohibits integrations with the service from being used “by or for” police departments in the U.S., including integrations with OpenAI’s text- and speech-analyzing models.

A separate new clause covers “any law enforcement globally” and explicitly bars the use of “real-time facial recognition technology” on mobile cameras, such as body cameras and dashcams, to attempt to identify a person in “uncontrolled, in-the-wild” environments.

The terms changes come a week after Axon, a maker of technology and weapons products for the military and law enforcement, announced a new product that uses OpenAI’s generative text model GPT-4 to summarize audio from body cameras. Critics were quick to point out the potential pitfalls, such as hallucinations (even today’s best generative AI models invent facts) and racial biases absorbed from the training data (which is especially concerning given that people of color are far more likely than their white peers to be stopped by police).

It is unclear whether Axon was using GPT-4 through Azure OpenAI Service and, if so, whether the updated policy was a response to Axon’s product launch. OpenAI had previously restricted the use of its models for facial recognition through its APIs. We’ve reached out to Axon, Microsoft, and OpenAI and will update this post if we hear back.


The new terms leave Microsoft room to maneuver.

The complete ban on Azure OpenAI Service usage applies only to U.S. police, not police internationally. And it does not cover facial recognition performed with stationary cameras in controlled environments, such as a back office (though the terms prohibit any use of facial recognition by U.S. police).

That aligns with Microsoft and close partner OpenAI’s recent approach to AI-related law enforcement and defense contracts.

In January, Bloomberg revealed that OpenAI is working with the Pentagon on a number of projects, including cybersecurity capabilities — a departure from the startup’s earlier ban on providing its AI to militaries. Elsewhere, Microsoft has pitched using OpenAI’s image generation tool, DALL-E, to help the Department of Defense (DoD) build software for conducting military operations, per The Intercept.

Azure OpenAI Service arrived in Microsoft’s Azure Government product in February, adding compliance and management features aimed at government agencies, including law enforcement. In a blog post, Candice Ling, SVP of Microsoft’s government-focused division Microsoft Federal, promised that Azure OpenAI Service would be submitted to the Department of Defense “for additional authorization” for workloads supporting DoD missions.

Microsoft and OpenAI did not immediately return requests for comment.
