Microsoft Blocks Police Use Of Its AI Tool For Facial Recognition
Language recently added by Microsoft reaffirms that its Azure OpenAI Service may not be used by law enforcement for facial recognition. The added bullet points also state that law enforcement may not use the service's integrations with OpenAI's current, and most likely future, image-analyzing models. At the center of all of this is a collective concern among citizens that AI will not only be used to track their every move, but will also be prone to errors such as racial bias or improperly identifying a suspect.
Microsoft’s newly added terminology states that its AI tool may not “be used for facial recognition purposes by or for a police department in the United States; or be used for any real-time facial recognition technology on mobile cameras used by any law enforcement globally to attempt to identify individuals in uncontrolled, 'in the wild' environments, which includes (without limitation) police officers on patrol using body-worn or dash-mounted cameras using facial recognition technology to attempt to identify individuals present in a database of suspects or prior inmates.”
The added terminology follows news that Axon, a company that makes weapons and products for both military and law enforcement customers, announced it is using OpenAI’s GPT-4 generative text model to summarize audio captured by body cameras worn by law enforcement officers and military personnel.
The White House also recently announced an AI policy that includes provisions requiring federal agencies to provide clear opt-out options for technologies like facial recognition. The point of the new provisions is to ensure that individuals can choose an alternative method of identification.
Regardless of any safeguards, someone will always find a way to use AI for nefarious purposes. It is reassuring, however, to know that companies like Microsoft are at least attempting to keep AI in check. It is also important to remember that governments around the world can still create their own AI-enabled devices and software to do the same thing, and most likely already have.