Citing human rights concerns, Microsoft announced that it will not provide facial recognition software to the state of California.
The announcement was made by Microsoft president Brad Smith, who was speaking at a Stanford University conference on human-centered artificial intelligence. It was first reported by Reuters, which notes that Microsoft concluded that use of the software would lead to innocent women and minorities being disproportionately held for questioning, because the AI has been trained mostly on pictures of white men.
“Anytime [a police officer] pulled anyone over, they wanted to run a face scan” against a database of suspects, Smith said at the event. After thinking through the uneven impact, “we said this technology is not your answer.”
Smith also said that Microsoft rejected a deal in which it would have supplied facial recognition software for cameras installed throughout an unnamed city (most likely in China, given that the nonprofit Freedom House had deemed the country in question "not free"). Smith said the use of this software would have "suppressed freedom of assembly" in that city.
Microsoft is, of course, interested in selling its technology. And Smith pointed to one example of its facial recognition software being used in an acceptable way: It is being deployed in an unnamed U.S.-based prison because “the environment would be limited and … it would improve safety inside the” prison.
For the past year, Microsoft has been pushing for ethics in AI, and it more recently revealed that shortcomings in its facial recognition capabilities required potential customers to be transparent about how they would use the technology. Microsoft would like this technology to be regulated.