Microsoft Will Not Provide Facial Recognition Software to California

Posted on April 17, 2019 by Paul Thurrott in Microsoft with 19 Comments

Citing human rights concerns, Microsoft announced that it will not provide facial recognition software to the state of California.

The announcement was made by Microsoft president Brad Smith, who was speaking at a Stanford University conference on human-centered artificial intelligence. It was first reported by Reuters, which notes that Microsoft concluded that use of the software would lead to innocent women and minorities being disproportionately held for questioning, because the AI had been trained mostly on pictures of white men.

“Anytime [a police officer] pulled anyone over, they wanted to run a face scan” against a database of suspects, Smith said at the event. After thinking through the uneven impact, “we said this technology is not your answer.”

Smith also said that Microsoft rejected a deal in which it would have supplied facial recognition software for cameras installed all over an unnamed city (most likely in China, given that the nonprofit Freedom House had deemed that country "not free"). Smith said the use of this software would have "suppressed freedom of assembly" in that city.

Microsoft is, of course, interested in selling its technology. And Smith pointed to one example of its facial recognition software being used in an acceptable way: It is being deployed in an unnamed U.S.-based prison because “the environment would be limited and … it would improve safety inside the” prison.

For the past year, Microsoft has been pushing for ethics in AI, and it more recently revealed that shortcomings in its facial recognition capabilities require that potential customers be transparent about how they would use this technology. Microsoft would like this technology to be regulated.


Comments (19)


  1. Daekar

I think the major tech companies are incapable of seeing the world through any lens except identity politics anymore; it's insane. Microsoft has made the right choice for the wrong reason. Widespread government facial recognition would be bad for ALL humans, gender or skin color be damned.

    I'm glad to see that they appreciate how abusable the technology is and would like to see it regulated.

  2. igor engelen

    What bothers me most is that they seem to be very aware of the fact that they are training their AI with incomplete data. Why on earth would you do that?

    • wright_is

      In reply to Igor Engelen:

Because you need live people to test this on, and you need to get them to sign a waiver to have their faces used for recognition. You can slam in public domain photos for the database, but you still need real people to test on. So you start with your development team and people on campus, and work outward.

      If your team is largely white males, then the bias starts straight away. And then they pull in their mates for further tests...

They should be setting out the test plan before they start and seeing to it that they get a wide spread of ethnicities. In the sample database there shouldn't be any "minorities"; there should be equal numbers of people from all ethnic roots.

      But, generally, people just take what is at hand and if you are working and living in a white dominated area, you will have a dominance of white candidates to test against. It is just easier to grab people in the corridor or off the street than it is to search for a balanced test group.

      If they were doing the research in China or Japan, you'd probably find that it has a good rate of success with oriental faces, but poor with African, Hispanic, Caucasian etc. faces.

I live in a small town in rural Germany. I'd guess it's 70% German, 20% Eastern European, 9% Turkish, and 1% "other", and I, a white Englishman, fall into the "other" category. It would be very difficult to get a good racial spread doing the testing here.

      Obviously, I have no idea about the actual make-up of the population around the research facility where MS developed this, so it is only a hypothesis.

    • lvthunder

      In reply to Igor Engelen:

Why would you do what? Tell the truth? Imagine if they had gone through with this deal and California determined it was discriminating against women and minorities. I would bet the liberal politicians in California would sue.

  3. jim_may

    Microsoft won't give it to California, but they did give it to Commie China -

    Look it up!

  4. chrisrut

I think MS's point is that they are selling a programmable/configurable/teachable system, and it is therefore no better than the entity doing the programming, configuration, and teaching.

    The mantra bears repeating here: you will know when AI has "arrived" when computers start doing things that are really stupid.

  5. AnOldAmigaUser

Anyone complaining about this being a PC move is likely to be someone who stands next to no chance of being affected by a false positive and put through the wringer. If you think there is no bias in law enforcement, drive I-95 down the east coast from NY to Miami, and pay attention to the cars that are pulled over.

    The real problem here is that another company will sell the technology, and it will be equally flawed. It is not called "Artificial" intelligence for no reason. People tend to think that any information coming from a computer is accurate, despite the fact that much of it is entered or collected by people, and manipulated by algorithms created by people; all of them at least as stupid as us.

  6. Mark from CO


    I agree with the other comments below. I would also add this is an admission that this AI product is unfinished and doesn’t work well. This leads to a couple of other obvious questions:

    1. Why is Microsoft selling the software to others if it contains the same significant flaw/bug?

    2. Are all Microsoft's AI products equally flawed?

  7. Pierre Masse

    It means that they will provide it as soon as the software has improved.

  8. Stooks

    How long will that actually last? If CA is taking this step, then they should not use fingerprints or DNA either.

    • MachineGunJohn

      In reply to Stooks:

      That's just pure stupidity. It's not like they weren't going to physically take a long careful look at the person to double check the match. If anything this would have helped them avoid detaining people mistakenly. Brad and the other ultra libs need to be forced out for financial malfeasance. If the board doesn't get this crap in check there will be a shareholder lawsuit coming.

      • lvthunder

        In reply to MachineGunJohn:

        I believe they did this because they knew the software would cause California to sue them for selling them software that discriminates women and minorities. That lawsuit wouldn't be pretty.

        • MachineGunJohn

          In reply to lvthunder:

          This isn't just about California; that's just the tip of the iceberg. And notice nowhere was there any transparency about what the alleged discrepancy in accuracy is. This issue was raised years ago. It doesn't take that long to add another couple hundred thousand diverse photos to your database and retrain your AI. I doubt it's off by any statistically significant amount at this point. If I were an MS employee working on this, I'd be fuming. Again, California wasn't going to detain anyone based on an AI match here, and this was clearly more likely to prevent a mistaken identity than cause one. This was an entirely political decision, not a technical one, and it has a clear negative impact on MS shareholders. Others now will avoid MS to avoid any false public misconstruing of their policies. Letting your political bias outweigh facts is intellectually dishonest. Doing it to the point where it negatively affects revenue is reason to boot him out.

      • skane2600

        In reply to MachineGunJohn:

        You're dreaming. Shareholders' rights are a legit issue, but the political leanings of the board wouldn't even make the top 10 list of their concerns. If they didn't sue over the Windows 8 fiasco or the ill-fated acquisition of Nokia, they wouldn't sue over such a relatively minor business opportunity.

  9. glenn8878

    Microsoft doesn't own the market for facial recognition software. Governments will just go elsewhere to buy it. It's awfully odd to admit that their software is faulty and will cause wrongful arrests of people who are not white and male. Usually, pictures are matched against a database, so if a picture doesn't exist, there's no match. Governments are already on the hook for disproportionate arrests of minorities. Until the bridge is crossed, no one will know the impact and no improvements can be made. Technology will still march ahead without Microsoft.

  10. lwetzel

    "Microsoft would like this technology to be regulated."

    And they are correct, and it needed to be done 10 years ago. However, it won't stop governments from creating their own software. Things are getting 1984ish!

    • chrisrut

      In reply to lwetzel:

      I brought the positive interpretation of the term "cutting edge" into the vernacular with a marketing campaign for Epson back in the 1980s, but I've never forgotten the sting of its original meaning. Unintended consequences baby, unintended consequences...

      Social policy ALWAYS lags technology. 'Twas so in the age of Morse, when electricity first emerged from Leyden jars, and it's true now as we seek ways to extend intelligence beyond the limits of our skulls. To set policy, the unintended (and largely unforeseeable) consequences of technology must first be felt.