YouTube Wants to Address Misinformation Coming From ‘Borderline’ Videos

Posted on February 18, 2022 by Laurent Giret in Social, YouTube with 35 Comments

YouTube is exploring new measures to prevent videos that include misinformation from spreading on the platform. Neal Mohan, YouTube's Chief Product Officer, detailed the challenges posed by “borderline” videos that don't violate YouTube's terms of service but can still be responsible for making misinformation go viral.

As one of the few Internet platforms with billions of users, YouTube has been trying to strike a balance between free expression and its responsibility to remove fake news and problematic content. “We faced these challenges early on in the COVID-19 pandemic, such as when a conspiracy theory that 5G towers caused the spread of coronavirus led to people burning down cell towers in the UK. Due to the clear risk of real-world harm, we responded by updating our guidelines and making this type of content violative,” explained Mohan.

However, what the YouTube exec calls “borderline” videos is becoming a big issue for the platform. “These are videos that don't quite cross the line of our policies for removal but that we don't necessarily want to recommend to people,” Mohan explained. Even if the company updates its algorithms to stop recommending borderline videos, these videos can still get views when they're shared or embedded on other websites.

This is where things get really complicated for YouTube. For now, Mohan said, YouTube isn't ready to go as far as disabling the share button or breaking the link on borderline videos, because “preventing shares may go too far in restricting a viewer’s freedoms.” As an alternative, YouTube is considering adding interstitials to borderline videos that would warn viewers, before they start watching, that the video may include misinformation.

“We’ll continue to carefully explore different options to make sure we limit the spread of harmful misinformation across the internet,” Mohan said. This is a complex problem for a platform that operates in more than 100 countries with very different cultures. “Beyond growing our teams with even more people who understand the regional nuances entwined with misinformation, we’re exploring further investments in partnerships with experts and non-governmental organizations around the world,” Mohan also announced.


35 responses to “YouTube Wants to Address Misinformation Coming From ‘Borderline’ Videos”

  1. trparky

    Who made them the arbiters of what is truth and what is... not? Go home YouTube, you're drunk.

    • jimchamplin

      I believe they did. They own the service, so they get to decide the rules for its use. Same as how the owner of a building gets to decide who comes in and who doesn’t.


      These people are free to believe and say whatever they wish, which makes them liable for all of the consequences and repercussions of their speech. If spouting made up drivel and pure woo makes the public sick of hearing them, well, that’s too bad.

      • CasualAdventurer

        Actually they don't have the power to decide what stays on their platform and what doesn't, not if they want to enjoy Section 230 protections. According to S230 once they start editing content they become a publisher and not an open platform, meaning they can be sued for violations of free speech. The only items they are allowed to remove legally are calls to violence. Most of what they call misinformation is really opinion, and the first amendment guarantees that right to all Americans in all public squares. As long as YouTube doesn't remove opinion pieces they are a public square, but when they do remove them they become a publisher and lose their protection. At least that is how the law is written, if not enforced.

        • davidjhupp

This is technically incorrect. First, Section 230 allows for moderation, and it is not limited to moderating calls to violence. The line past which a website becomes the "publisher" is a bit blurry, but all of the actual case law indicates that this kind of moderation does not threaten Google's Section 230 protection.


Second, even if Google did lose Section 230 protection, YouTube still would not have any legal obligation to uphold "free speech." Losing Section 230 protection would not compel YouTube to publish anything, including anything it might otherwise wish to moderate. What losing Section 230 would do, however, would be to make YouTube liable for what it does publish, i.e., it would be liable for defamation, copyright infringement, etc.


The end result of losing Section 230 protection would actually be that YouTube would remove more content, because it would make Google more cautious about what it does publish. This is, incidentally, the whole reason that Section 230 was enacted in the first place.


          Google has no obligation to uphold the "free speech" of anyone. US government users on the YouTube platform have an obligation not to restrict engagement with government-published content on the platform, but this legally isn't Google's problem. Legally, Google can censor anyone it feels like censoring, for any reason it chooses, and the users basically just have to suck it.


          Yes, people like Glenn Beck have filed lawsuits alleging otherwise, but anyone who pays a filing fee can file anything they feel like filing; it doesn't mean it's remotely accurate or legally sound. These lawsuits get immediately thrown out effectively 100% of the time, because they have no legal merit whatsoever. They are just publicity stunts, and they are a waste of the courts' limited time and resources.

          • Greg Green

            The act provides for “material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable,”


            ‘Borderline’ videos are usually none of these.


At one point plate tectonic theory was misinformation. Nearly everything in science today was at one point misinformation, and fifty years from now a good portion of current misinformation will have become fact.


It was, I believe, the dean of Harvard Medical School who told incoming students, “Half of what we’re going to teach you is wrong; we just don’t know which half yet.”


Silencing merely objectionable speech is a temporary benefit, but a long-term loss.

  2. Yaggs

    Good thing people don't actually talk to each other anymore... Imagine all the misinformation that could be spreading that way. Just picture it... the government keeps an accuracy score for people and when your phone rings you get a note on the caller ID screen that says "Warning: This person may spread mis-information"

  3. Donte

My advice: get really good at configuring your ad-blocking/privacy settings. Use every tool available to NOT give YouTube (Google) your data. I never, ever see ads. There are extensions that even fast-forward through in-video sponsor segments. There are extensions that blast all trackers with random data to confuse them. Encrypt your DNS so where you're going can't be seen.


    Your data is GOLD to them. If the mostly young, woke, YouTube employees want to be God of censorship, make them PAY FOR IT.


It is not a matter of if but when people finally have enough of this BS and new services replace them. There are plenty trying, and eventually there will be replacements. The likes of Facebook, Twitter, and Google need to pay a big price, through big government fines and through users denying them data. Apple is probably right up there with the complete and total FARCE of a privacy stance it flaunts, as it pays the CCP $275+ billion to serve up the privacy of 1.4 billion Chinese so it can sell more Apple hardware.

    • rob_segal

The only thing I'll say about this is that blocking ads also hurts content creators.

      • arjay

        I’m okay with that. I’ll pay for content I like. If you make it compelling enough you don’t have to trick and track users to get them to read your stuff.

    • aaron_salter

      I want to do this more! Please reply with some tool names to use for this!

  4. simont

    I don't envy the people who have to decide on the specific content as their decision will most likely make some people unhappy no matter what they do.

  5. scovious

    That's ironic coming from an advertising company that peddles borderline lies and misinformation as a business model.

  6. arjay

    If you favor censorship, it tells me you do not know anything about history.


    Any tool like that could, in time, be used against you.

  7. Greg Green

Not directly related to YouTube, but the same righteous trend:


    From BMJ (British Medical Journal): Facebook versus the BMJ: when fact checking goes wrong


    Howard Kaplan, a retired dentist from Israel, posted a link to a BMJ investigation article in a private Facebook group.


    “The Facebook Thought Police has issued me a dire warning. Facebook’s ‘independent fact-checker’ doesn’t like the wording of the article by the BMJ. And if I don’t delete my post, they are threatening to make my posts less visible.”


    Soon, several BMJ readers were alerting the journal to Facebook’s censorship. Over the past two months the journal’s editorial staff have been navigating the opaque appeals process without success, and still today their investigation remains obscured on Facebook.


    Kamran Abbasi, The BMJ’s editor in chief: “We should all be very worried that Facebook, a multibillion dollar company, is effectively censoring fully fact checked journalism that is raising legitimate concerns about the conduct of clinical trials.


    “Users should be worried that, despite presenting itself as a neutral social media platform, Facebook is trying to control how people think under the guise of ‘fact checking.’”


  8. red.radar

In some respects this is a good thing, because these algorithms just push people into echo chambers and condition them into extremists. Add some stressed mental health and social media becomes the breeding ground for terrorism.


The only issue is that these types of policies are not transparent in execution, so it's easy to condemn this as censorship.



  9. lvthunder

    They already do this. Just ask almost every conservative who posts videos to YouTube. If they are going to do this they should be held financially responsible when they make mistakes. They should also notify the content creator that this has been done to their video.

    • bluvg

      Any stats for that? I get tons of Ben Shapiro, Jordan Peterson, etc. recommendations, along with Prager U ads, etc. I agree that the content creator should be notified.

      • miamimauler

        @bluvg


        "Any stats for that?"


        Of course he doesn't. It's just the usual right wing 'playing the victim' behavior.

      • lvthunder

Well, Prager U has sued YouTube over some of this, and Glenn Beck complained about it a couple of weeks ago. They have taken some of his videos out of the search results as well.

        • bluvg

Thank you for pointing that out; I didn't know about the Prager U suit, but it looks like it was dismissed? Although it appears that was more because it's a business and it's pretty much their call how to run their service (which, ironically, is a conservative principle). I would want to see their specific claims vs. YT's ToS; I'm not saying I have reason to doubt Prager U's claims, but they seem to get plenty of traction on YT. As for Glenn Beck, well... he claims a lot of things.


          I think the bigger question may be whether YT applies its ToS fairly (though it's entirely their prerogative how they run their service). Left, right, or outer space, if applied reasonably consistently, seems fair to me. Everyone will (and does) whine. It's just ironic that the right, while railing against infringements of free speech and about the media and millennial whiners and perceived oppression in the culture, cynically exploits whining of their own oppression while seeking to silence or intimidate the media ("enemy of the people") and dissenting voices. Without one hint of hypocrisy.

  10. bbennett40

    Hail to our overlords. Deciders of all truth and misinformation.

  11. Serling

    The future is just Fahrenheit 451 modified for “regional nuances”.


  12. thalter

I have no problem with this. Google clearly says they are not removing this "borderline" content - they are just not going to be promoting it to others algorithmically. One of the big problems with the algorithms used by YouTube, Facebook, and others is that they use popularity as one of their signals, which can quickly spread misinformation. Just because something is popular doesn't make it true.

    • lvthunder

      There is a difference between recommending a video and that video not showing up when you search for it.

      • thalter

        Read the article. I saw nothing about blocking borderline videos from searches. They are just not going to recommend them. Nothing to see here.

        • lvthunder

          A couple of weeks ago Glenn Beck showed that YouTube was removing some of his videos from search.

          • miamimauler

@lvthunder


            Lol, Glenn Beck. That lying filth of a human being wouldn't know the truth if it slapped him in the face.

  13. darkgrayknight

    They need to not worry about it. War of the Worlds was "misinformation" in that people acted on what they thought was real, though it was just a story read over the radio. What about all the tabloids? People think those are also real. I don't think it is possible to monitor videos for "false" information without eventually just blocking/flagging things that might be true, but don't sound like what is desired or fits what society at large wants to believe.

  14. Daekar

    How many different ways can you spell "censorship?" This nonsense is getting old, they just need to make a press release saying they want to control what you're allowed to say on their platform and make a continuously updated list of sentimenta non grata.

  15. bluvg

    "You like flat earth videos? Here are some other things you might like...."


I have the same reflex many have against any kind of censorship, but the oncoming wave of deep fakes and synthesized audio, along with the algorithmically cracked nut of Homo sapiens' weaknesses, presents society-level epistemological issues. (Perhaps this is how computers "take over the world"?) It's going to get worse before it gets better--if it ever gets better.


    Solving this issue without materially affecting free speech is a "hard computer science problem," as Microsoft might put it. The implications of not solving it are huge. For existential reasons, we may have to decide what tradeoffs we can accept, just as free speech is already not unlimited.

    • lvthunder

      Maybe we just need to educate people so they aren't so gullible.

      • bluvg

        I admire your optimism, but looking at the state of the world... ? Considering there are several times (that we know of) where it came down to a single person's decision--despite significant pressure otherwise--that prevented nuclear war and the end of life as we know it, more practical caution and less hubris around our high ideologies is probably warranted.

  16. me

I can rarely find anything interesting on YouTube these days. I spend more time on Rumble. I'm just watching sites such as YouTube kill themselves.