Facebook Starts Giving Users a Reputation Score to Tackle Fake News

Facebook has been at the centre of the controversy surrounding the spread of fake news ever since the U.S. Presidential Election, and the social networking giant has come under even more fire following the Cambridge Analytica privacy scandal.

So far, Facebook has implemented multiple measures to help users detect fake news on the platform and to reduce its spread. The company is now taking things a bit further by rating users on trustworthiness. Yes, Facebook is assigning users a reputation score to better fight fake news on its platform.


Exactly how the new reputation score is being used is unclear.

Facebook says it measures a number of different factors to determine a user's reputation score, specifically monitoring things like which publishers on Facebook a user considers trustworthy and what kinds of posts they flag as false. The point of the reputation score is to gauge how trustworthy a user is, so the company can use that signal to better fight fake news.
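
Facebook hasn't shared the formula behind the score, but purely for illustration, here is a minimal Python sketch of how one signal it mentions, whether a user's "false news" flags line up with fact-checkers' eventual verdicts, could be folded into a 0-to-1 score. Every name, weight, and data shape below is an assumption made for the example, not Facebook's actual implementation.

```python
# Hypothetical sketch only: Facebook has not disclosed how its score is computed.
# Assumed input: a user's history of "false news" reports, each carrying the
# fact-checkers' eventual verdict on the reported post.

def trust_score(report_history, prior=0.5, prior_weight=5):
    """Estimate a 0-to-1 trustworthiness score from past reporting accuracy.

    A smoothed accuracy: with no history the score sits at the prior, and each
    report that fact-checkers confirm (or reject) nudges the score up (or down).
    """
    confirmed = sum(1 for r in report_history if r["verdict"] == "false")
    total = len(report_history)
    return (confirmed + prior * prior_weight) / (total + prior_weight)

# Example: 8 of a user's 10 flags were later confirmed as false news by fact-checkers.
history = [{"verdict": "false"}] * 8 + [{"verdict": "not_false"}] * 2
print(round(trust_score(history), 2))  # 0.7
```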

Facebook already allows users to report fake news on the platform, but the problem has been that users who simply disagreed with certain articles or publishers were falsely reporting them as fake news. As a result, Facebook's fact-checkers waste a ton of time reviewing reports from users who are flagging content merely because they disagree with it. The new reputation score could help prevent that, as Facebook will now give more weight to reports from users with higher trustworthiness than to those from users with much lower trustworthiness.
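
Again as a hypothetical sketch rather than Facebook's actual code, the snippet below shows how reports could be weighted by the reporter's trust score, so that posts flagged by trusted users rise to the top of the fact-checking queue even when they have fewer reports than posts flagged mostly by low-trust users:

```python
from collections import defaultdict

# Hypothetical sketch: rank reported posts so fact-checkers review the ones flagged
# by high-trust users first. Each report pairs a post ID with the reporter's score.
def rank_reports(reports):
    weight_by_post = defaultdict(float)
    for post_id, reporter_trust in reports:
        weight_by_post[post_id] += reporter_trust  # low-trust reports add little weight
    return sorted(weight_by_post.items(), key=lambda kv: kv[1], reverse=True)

reports = [
    ("post_a", 0.9), ("post_a", 0.8),                       # two reports from high-trust users
    ("post_b", 0.10), ("post_b", 0.15), ("post_b", 0.10),   # three reports from low-trust users
]
print(rank_reports(reports))  # post_a outranks post_b despite having fewer reports
```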

The reputation score still raises a ton of questions, though. Facebook is being very tight-lipped about whether the score affects the News Feed itself, and whether users with a lower reputation score are less likely to have their posts shown to their friends in the News Feed. It's also unclear whether publishers whose audiences are made up largely of users with low reputation scores are penalized in the News Feed.


Conversation (18 comments)

  • overseer

    21 August, 2018 - 10:54 am

    Well this should certainly reduce the amount of double-wrong bad-think on the platform.

  • jimchamplin

    Premium Member
    21 August, 2018 - 11:01 am

    Wonder how the trolls are going to figure out how to use this as a weapon.

  • Daekar

    21 August, 2018 - 12:12 pm

    You know, years ago I used to think I was a bit paranoid when it came to large companies and censorship. If the last two years have taught me nothing else, it's that I wasn't the least bit paranoid. It's like Christmas for control freaks.

  • stvbnsn

    Premium Member
    21 August, 2018 - 12:57 pm

    The best way to fight fake news on Facebook is to not get your news from the Facebook feed.

    • wright_is

      Premium Member
      22 August, 2018 - 3:03 am

      In reply to stvbnsn:

      Way ahead of you. None of my immediate family are on Facebook. We get our news from reputable, independent sources.

  • karlinhigh

    Premium Member
    21 August, 2018 - 3:06 pm

    "the problem was that users who didn’t agree with certain articles or publishers were falsely reporting those as fake news"

    Maybe Facebook can give a reputation decrease if someone says "fake" when they meant "disagree"?

  • skane2600

    21 August, 2018 - 3:56 pm

    Well, the important thing is not having reputation determined by users. That almost always leads to abuse.

  • glenn8878

    21 August, 2018 - 8:31 pm

    Considering the only news that gets through Facebook’s censors is pre-screened for content, all of Facebook’s news is fake. Goodbye Facebook.

    • skane2600

      23 August, 2018 - 2:17 am

      In reply to glenn8878:

      I always thought that Facebook was a goofy place to get news, but your statement doesn't really make any sense. Not to mention that newspapers have been editing content for centuries.

  • Awhispersecho

    Premium Member
    21 August, 2018 - 9:18 pm

    First of all, the "fake news" term has got to die. People seem to forget what it means. Secondly, no one should be censoring or labeling anything. It's up to the readers to decide. As it currently stands, people and companies on a certain side with a certain agenda censor or label anything that goes against their beliefs or agenda as fake news. Which, since it's Facebook and Twitter doing it to certain groups with certain beliefs, everyone seems to love. It's amazing how, according to them, every single thing that is labeled as fake or being censored comes from people or headlines that don't agree with them.

    What if Apple started its own social networking site and they and their users labeled everything positive Paul and Mary Jo said about Microsoft as fake news? What if they labeled every article Paul wrote that criticized an Apple product or service as fake news and then banned him? Would that be OK? What if the NRA created a huge social network site and censored and labeled every article that referenced gun violence and then banned those that wrote about it? Would people still be OK with it?

    These are the questions that need to be asked before giving tech companies and general users the power to use their personal opinions and beliefs to decide what can be written and read by others. Because that's what is going on now. It's not about fake news, it's about shutting down anyone or anything that doesn't agree with them. This is how you become China.

    https://newyork.cbslocal.com/2018/04/24/china-assigns-every-citizen-a-social-credit-score-to-identify-who-is-and-isnt-trustworthy/

    • wright_is

      Premium Member
      22 August, 2018 - 3:02 am

      In reply to Awhispersecho:

      I agree up to a point, except where it becomes obviously false and misleading. Certainly the term "fake news" needs to disappear.

      A classic example being Trump claiming his inauguration had more spectators than Obama's, whereas the photo evidence showed a sparse crowd for Trump and a large crowd for Obama. I don't want to get into politics here, who is better or worse, just the simple fact that there were fewer people at Trump's inauguration.

      Therefore what Trump wrote was inaccurate and misleading. Such verifiable facts, I think, should be labelled as inaccurate.

      Holocaust denial is another one (in fact, over here, it is illegal to deny the Holocaust).

      On the other hand, you have conspiracy theories, like the moon landings never happened or the JFK assassination. These are in a grey area. I don't see the problem in marking them as conspiracy theories. They certainly shouldn't be quashed.

      It is a fine line to tread, but a lot of what is "obviously" false or misleading to well-educated people, or to people who take an interest in a subject, can still be taken as fact by the masses, who don't know the ins and outs of the topic at hand. For them, it would certainly be useful to have a notice indicating the credibility of the article.

      • Daekar

        22 August, 2018 - 12:10 pm

        In reply to wright_is:

        Except the question is always, "who determines what's credible?" For example, science tells us that antibiotics have a period of time after which they have completely cycled out of the system of an animal, and the USDA (I think it's them) mandates that this period must be complete before an animal can enter the food system. However, we are warned not to purchase livestock products unless they guarantee that the animal was never given antibiotics for any reason. Ethical and epidemiological problems with this aside (ask a farmer how much they like to watch their animals suffer from disease or spread infection to the whole flock/herd), the zeitgeist seems to believe it. What happens when someone posts something warning against eating products from animals that were given antibiotics? It's unscientific, unethical, dangerous, and impractical to *not* give antibiotics when necessary. Are they going to get a credibility ding for that? Or will the fact that they're telling lies about one of this generation's favorite whipping boys mean that they get away with it or get rewarded?

        The problem is, Silicon Valley and the majority aren't always right. In fact, they're frequently wrong about a lot of very important things. Enshrining either via a system like this is just dangerous.

      • crmguru

        Premium Member
        22 August, 2018 - 7:21 pm

        In reply to wright_is:

        Why is it up to Facebook to fact-check the news? I can fact-check the news. It is not a service in which I trust that they're going to do a good job. I'm not buying their services or trading my time for their service in order to be fully informed on any specific political issue or news item. I pay for the Wall Street Journal or read my local news to be informed on these types of subjects.

  • kjb434

    Premium Member
    21 August, 2018 - 9:45 pm

    Reminds me of the anime "Psycho-Pass". Uh-oh, your score isn't in a good range, goodbye. In the anime you were killed if your score wasn't in the safe range.

  • wright_is

    Premium Member
    22 August, 2018 - 2:32 am

    Aha, Mark has just finished reading Qualityland by Marc-Uwe Kling and hasn't realised it was satire!

  • crmguru

    Premium Member
    22 August, 2018 - 7:19 pm

    This is not going to end well. Facebook's insistence on trying to understand or somehow moderate or mediate the validity of claims or news is crazy. Doing this programmatically or using some sort of AI will only end in disaster. They need to stop trying to be an arbiter of truth like a newspaper and stick to cat videos and pictures of our kids.

    • Ezzy Black

      Premium Member
      24 August, 2018 - 9:39 am

      In reply to crmguru:

      Newspapers and television quit being arbiters of the truth long ago. Fox News won a court case twenty years ago using the argument that they aren't a news organization, they're an entertainment organization, and therefore have no responsibility to be factual. Really, under oath, in a court of law (in Tampa if you want to Google it).

  • William Clark

    22 August, 2018 - 10:30 pm

    Can't wait to see how this is co-opted and used against users and businesses on FB.
