Meta Launches Community Notes on Facebook and Instagram

Brace yourself for a new influx of misleading posts on Facebook and IG, with Meta announcing that it’s launching the first stage of its Community Notes rollout early next week.

Well, it probably won’t actually lead to a sudden rush of misleading posts, as people are posting way less to Meta’s apps than they have in the past anyway, and a sudden switch to crowd-sourced fact-checking isn’t going to change that. But evidence does suggest that some posts, on some topics, are now likely to become more visible within both apps, as a result of Meta’s moderation shift.

That’ll bring the company more into line with the preference of the Trump Administration, which has long opposed censorship of Trump’s statements specifically. But again, there could be some other problems, which, at Facebook’s scale, could be significant.

First off, here’s a look at how Community Notes will work on Facebook, with an example of how a contributor submits a note on a post.

As you can see in these example screens, Meta’s Community Notes system will function in exactly the same way as X’s Community Notes process, with Meta’s initial model built on the back of X’s open source Community Notes framework.

So it not only looks the same, it is the same, just in a Facebook/IG format.

Once a Notes contributor has submitted a new note, it will then be rated by other approved contributors to Meta’s Community Notes system.

And again, just like X, Meta’s process will factor in contributors’ political leanings in order to dilute potential bias:

“Meta won’t decide what gets rated or written – contributors from our community will. And to safeguard against bias, notes won’t be published unless contributors with a range of viewpoints broadly agree on them. No matter how many contributors agree on a note, it won’t be published unless people who normally disagree decide that it provides helpful context.”
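Meta hasn’t detailed its scoring implementation, but X’s open source Community Notes scorer, which Meta is building on, publishes a note only when it “bridges” opposing viewpoint clusters (the production system infers those clusters via matrix factorization over the full rating matrix). As a rough illustration only, here’s a toy sketch of that bridging requirement; the group labels, threshold value, and function name are all invented for this example:

```python
# Toy sketch of "bridging" consensus (NOT Meta's or X's actual code).
# X's open source scorer uses matrix factorization to infer viewpoint
# clusters; this simplified version hard-codes two clusters to show the
# core idea: a note is shown only when raters from BOTH opposing
# viewpoint groups broadly find it helpful.

def note_is_published(ratings, threshold=0.7):
    """ratings: list of (viewpoint_group, is_helpful) tuples, where
    viewpoint_group is 'A' or 'B' (e.g. opposing political leanings)
    and is_helpful is 1 or 0. Returns True only if both groups rate
    the note helpful, on average, above the threshold."""
    for group in ("A", "B"):
        group_ratings = [helpful for g, helpful in ratings if g == group]
        if not group_ratings:
            return False  # no signal at all from this viewpoint cluster
        if sum(group_ratings) / len(group_ratings) < threshold:
            return False  # this cluster doesn't broadly agree
    return True

# Broad cross-viewpoint agreement -> the note gets published
print(note_is_published([("A", 1), ("A", 1), ("B", 1), ("B", 1)]))  # True

# One-sided support -> never shown, no matter how many votes pile up
print(note_is_published([("A", 1)] * 100 + [("B", 0), ("B", 0)]))  # False
```

The second call shows the design consequence at issue here: unanimous support from one side can never publish a note on its own, which is exactly the trade-off Meta describes.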

This is a relevant consideration, but it’s also a key flaw in the Community Notes system. While the requirement does help to weed out political bias, and ensures that certain points of dispute are not silenced, it also means that many notes that should be displayed will never reach user screens in either app.

Indeed, according to analysis conducted by the Center for Countering Digital Hate (CCDH) last year, 73% of Community Notes related to political topics are never displayed on X, despite these notes providing valuable context.

The chart above shows the most common subjects of disagreement, and you can see how some of these issues, despite having clear, correct answers, will never be agreed upon by people of opposing political viewpoints.

Another study conducted earlier this year by Spanish fact-checking site Maldita found that 85% of all Community Notes are never displayed to X users.

In some respects, that indicates that the process is working, because it’s weeding out more questionable, potentially biased notes. But in many cases, these notes are valid and should be displayed, yet they never appear because raters can’t reach consensus.

Add to this the fact that X’s Community Notes system has also been infiltrated by organized groups of contributors who collaborate to upvote and downvote notes, and you can see how the system is not foolproof. It’s going to enable certain elements of misinformation to proliferate in Meta’s apps, as they do on X.

Still, Meta’s moving ahead, with notes set to be displayed on posts as an expandable contextual marker.

Users will also be able to rate notes as helpful or not, which will help Meta refine its Notes system.

Meta’s taking a cautious initial approach with Community Notes, by first rolling it out to users in the U.S., before moving on to other markets.

“Around 200,000 potential contributors in the U.S. have signed up so far across all three apps, and the waitlist remains open for those who wish to take part in the program. But notes won’t initially appear on content. We will start by gradually and randomly admitting people off of the waitlist, and will take time to test the writing and rating system before any notes are published publicly.”

Notes also won’t carry reach penalties, as fact-checks did, with Meta instead putting more reliance on its user community to moderate and inform, as opposed to implementing its own countermeasures.

And when notes do start appearing in a region, third-party fact-checks will disappear.

Which is another point worth noting: for the majority of Facebook and IG users, third-party fact-checks are still in effect, and will be for the foreseeable future.

But over time, as Meta revises and refines its Community Notes approach, it will expand it, shifting away from third-party confirmation, and towards crowd-sourced moderation.

Which Meta hopes will be a better solution:

“We expect Community Notes to be less biased than the third party fact checking program it replaces, and to operate at a greater scale when it is fully up and running. When we launched the fact checking program in 2016, we were clear that we didn’t want to be the arbiters of truth and believed that turning to expert fact checking organizations was the best solution available. But that’s not how it played out, particularly in the United States. Experts, like everyone else, have their own political biases and perspectives. This showed up in the choices some made about what to fact check and how.”

In order to support this shift, Meta has cited several studies that reinforce the effectiveness of Community Notes:

  • In 2024, a University of Luxembourg study found that exposing users to the context contained in Community Notes on X reduced the spread of misleading posts by an average of more than 60%. 
  • A study published in Science found that crowdsourced fact-checking can be just as accurate as traditional fact-checking, is more scalable, and is seen as more trustworthy because it doesn’t carry perceptions of “bias.”
  • A study published last year in the journal PNAS Nexus (Proceedings of the National Academy of Sciences) found that the user-provided context of Community Notes results in significantly higher perceived trustworthiness compared to traditional fact-checking.

And I don’t doubt any of these findings, but they still don’t account for the gaps caused by factoring in political bias, and not displaying notes when cross-viewpoint consensus can’t be achieved.

Sure, people are less likely to share posts with a Community Note displayed, but that’s only relevant if a note is actually shown in the first place, which, as noted above, doesn’t happen in the vast majority of cases. And people may trust Community Notes more, but again, only if they see them; the notes that are shown are likely to be accurate, due to cross-checked consensus.

In essence, these studies indicate that Community Notes are effective when they’re present. But reports also indicate that they’re not displayed in many, many cases where there’s a valid case to include them.

Which, at a time when the President himself is prone to amplifying misinformation, seems like a significant flaw in the process.

Do you think that if Trump says something is true, his supporters will agree, even if it can be proven otherwise? I’d hazard a guess that they won’t question such statements, and if they don’t agree that a fact check is necessary, one won’t be shown. Case closed.

Does that seem like an effective, viable, better approach than consulting with experts?

We’ll soon find out. Meta says that it’s launching the first stage of its Community Notes project in the U.S. from Tuesday March 18th.
