Meta, the parent company of Facebook and Instagram, announced on Jan. 7 that it would replace corporate fact-checking with a system called “Community Notes,” a model Elon Musk also implemented on X in 2022. The model allows more posts to stay online rather than being immediately filtered. However, this decision will have impacts that go beyond CEO Mark Zuckerberg’s stated goal of expanding free speech. In reality, Meta is creating a community rife with misinformation and hateful language.
The Community Notes plan allows users to flag posts that are misleading or potentially harmful to others, a job previously performed by the Meta staff who have since been laid off. “[Fact checkers], like everyone else, have their own biases and perspectives. This showed up in the choices some made about what to fact check and how,” Meta Chief Global Affairs Officer Joel Kaplan wrote in a blog post entitled “More Speech and Fewer Mistakes.” Meta leaders cited political bias by their fact-checking team during the presidential election as the reason for their decision.
The 2024 election is a perfect example of why this model is a bad idea. Misinformation, which plagued American voters across the board, will only grow under Community Notes. According to a Pew Research Center study, of the 69% of adults who closely followed the election, 73% reported exposure to misinformation.
Social media platforms played a substantial part in spreading misinformation. Whether it was AI-generated videos of then-presidential candidate Kamala Harris hugging dictators or false claims about President Donald Trump’s economic policy, there was foul play on both sides, impacting voting decisions in many states. An analysis from the Brookings Institution, a Washington, D.C.-based think tank, found that the oversaturation of misinformation during the election directly correlated with voting choices. For example, false claims made online about scarcity of resources at the border due to immigration influenced many who deemed immigration their main voting issue. Instances like these show how misinformation significantly shaped voting decisions.
But disinformation extends beyond the election: false information about healthcare, international relations, and domestic issues continues to harm citizens. In times like these, the presence of accurate information on social media is more important than ever. Yet Meta is moving backwards with Community Notes.
During the election, Meta mitigated misinformation with the very team of fact-checkers it is now choosing to fire. That leaves the general public, which has proven unreliable at determining what is true or false, to do the same job. Those who have been influenced by misinformation cannot be the ones responsible for flagging and removing it; a gap in critical thinking is what created the problem in the first place. As a result, misinformation will only spread faster, leaving citizens in a dangerous place when seeking the truth on important issues.
This isn’t the only issue. Community Notes could also increase the spread of hateful language. X offers a clear example: since its transition to Community Notes, hate speech and racist comments have proliferated across the site. Zuckerberg has acknowledged the same possibility for Meta’s platforms. “[Community Notes may mean] we’re going to catch less bad stuff,” Zuckerberg said in a video issued by the company.
Knowing that hate speech on your platform will increase is reason enough not to transition to Community Notes. Vulgar phrases and attacks that spread without direct supervision from the app are dangerous for the mental health of those receiving them, and free rein on social media without direct regulation creates the perfect breeding ground. The Community Notes model is incredibly worrisome.