Starting next Tuesday, March 18, Meta will launch a test in the United States that promises to reshape how content is corrected on its platforms. Facebook, Instagram, and Threads will begin experimenting with “community notes,” a feature set to replace the traditional third-party fact-checking system. Announced by Meta CEO Mark Zuckerberg in January, this shift in content moderation policy has already sparked discussions among experts and users about its potential impact on the spread of information.
The decision to move away from professional fact-checking to a crowdsourced model, inspired by the Community Notes system on Elon Musk’s X, was detailed in a Meta statement released on Thursday, March 13. The company revealed that approximately 200,000 people have signed up to participate in the initial testing phase, which will be limited to the US market. This change comes amid Zuckerberg’s efforts to align more closely with the new Donald Trump administration, raising questions about the future of online moderation.
While Meta touts the move as a step toward greater freedom of expression, critics warn of heightened risks of misinformation on sensitive topics like health and politics. The beta test, rolling out gradually, marks the end of an era that began in 2016 when the company introduced its fact-checking program in response to backlash over fake news during that year’s US presidential election.
Why is Meta shifting its approach?
The transition to “community notes” stems from Zuckerberg’s critique of the previous model’s shortcomings. In a five-minute video posted on Instagram in January, he argued that third-party fact-checkers were “politically biased” and that the system, initially designed to foster inclusivity, had morphed into a tool to “silence opinions and exclude people with differing views.” For Zuckerberg, this overhaul is a return to Meta’s core values, prioritizing free speech.
In the US, where the test will kick off, Meta plans a cautious rollout. Notes written and rated by users won’t be publicly visible right away. Instead, the company will gradually introduce them to a select group beyond the initial waitlist, refining the system before making it widely available. This means existing third-party fact-checking labels will persist temporarily but will phase out once the new model takes hold.
The inspiration from X is no accident. Since Elon Musk took over the platform in 2022 and expanded Community Notes (first piloted in 2021 under the name Birdwatch), the system has earned praise from free speech advocates but faced scrutiny for its uneven handling of misinformation. Zuckerberg, who has expressed admiration for Musk’s approach, sees “community notes” as a way to cut down on what he calls “mistakes and censorship” in moderation.
Rolling out soon, Community Notes will be a new way for our community to decide when content is confusing or may need additional context. It is replacing our prior fact checking program that has been seen, by many, as biased. https://t.co/qgKveIGHvR pic.twitter.com/OdjR821HJW
— Meta Newsroom (@MetaNewsroom) February 20, 2025
What are ‘community notes’ and how will they work?
Unlike the prior system, where independent journalistic organizations reviewed content, “community notes” shift the responsibility to users. Anyone can apply to write and rate notes that add context or correct information in posts across Facebook, Instagram, and Threads. Meta emphasizes that notes will only go public after achieving consensus among contributors with “diverse perspectives,” mirroring the mechanism used by X.
In practice, this means a potentially misleading post could display a visible note beneath it, clarifying its accuracy or providing additional details. Meta shared visual examples of the feature on Instagram, illustrating how notes will blend into the feed. Unlike before, posts with notes won’t automatically see reduced visibility, a shift Meta frames as part of its new “less intervention” stance.
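Meta has not published the code behind this consensus mechanism. X, however, has open-sourced its Community Notes ranker, which scores notes with a matrix factorization: each note’s “helpfulness” is the part of its ratings that cannot be explained by the usual viewpoint split among raters, so a note only scores well when people who normally disagree both rate it helpful. The sketch below illustrates that idea in miniature; the function name, hyperparameters, and toy data are illustrative assumptions, not Meta’s implementation.

```python
# A minimal sketch of bridging-based note scoring, loosely following the
# matrix-factorization model X has documented for its Community Notes
# ranker. Meta has not released its own system, so every name,
# hyperparameter, and data point here is illustrative.
import numpy as np

def score_notes(ratings, n_users, n_notes, dim=1, lr=0.05, reg=0.03, epochs=300):
    """ratings: list of (user_id, note_id, value), value 1.0 = "helpful",
    0.0 = "not helpful". Returns one intercept per note: the helpfulness
    left over after the factor terms absorb the viewpoint-driven pattern."""
    rng = np.random.default_rng(0)
    mu = 0.0                                    # global baseline rating
    user_i = np.zeros(n_users)                  # per-user leniency
    note_i = np.zeros(n_notes)                  # per-note "bridged" helpfulness
    user_f = rng.normal(0, 0.1, (n_users, dim)) # user viewpoint factor
    note_f = rng.normal(0, 0.1, (n_notes, dim)) # note polarization factor

    for _ in range(epochs):
        for u, n, r in ratings:
            pred = mu + user_i[u] + note_i[n] + user_f[u] @ note_f[n]
            err = r - pred
            # Plain SGD on squared error with L2 regularization; the
            # penalty on note_i keeps intercepts honest, forcing the
            # factor terms to soak up one-sided support.
            mu        += lr * err
            user_i[u] += lr * (err - reg * user_i[u])
            note_i[n] += lr * (err - reg * note_i[n])
            gu = err * note_f[n] - reg * user_f[u]
            gn = err * user_f[u] - reg * note_f[n]
            user_f[u] += lr * gu
            note_f[n] += lr * gn
    return note_i

# Toy run: note 0 is rated helpful across the viewpoint divide,
# note 1 only by one side of it.
ratings = [(0, 0, 1), (1, 0, 1), (2, 0, 1), (3, 0, 1),
           (0, 1, 1), (1, 1, 1), (2, 1, 0), (3, 1, 0)]
for note, b in enumerate(score_notes(ratings, n_users=4, n_notes=2)):
    print(f"note {note}: bridged helpfulness = {b:.2f}")
```

In this toy run, note 0, rated helpful by users on both sides of the factor split, ends up with a clearly higher intercept than note 1, whose support is one-sided despite having the same number of positive ratings. In X’s published ranker, a note is displayed only once this score clears a fixed threshold; whether Meta adopts the same cutoff logic is not something the company has disclosed.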
Roughly 200,000 people have already signed up to test the feature in the US, with registration still open. Meta aims to roll it out to all Americans once refinements are complete, though no timeline has been set for a global launch, which will hinge on regulatory reviews in other regions.
A turning point in content moderation
Meta’s decision to phase out third-party fact-checking ends a program that dates back nearly a decade. Launched in December 2016, it was a direct response to the flood of misinformation during the US presidential election won by Donald Trump. At the time, Facebook faced intense criticism for allowing fake news to spread, much of it amplified by foreign campaigns.
Since then, Meta has partnered with dozens of fact-checking organizations worldwide, including in Brazil with outlets like Aos Fatos and Lupa. In the US, groups like PolitiFact and FactCheck.org played key roles. These partners, however, were caught off guard by January’s announcement, with some expressing frustration over the lack of prior consultation, as reported by Reuters.
With the “community notes” test starting March 18, Meta is charting a new course. The company also noted that former fact-checkers can join as note contributors, though without their previous privileged status. This shift is widely seen as a gesture toward the Trump administration, which has long criticized big tech moderation as stifling conservative voices.
Risks and critiques of the new model
Experts consulted by g1 caution that the change could pave the way for problematic content to flourish. Yasmin Curzi, a researcher at the University of Virginia, warns that abandoning professional oversight might allow misinformation about vaccines, elections, and even hate speech—like racism and homophobia—to spread unchecked, as long as it’s not “explicitly illegal.” She argues the crowdsourced approach may struggle to catch nuanced or complex falsehoods.
Helena Martins, a professor at the Federal University of Ceará and member of the Rights on the Network Coalition, echoes this concern. She notes that only severe violations, such as child pornography or terrorism, will remain targets of proactive action by Meta. Other critical areas, like Covid-19 conspiracy theories, will rely on user initiative for correction, potentially overwhelming the community and leaving gaps in oversight.
On X, where the model has been active for years, studies highlight flaws. An analysis by the Center for Countering Digital Hate found that between January and October 2024, 74% of corrective notes on misleading political posts weren’t displayed, while the original posts racked up billions of views. This suggests Meta’s system could face similar hurdles, especially across platforms with billions of users.
Testing timeline in the US
The “community notes” rollout will follow an initial schedule in the United States:
- March 18: Beta test begins with enrolled participants and a random group.
- First weeks: System tweaks for note drafting and rating.
- Public phase: Gradual release to all US users, date TBD based on test outcomes.
- Transition: Third-party fact-checking labels phased out once notes are established.
Meta stresses that the process will be closely monitored to ensure effectiveness before any broader rollout. Outside the US, such as in the European Union and Brazil, the current fact-checking program will continue pending local regulatory assessments.
Expected impacts on the platforms
With traditional fact-checking on its way out, Meta plans to loosen restrictions on controversial topics. Zuckerberg announced in January that discussion of issues like immigration and gender identity, previously restricted, will flow freely again, provided posts don’t breach rules against illegal content. Political posts, dialed back in feeds at users’ request in recent years, will also return in a “personalized” form.
This shift could transform the experience on platforms like Instagram, where lighthearted photos and videos have dominated. On Facebook, the resurgence of political debates may reignite heated exchanges, while Threads, a direct rival to X, could become a proving ground for the new system. Meta insists algorithms will still prioritize “friendly and positive” environments but with less interference in what users see.
The move aligns with internal changes. Meta’s trust and safety team, once based in California, will relocate to Texas—a state viewed as more in tune with Zuckerberg’s and the Trump administration’s free speech stance. This relocation underscores perceptions of Meta adapting to the current US political climate.
What users can expect
For the roughly 3 billion users across Meta’s platforms, the US test hints at bigger changes ahead. Those on Facebook, Instagram, or Threads may soon contribute notes or see community corrections on questionable posts. The company bets the model will be “less biased” and more scalable than professional fact-checking, though Zuckerberg admits it might “catch fewer bad things.”
Key features of the new system include:
- Open applications for anyone to become a contributor.
- Notes displayed only after validation from diverse viewpoints.
- Emphasis on adding context rather than penalizing posts directly.
In the US, where political polarization runs deep, the feature could amplify both diverse voices and misleading narratives, depending on user engagement. X’s experience suggests successful notes require active participation and reliable sourcing—something Meta will need to foster on a massive scale.
