In early 2026, Meta began changing how it moderates content, especially in the United States. The company is moving away from using third-party fact-checkers, a practice in place prior to 2026, and is shifting to a more open, user-driven approach from 2026 onwards. Some reports call this a U-turn: the new strategy lowers restrictions on political speech, relies more on user-moderated community networks, and addresses criticism that earlier models were politically biased.
The Three-Hour Rule Context (International Versus US)
Although questions have been raised about a 3-hour rule in the US, recent reports indicate that the rule is actually a new regulation in India, introduced in early 2025 and effective on February 20, 2026.
India’s 3-hour rule: The Ministry of Electronics and IT mandated that social media platforms, including Meta (Facebook/Instagram), remove fake, unlawful, or deepfake AI content within 3 hours, down from the previous 36. Meta executives, including Rob Sherman, say the three-hour deadline is operationally challenging and sometimes unrealistic: it does not leave enough time to properly investigate and review flagged content, which could result in the removal of legitimate speech.
Meta’s 2025-2026 US Political Strategy
In the United States, beginning in 2025, Meta adopted a quieter strategy intended to reduce political volatility and appease critics of its previous moderation policies. The company is phasing out traditional third-party fact-checkers and replacing them with a community-notes system similar to X’s. It is loosening its rules to allow more types of speech, focusing enforcement primarily on serious violations and illegal content rather than general misinformation. Threads and other Meta services have also said they will not boost or promote political content.
Even with these looser rules, Meta says it will still enforce policies against voter interference and serious election-related violence.
The Quiet Struggle of Real-Time Moderation
The challenge of moderating content in real time is acting quickly, as India’s three-hour rule demands, while still allowing enough time to avoid mistakes and ensure a fair process. To meet tight deadlines, avoid legal trouble, and preserve their safe-harbor protections, companies rely more heavily on automated AI moderation, which is error-prone, most notably with complex local or satirical political commentary. As in India, companies often remove content first and review it later, an approach that can silence legitimate political speech.
Many have seen this move toward less intervention as Meta’s response to a more free speech-focused, right-leaning political climate in the United States.
In summary, while India’s three-hour rule represents a concrete regulatory shift, the US scene is defined by Meta’s quieter, voluntary pivot toward user-moderated content and reduced intervention.
Meta, the owner of Facebook, Instagram, and WhatsApp, announced a major strategy shift today: It is overhauling its content moderation policies to reduce rule complexity and address concerns about over moderation, especially around political and health topics.
In a blog post titled “More Speech and Fewer Mistakes,” new Chief Global Affairs Officer Joel Kaplan explained three main changes intended, in his words, to undo the mission creep of past policies.
- Meta will end its third-party fact-checking program and switch to a Community Notes model in the coming months. This change will allow users to annotate and contextualize posts, similar to how it works on x.com, giving the community more influence over flagging and clarifying information.
- The company will remove restrictions on topics that are part of mainstream discussions, allowing more content on widely discussed issues. Instead, it will focus enforcement on illegal and serious violations such as terrorism, child sexual exploitation, drugs, fraud, and scams. As a result, users may notice fewer removals of popular or contentious posts, but enforcement will remain strict on the most serious offenses.
- Users will be encouraged to personalize the political content they see by adjusting their settings. People will have more control over the political opinions and viewpoints appearing in their feeds. This approach lets users tailor their experience but may also reinforce their information bubbles.
These changes are important, in part, because a new US presidential administration will take office later this month. Donald Trump and his supporters have said they want free speech to include a much broader range of opinions.
Facebook has faced criticism from Trump and his supporters over the past few years, especially after the company banned Trump from its platforms as part of its content moderation efforts.
Meta strengthened its moderation after facing criticism for the spread of misinformation during the 2016 U.S. presidential election, adding third-party fact-checking that year.
This led to the creation of the Oversight Board and tools to help users control and report content.
However, not everyone agrees with these policies. Some critics say the rules are too weak; others think they cause too many mistakes, and some believe the controls are too politically biased.
Experts, like everyone else, have their own biases and perspectives, and this showed up in the choices some made about what to fact-check and how to do it. Kaplan noted that Meta over-enforced its rules, limiting legitimate political debate and censoring too much trivial content. Meta now estimates that one to two of every ten removed items were mistakes that didn’t violate its policies.
Some might say these changes are meant to win over the new administration, but some of these plans have been in progress for a while.
In the past year, Meta struggled to uphold its own rules. Nick Clegg admitted going too far with moderation, and the oversight board fell short of expectations.
Now, with a political leadership turnover, Meta appears to be moving toward a more hands-off approach.
“Meta’s platforms are built to be places where people can express themselves freely. That can be messy on sites where billions of people can have a voice. All the good, bad, and ugly is on display. But that’s free expression,” Kaplan wrote.
The Oversight Board said it welcomes the news that Meta will revise its approach to fact-checking, with the goal of finding a scalable solution that increases trust, free speech, and user voice on its platforms. The board added that it would work with Meta to shape its approach to free speech, signaling a deliberate shift to align with evolving external pressures and internal priorities.
CEO Mark Zuckerberg has signaled a stronger interest in working with, not battling, the Trump administration. Yesterday, the company appointed three new board members, including UFC head Dana White, a supporter of the incoming president, and last week, Meta replaced its long-time Global Affairs head Nick Clegg, promoting Kaplan to the role. Kaplan had already been part of the policy staff and was known as Meta’s most prominent Republican.
Meta is also making another change to prevent being stuck in its own echo chamber. Kaplan said, “We will be moving the trust and safety teams that write our current policies and review content out of California to Texas and other U.S. locations.”
Source: Meta drops fact-checking, loosens its content moderation rules