"Governments coordinate with social media platforms to suppress certain viewpoints."
Related Claims
Evidence (10)
On August 26, 2024, Meta CEO Mark Zuckerberg wrote to House Judiciary Chairman Jim Jordan stating that Biden administration senior officials "repeatedly pressured" Meta in 2021 to censor COVID-19 content, including humor and satire, and that he regrets not pushing back more forcefully.
In this letter, Zuckerberg disclosed that senior officials from the Biden administration, including the White House, pressured Meta's teams for months to censor certain COVID-19 content. He wrote that the government pressure was wrong and that he regrets Meta was not more outspoken about it.
He also revealed that the FBI warned Meta about a potential Russian disinformation operation involving the Biden family ahead of the 2020 election, which led Meta to temporarily reduce the visibility of the New York Post's Hunter Biden laptop story while it was being fact-checked. Zuckerberg stated Meta would not repeat such actions and would resist government pressure in either political direction going forward.
On July 4, 2023, U.S. District Judge Terry Doughty issued a 155-page ruling in Missouri v. Biden finding the plaintiffs were likely to succeed in proving the government coerced social media companies, and barred multiple federal agencies from pressuring platforms to suppress protected speech.
Judge Doughty's ruling found that the White House, Surgeon General's office, CDC, FBI, and CISA had engaged in a pattern of pressuring social media companies to remove or suppress content the government disfavored. The ruling stated the plaintiffs were likely to succeed on the merits in establishing that the government had used its power to silence the opposition.
Doughty wrote that the White House made it very clear to social media companies what they wanted suppressed and what they wanted amplified, and that faced with unrelenting pressure from the most powerful office in the world, the social media companies apparently complied.
The injunction barred named officials and agencies from contacting social media platforms for the purpose of urging, encouraging, pressuring, or inducing the removal of content containing protected free speech, with narrow exceptions for criminal activity and national security threats.
On October 3, 2023, the Fifth Circuit Court of Appeals ruled that the White House and Surgeon General's office likely coerced social media platforms through intimidating messages and threats of adverse consequences, violating the First Amendment.
The Fifth Circuit affirmed the core finding that the government likely violated the First Amendment, though it significantly narrowed the scope of the district court's injunction. The court found that the White House, acting in concert with the Surgeon General's office, likely coerced the platforms to make their moderation decisions by way of intimidating messages and threats of adverse consequences, and significantly encouraged the platforms' decisions by commandeering their decision-making processes.
The court distinguished between permissible government persuasion and impermissible coercion, establishing a legal framework for evaluating government-platform communications. It also added CISA (the Cybersecurity and Infrastructure Security Agency) to the list of government entities whose contacts with social media companies were restricted by the injunction.
The sixth Twitter Files installment revealed that the FBI maintained a dedicated channel with Twitter, sent regular lists of accounts and tweets for review, and that Twitter received over $3.4 million from the FBI between October 2019 and February 2021 for processing requests.
Reporter Michael Shellenberger disclosed on December 16, 2022, based on internal Twitter documents, that the FBI operated a Foreign Influence Task Force that swelled to 80 agents and maintained regular contact with Twitter's Trust and Safety team. Between January 2020 and November 2022, there were over 150 emails between the FBI and Twitter's then-head of Trust and Safety, Yoel Roth.
The FBI routinely sent lists of specific accounts and individual tweets to Twitter for review. Internal Twitter communications showed the company had collected $3,415,323 from the FBI since October 2019 as reimbursement for processing the agency's requests. Many of the flagged tweets came from low-follower accounts posting satirical content about elections.
On March 9, 2023, journalist Matt Taibbi testified under oath before Congress that internal Twitter documents revealed a systematic process by which federal agencies, including the FBI and DHS, regularly flagged content for removal and that Twitter often complied.
Taibbi's sworn testimony described the findings from his review of thousands of internal Twitter documents provided by new owner Elon Musk. He testified that multiple government agencies maintained regular channels of communication with Twitter to flag content they considered problematic, and that these requests often resulted in content moderation actions.
Taibbi described a system in which government agencies sent batches of accounts and specific tweets to Twitter for review, and said that Twitter's staff treated these requests with a high degree of deference. His testimony covered the suppression of the Hunter Biden laptop story, the handling of COVID-related content, and the broader pattern of government involvement in content moderation decisions at social media platforms.
The 19th Twitter Files installment revealed that the Stanford Internet Observatory's Virality Project flagged "stories of true vaccine side effects" as misinformation for suppression by Twitter, Facebook, Google, and TikTok through a centralized cross-platform ticketing system.
Matt Taibbi's reporting on March 17, 2023, detailed the Virality Project, a collaboration between the Stanford Internet Observatory, several academic institutions, and government-affiliated entities. The project created a centralized ticketing system that processed content flagging requests across multiple platforms simultaneously, including Facebook, Google, TikTok, YouTube, Pinterest, Medium, and Twitter.
Internal emails showed the Virality Project classified stories of true vaccine side effects and viral posts of individuals expressing vaccine hesitancy as content warranting platform action. The project effectively served as an intermediary between government health agencies and social media companies, providing a mechanism for coordinated content suppression that could be directed at posts containing factually accurate information.
Stanford later stated the project did not censor or ask social media platforms to remove any content, characterizing the portrayals as distortions of email exchanges.
A June 2023 congressional staff report documented how CISA expanded beyond its cybersecurity mission to operate a "switchboarding" program that routed content flagging requests from government officials to social media companies, and scrubbed its website of references to these activities when scrutinized.
The House Committee on the Judiciary and the Select Subcommittee on the Weaponization of the Federal Government published this interim staff report based on subpoenaed documents, depositions, and internal communications. It detailed how CISA developed and maintained infrastructure that allowed state, local, and federal officials to identify social media content they deemed to be misinformation and route it to platforms for potential action.
The report alleged that CISA worked through intermediary organizations, particularly the Election Integrity Partnership and the Center for Internet Security, to carry out content flagging while maintaining a degree of separation from direct government action.
When public scrutiny of these activities intensified, the report found that CISA scrubbed its website of references to its domestic content monitoring work. CISA stated it discontinued switchboarding for the 2022 election cycle, though the report found that the Center for Internet Security continued similar activities during that period.
On December 18, 2023, the European Commission opened formal proceedings against X under the Digital Services Act, and in December 2025 imposed a 120-million-euro fine, using regulatory authority to enforce government content standards on the platform.
The EU's Digital Services Act, which took full effect in February 2024, grants the European Commission direct enforcement authority over very large online platforms. The Commission opened its investigation into X citing concerns about the dissemination of illegal content, the effectiveness of measures against information manipulation, dark patterns, advertising transparency, and researcher data access.
The DSA requires platforms to conduct risk assessments and implement mitigation measures for risks including the spread of illegal content and negative effects on civic discourse. Article 9 empowers national judicial or administrative authorities to issue orders requiring platforms to act against specific items of illegal content, creating a formal legal mechanism for government-directed content removal.
The proceedings against X represented the first major enforcement action under the DSA framework and demonstrated the EU''s approach to establishing government authority over platform content decisions through regulation rather than informal pressure.
The UK Online Safety Act, passed October 26, 2023, grants regulator Ofcom enforcement powers including fines of up to 10% of global revenue and gives the Secretary of State power under Section 44 to direct Ofcom to modify content safety codes of practice.
The Online Safety Act establishes a comprehensive regulatory framework that gives the UK government significant influence over how platforms moderate content. Section 44 grants the Secretary of State the power to direct Ofcom to modify draft codes of practice for reasons of public policy, national security, or public safety, and Ofcom must comply.
The Act requires platforms to conduct risk assessments for illegal content and implement systems to prevent users from encountering it. Ofcom, appointed as the independent regulator, has enforcement powers including the ability to investigate non-compliance, impose fines of up to 10% of qualifying worldwide revenue, and in the most serious cases, apply to courts to block services entirely from the UK.
While the Act includes protections for journalistic and democratically important content, critics have raised concerns that the broad definitional framework and the Secretary of State's directive powers create mechanisms through which government priorities can shape what content platforms allow or remove.
Twitter Files Part 15 revealed that the Hamilton 68 dashboard, widely cited by media and officials as tracking "Russian bots," was based on 644 accounts that Twitter's own head of Trust and Safety found were largely neither Russian nor bots — only 36 of 644 were registered in Russia.
The Hamilton 68 dashboard was created by former FBI special agent Clint Watts and operated under the Alliance for Securing Democracy, a project of the German Marshall Fund. It was widely cited by journalists, lawmakers, and government officials as an authoritative source for tracking Russian influence operations on Twitter.
Matt Taibbi's reporting on January 27, 2023, based on internal Twitter documents, showed that when Twitter's head of Trust and Safety, Yoel Roth, investigated the accounts tracked by the dashboard, he found that only 36 of the 644 identifiable accounts were registered in Russia. Internal Twitter communications stated these accounts were neither strongly Russian nor strongly bots, and that there was no evidence to support the statement that the dashboard was a finger on the pulse of Russian information operations.
Despite Twitter employees' internal concerns, the company chose not to publicly challenge the dashboard's credibility. The Alliance for Securing Democracy later acknowledged that not all accounts on the dashboard were controlled by Russia.