The US Supreme Court declined to rule on whether it was constitutional for states to restrict the power of social media companies to moderate their content. AFP

The US Supreme Court on Monday sidestepped a ruling on the constitutional validity of a pair of Republican-backed laws that imposed restrictions on social media content moderation, sending legal challenges backed by tech platforms to lower courts for review.

Tech industry trade groups, which welcomed the decision, had challenged the laws passed in 2021 by Republican lawmakers in Florida and Texas as part of a broader pushback against perceived anti-conservative bias by major platforms such as Meta-owned Facebook and X, formerly Twitter.

The companies denied that they were censoring conservative viewpoints under the guise of content moderation, while their advocates argued that the laws quashed the platforms' own First Amendment rights under the US constitution.

The Supreme Court declined to rule on whether it was constitutional for states to limit the powers of platforms to moderate their content, leaving the two laws in limbo as it sent the cases back to the lower courts for review.

Florida's measure bars social media platforms from pulling content from politicians, a law that was passed after former president Donald Trump was suspended from Twitter and Facebook in the wake of the January 6, 2021 assault on the US Capitol.

In Texas, the law stops sites from pulling content based on a "viewpoint" and is also intended to thwart what conservatives see as censorship by tech platforms such as Facebook and YouTube against right-wing ideas.

Neither law has gone into effect due to the litigation.

Jameel Jaffer, executive director of the Knight First Amendment Institute, hailed the "careful and considered" Supreme Court decision that rejected the broad arguments of the states and the platforms.

"It properly recognizes that platforms are 'editors' under the First Amendment, but it also dismisses, for good reasons, the argument that regulation in this sphere is categorically unconstitutional," Jaffer said.

The challenges to the laws were brought by associations representing big tech companies, the Computer & Communications Industry Association (CCIA) and NetChoice, which argue that the First Amendment gives platforms the freedom to handle content as they see fit.

"We are encouraged that a majority of the court has made clear that the government cannot tilt public debate in its favored direction," CCIA president Matt Schruers said in a statement.

"There is nothing more Orwellian than government attempting to dictate what speech should be carried, whether it is a newspaper or a social media site."

The decision was also welcomed by tech advocacy groups.

"The government does not have the right to impose rules on how companies like Meta and Google should accomplish" accountability, said Nora Benavidez, senior counsel at the watchdog Free Press.

"These laws would have further ratcheted up the amount of hate and disinformation online while undermining both the meaning and the intent of the First Amendment," she added.

But other advocates cautioned that the decision must not absolve tech firms of their responsibility to address threats to public safety and democracy.

"Today's unanimous opinion ensures platforms can enforce their community and safety standards during a critical election year," said Nicole Gill, executive director of the watchdog Accountable Tech.

"But make no mistake: this is not an excuse for platforms to continue to shrug off their role in the desecration of democracy and proliferation of a myriad of societal harms."

Monday's decision comes after the Supreme Court last week rejected a Republican-led bid to curb government contact with social media companies over their content moderation.

The decision handed a win to President Joe Biden's administration and top government agencies ahead of the presidential vote in November, allowing them to continue notifying major platforms including Facebook and X about what they deem as false or hateful content.