
Opinion | The TikTokification of Social Media May Finally Be Its Undoing


During a recent rebranding tour, sporting Gen Z-approved tousled hair, streetwear and a gold chain, the Meta chief Mark Zuckerberg let the truth slip: Consumers no longer control their social-media feeds. Meta’s algorithm, he boasted, has improved to the point that it shows users “a lot of stuff” not posted by people they had connected with, and he sees a future in which feeds show you “content that’s generated by an A.I. system.”

Spare me. There’s nothing I want less than a bunch of memes of Jesus-as-a-shrimp, pie-eating cartoon cats and other A.I. slop added to all the clickbait already clogging my feed. But there is a silver lining: Our legal system is starting to recognize this shift and hold tech giants responsible for the effects of their algorithms — a significant, and even possibly transformative, development that over the next few years could finally force social media platforms to be answerable for the societal consequences of their choices.

Let’s back up and start with the problem. Section 230, a snippet of law embedded in the 1996 Communications Decency Act, was initially intended to protect tech companies from defamation claims related to posts made by users. That protection made sense in the early days of social media, when we largely chose the content we saw, based on whom we “friended” on sites such as Facebook. Since we selected those relationships, it was relatively easy for the companies to argue they should not be blamed if your Uncle Bob insulted your strawberry pie on Instagram.

Then, of course, things got a little darker. Not everything Uncle Bob shared was accurate, and the platforms’ algorithms prioritized outrageous, provocative content from anyone with internet access over more neutral, fact-based reporting. Despite this, the tech companies’ lawyers continued to argue, successfully, that they were not responsible for the content shared on their platforms — no matter how misleading or dangerous.

Section 230 has now been used to shield tech companies from the consequences of facilitating deadly drug sales, sexual harassment, illegal arms sales and human trafficking. Meanwhile, the companies grew to be some of the most valuable in the world.

Then came TikTok. Following the wild popularity of TikTok’s “For You” algorithm, which selects bite-size videos to feed to the passive viewer, social networks increasingly have us watching whatever content their algorithms have chosen, often pushing aside the posts of accounts we actually chose to follow.

As annoying as this development has been, it could be beneficial in the fight to gain more control of our online lives. If tech platforms are actively shaping our experiences, after all, maybe they should be held liable for creating experiences that damage our bodies, our children, our communities and our democracy.

In August, the U.S. Court of Appeals for the Third Circuit ruled that TikTok was not immune to a legal challenge regarding its algorithm, which disseminated dangerous videos promoting a “blackout challenge” showing people strangling themselves until they passed out. TikTok delivered a video of the challenge to a 10-year-old girl named Nylah Anderson, who tried to emulate it and killed herself.

Placing the video on Nylah’s feed “was TikTok’s own ‘expressive activity,’ and thus its first-party speech,” Judge Patty Shwartz wrote. The judge, writing for a three-judge panel, rejected the company’s defense that the video was made by a third party and thus protected by Section 230. (TikTok has petitioned the Third Circuit to rehear its case with a broader panel of judges.)

In a similar vein, the Superior Court of the District of Columbia ruled last month that Meta could not use Section 230 as a shield against a lawsuit by the district’s attorney general alleging, among other things, that the company’s “personalization algorithms” were designed to be addictive for children, as were other harmful features such as infinite scroll and frequent alerts. Additional cases are pending around the globe alleging tech-company culpability for the distribution of nonconsensual A.I.-generated nude images, hate speech and scams.

The issue is likely to end up at the Supreme Court. In July, the justices returned to lower courts two challenges to state laws that restrict the power of social media companies to moderate content, without addressing the implications for Section 230. Justice Clarence Thomas, though, has repeatedly signaled that he is eager for a chance to whittle away at Section 230’s protections.

If the court holds platforms liable for their algorithmic amplifications, it could prompt them to limit the distribution of noxious content such as nonconsensual nude images and dangerous lies intended to incite violence. It could force companies including TikTok to ensure they are not algorithmically promoting harmful or discriminatory products. And, to be fair, it could also lead to some overreach in the other direction, with platforms having a greater incentive to censor speech.

My hope is that new legal guardrails would create incentives to build platforms that give control back to users. It could be a win-win: We get to decide what we see, and they get to limit their liability.

In the meantime, there are alternatives. I’ve already moved most of my social networking to Bluesky, a platform that allows me to manage my content moderation settings. I also subscribe to several other feeds — including one that provides news from verified news organizations and another that shows me what posts are popular with my friends.

Of course, controlling our own feeds is a bit more work than passive viewing. But it’s also educational. It requires us to be intentional about what we are looking for — just as we decide which channel to watch or which publication to subscribe to.

This brings me to a very different kind of lawsuit. Ethan Zuckerman, a professor at the University of Massachusetts Amherst, is suing Meta, arguing that Section 230 gives him the right to release a tool that helps Facebook users control their feeds.

I hope he succeeds. Giving power back to users would not only be good for us as citizens; it would also test the tech companies’ longstanding argument that the problem with social media is what we are doing to ourselves, not what they are doing to us.


