Many teens and adults use the word addictive when describing social-media sites, as if the apps themselves are laced with nicotine. The U.S. surgeon general, Vivek Murthy, wants to drive that point home as forcefully as possible: In an op-ed published by The New York Times yesterday, he writes that the country should start labeling such sites as if they’re cigarettes.
Murthy proposes putting an official surgeon general’s warning—the same type found on tobacco and alcohol products—on social-media websites to “regularly remind parents and adolescents that social media has not been proved safe.” Such a warning would require formal congressional approval. To make his case, Murthy cites a 2019 study that found that adolescents who spend more than three hours a day on social media may be at higher risk for certain mental-health problems; he also points to research in which teens reported that social media made them feel worse about their bodies. “The moral test of any society is how well it protects its children,” he writes. “Why is it that we have failed to respond to the harms of social media when they are no less urgent or widespread than those posed by unsafe cars, planes or food?”
It’s a radical idea, and one with a real basis in science: There is strong evidence that tobacco warnings work, David Hammond, a professor in the school of public-health sciences at Canada’s University of Waterloo, told me. Although no intervention is perfect, such labels reduce tobacco use by reaching the right audience at the moment of consumption, Hammond said, and they are particularly effective at deterring young people. But social media is not tobacco. Some platforms have no doubt caused real harm to many children, but research into the effects of social media on young people has been a mixed bag; even the studies cited by Murthy are not as straightforward as presented in the op-ed. A warning label on a pack of cigarettes is attention-grabbing and succinct: No one wants cancer or heart disease. Social media does not boil down as easily.
What would a social-media warning look like? Murthy doesn’t go into further detail in his article, and nothing would be decided until Congress authorized the label. (It’s unclear how likely it is to pass, but there has been bipartisan interest in the topic, broadly speaking; earlier this year, at a congressional hearing on kid safety on the internet, members from both parties expressed frustration with Big Tech CEOs.) It could be a persistent pop-up that a user has to click out of each time they open an app. Or it could be something that shows up only once, in the footer, when a person creates an account. Or it could be a banner that never goes away. To be effective, Hammond told me, the message must be “salient”—it should be noticeable and presented frequently.
Design may be the easy part. The actual warning text within a social app might be hard to settle on, because an absolute, causal link has not yet been shown between, say, Instagram and the onset of depression; by contrast, we know that smoking causes cancer, and why it does so. “One of the reasons that we have such a wide range of opinions is that the work still isn’t quite conclusive,” David S. Bickham of the Digital Wellness Lab at Boston Children’s Hospital, whose research on body image was cited in Murthy’s op-ed, told me. One major meta-analysis (a study of studies) found that the effect of digital technology on adolescent well-being was “negative but small”—“too small to warrant policy change.” (That paper has since been critiqued by researchers including Jean Twenge and Jonathan Haidt, who have contributed writing about teen smartphone use to The Atlantic; they argue that the study’s methodology resulted in an “underestimation” of the problem. The authors of the original study then “rejected” these critiques by providing additional analysis. And so it goes.) So much unresolved debate doesn’t make for neat public-health recommendations.
In the absence of a firm conclusion, you can imagine a label that would use hedged language—“This app may have a negative effect on teens’ mental health depending on how it’s used,” for example—though such a diluted label may not be useful. I asked Devorah Heitner, the author of Growing Up in Public: Coming of Age in a Digital World, what she would recommend. For starters, she said, any warning should include a line about how lack of sleep harms kids (a problem to which late-night social-media use may contribute). She also suggested that the warning might address young people directly: “If I were going to put something on a label, it would be, like, ‘Hey, this can intensify any feelings you might already be having, so just be thoughtful about: Is this actually making me feel good? If it’s making me feel bad, I should probably put it away.’”
If Murthy’s label does become a reality, another challenge will be figuring out what constitutes social media in the first place. We tend to think of the social web as a specific set of apps, including Facebook, Instagram, Snapchat, and TikTok. But plenty of sites with social components may fall into this category. Murthy papers over this challenge somewhat in his op-ed. When he writes, “Adolescents who spend more than three hours a day on social media face double the risk of anxiety and depression symptoms,” he is referring to a study that asked teens only whether they use “social networks like Facebook, Google Plus, YouTube, MySpace, Linkedin, Twitter, Tumblr, Instagram, Pinterest, or Snapchat.” These platforms do not all have a lot in common, and the study does not draw any definitive conclusions about why using such platforms might be associated with an increased risk of mental-health problems. Murthy’s proposal doesn’t make clear which sites would be required to declare that they are associated with negative health outcomes. Would Roblox or Fortnite qualify? Or a newspaper with a particularly vibrant comments section?
Practical concerns aside, experts I spoke with also worried that the label puts the onus on kids and their parents rather than on the technology companies that make these sites. This is something Murthy acknowledges in his essay, noting that labeling alone won’t make social media safe for kids. “I don’t want the labels to let the social-media companies off the hook, right? Like, Oh, well, we labeled our harmful thing,” Heitner said. In other words, a warning alone may not solve whatever problems social apps might be causing.
Murthy’s proposal comes at a time when parents seem especially desperate to keep teens safe online. Haidt’s latest book about smartphones and kids, The Anxious Generation, has been on the New York Times best-seller list for weeks. Haidt told me over email that he applauds the surgeon general for calling for such labels: “We as a country are generally careful about the consumer products and medications that harm small numbers of children. Yet we have done nothing, absolutely nothing, ever, to protect children from the main consumer product they use every day.”
People are frightened. But fear isn’t always the best way to help young people. “The science simply does not support this action and issuing advisories based on fear will only weaken our trust in the institutions that wield them in this way,” Candice L. Odgers, a psychology professor at UC Irvine who studies how adolescents use digital technology (and recently wrote her own article on social-media panic for The Atlantic), told me over email. “It is time to have a real conversation about adolescent mental health in this country versus simply scapegoating social media.”