Are social media apps ‘dangerous products’? 2 scholars explain how the companies rely on young users but fail to protect them


PTI, Feb 5, 2024, 11:15 AM IST


“You have blood on your hands.”

“I’m sorry for everything you have all been through.”

These quotes, the first from Sen. Lindsey Graham, R-S.C., speaking to Meta CEO Mark Zuckerberg, and the second from Zuckerberg to families of victims of online child abuse in the audience, are highlights from an extraordinary day of testimony before the Senate Judiciary Committee about protecting children online.

But perhaps the most telling quote from the Jan. 31, 2024, hearing came not from the CEOs of Meta, TikTok, X, Discord or Snap but from Sen. Graham in his opening statement: Social media platforms “as they are currently designed and operate are dangerous products.”

We are university researchers who study how social media organizes news, information and communities. Whether or not social media apps meet the legal definition of “unreasonably dangerous products,” the social media companies’ business models do rely on having millions of young users. At the same time, we believe that the companies have not invested sufficient resources to effectively protect those users.

Mobile device use by children and teens skyrocketed during the pandemic and has stayed high. Naturally, teens want to be where their friends are, be it the skate park or social media. In 2022, there were an estimated 49.8 million users age 17 and under on YouTube, 19 million on TikTok, 18 million on Snapchat, 16.7 million on Instagram, 9.9 million on Facebook and 7 million on Twitter, according to a recent study by researchers at the Harvard T.H. Chan School of Public Health.

Teens are a significant revenue source for social media companies. Social media platforms collectively earned US$11 billion from users age 17 and under in 2022, according to the Chan School study. Instagram netted nearly $5 billion, while TikTok and YouTube each accrued over $2 billion. Teens mean green.

Social media poses a range of risks for teens, from exposing them to harassment, bullying and sexual exploitation to encouraging eating disorders and suicidal ideation. For Congress to take meaningful action on protecting children online, we believe three issues need to be addressed: age, business model and content moderation.

How old are you?

Social media companies have an incentive to look the other way about their users’ ages; otherwise they would have to spend the resources to moderate their content appropriately. Millions of underage users – those under 13 – are an “open secret” at Meta. Meta has described some potential strategies for verifying users’ ages, such as requiring identification or video selfies, and using AI to guess a user’s age based on “Happy Birthday” messages.

However, the accuracy of these methods is not open to public scrutiny, so it is difficult to audit them independently.

Meta has stated that online teen safety legislation is needed to prevent harm, but the company points to app stores, currently dominated by Apple and Google, as the place where age verification should happen. However, these guardrails can be easily circumvented by accessing a social media platform’s website rather than its app.

New generations of customers

Teen adoption is crucial for the continued growth of all social media platforms. The Facebook Files, a Wall Street Journal investigation based on a review of internal company documents, showed that Instagram’s growth strategy relies on teens helping family members, particularly younger siblings, get on the platform. Meta claims it optimizes for “meaningful social interaction,” prioritizing content from family and friends over other interests. However, Instagram allows pseudonymity and multiple accounts, which makes parental oversight even more difficult.

On Nov. 7, 2023, Arturo Bejar, a former senior engineer at Facebook, testified before Congress. At Meta, he had surveyed teen Instagram users and found that 24% of 13- to 15-year-olds said they had received unwanted advances within the past seven days, a finding he characterized as “likely the largest-scale sexual harassment of teens to have ever happened.” Meta has since restricted direct messaging for underage users in its products.

But to be clear, widespread harassment, bullying and solicitation are part of the landscape of social media, and it’s going to take more than parents and app stores to rein them in.

Meta recently announced that it is aiming to provide teens with “age-appropriate experiences,” in part by prohibiting searches for terms related to suicide, self-harm and eating disorders. However, these steps don’t stop online communities that promote these harmful behaviors from flourishing on the company’s social media platforms. It takes a carefully trained team of human moderators to monitor dangerous groups and enforce the platforms’ terms of service.

Content moderation

Social media companies point to the promise of artificial intelligence to moderate content and provide safety on their platforms, but AI is not a silver bullet for managing human behavior. Communities adapt quickly to AI moderation, dodging filters for banned words with purposeful misspellings and creating backup accounts to avoid being kicked off a platform.

Human content moderation is also problematic, given social media companies’ business models and practices. Since 2022, social media companies have implemented massive layoffs that struck at the heart of their trust and safety operations and weakened content moderation across the industry.

Congress will need hard data from the social media companies – data the companies have not provided to date – to assess the appropriate ratio of moderators to users.

The way forward

In health care, professionals have a duty to warn if they believe something dangerous might happen. Social media companies face no comparable obligation: when uncomfortable truths surface in internal corporate research, little is done to inform the public of threats to safety. Congress could mandate reporting when internal studies reveal damaging outcomes.

Helping teens today will require social media companies to invest in human content moderation and meaningful age verification. But even that is not likely to fix the problem. The deeper challenge is facing the reality that social media as it exists today thrives on having legions of young users spending significant time in environments that put them at risk. These dangers are baked into the design of contemporary social media, and addressing them will require much clearer statutes about who polices social media and when intervention is needed.

One reason tech companies resist segmenting their user base by age, which would better protect children, is the effect doing so would have on advertising revenue. Congress has limited tools available to enact change, such as enforcing laws about advertising transparency, including “know your customer” rules. Especially as AI accelerates targeted marketing, social media companies are going to continue making it easy for advertisers to reach users of any age. But if advertisers knew what proportion of their ads were seen by children rather than adults, they might think twice about where they place ads in the future.

Despite a number of high-profile hearings on the harms of social media, Congress has not yet passed legislation to protect children or to make social media companies liable for the content published on their platforms. But with so many young people online post-pandemic, it’s up to Congress to implement guardrails that put privacy and community safety at the center of social media design.

Authored By Joan Donovan, Assistant Professor of Journalism and Emerging Media Studies, Boston University, and Sara Parker, Research Analyst at the Media Ecosystem Observatory, McGill University (The Conversation)
