What the FTC Learned About Social Media

Thursday, September 26, 2024

Digital Beat

During the Trump Administration, the Federal Trade Commission ordered nine of the largest social media and video streaming services—Amazon, Facebook (which is now Meta), YouTube, Twitter (now known as X), Snap, ByteDance (which owns TikTok), Discord, Reddit, and WhatsApp—to provide data on how they collect, use, and present personal information, their advertising and user engagement practices, and how their practices affect children and teens.

On September 19, 2024, the FTC released findings on how social media and video streaming companies harvest an enormous amount of Americans’ personal data and monetize it to the tune of billions of dollars a year. In A Look Behind the Screens: Examining the Data Practices of Social Media and Video Streaming Services, the FTC shared information about how these platforms collect, track, and use personal and demographic information, how they determine which ads and other content are shown to consumers, whether and how they apply algorithms or data analytics to personal and demographic information, and how their practices impact children and teens.

Here we take a quick look at the FTC’s major findings and recommendations around five issues: data practices, advertising, algorithms and data analytics, children and teens, and competition. 

I. Data Practices  

The companies all reported collecting data, including personal information,1 from and about consumers. The companies collected data not only about consumers’ activity on their services but also about consumers’ activity off of the services. In some cases, companies were able to collect data on people who weren’t even registered for their services. All of this consumer information was used for targeted advertisements; content promotion and user engagement; and inferring or deducing even more information about the user. The companies design their often opaque systems and procedures to share consumers’ data, leaving consumers in the dark about the breadth of sharing, the parties with whom the information is shared, and the purposes of that disclosure. 
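How “off-service” and non-user collection can work is easier to see with a small example. The sketch below imagines server-side event logging behind an embedded tracking pixel or SDK: activity is keyed to a persistent device identifier, so data accumulates whether or not that identifier ever maps to a registered account. The class names, fields, and storage approach are illustrative assumptions for this example, not any company’s actual system.

```python
# Hypothetical sketch: logging activity reported by an embedded pixel or SDK.
# Events are keyed to a persistent device identifier, so data accumulates even
# for devices that never correspond to a registered user account.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class TrackingEvent:
    device_id: str        # persistent identifier, e.g., a cookie or device fingerprint
    url: str              # third-party page where the embedded pixel fired
    referrer: Optional[str]
    timestamp: str


class EventStore:
    def __init__(self) -> None:
        self.events: dict[str, list[TrackingEvent]] = {}
        self.account_for_device: dict[str, str] = {}  # populated only for registered users

    def record(self, device_id: str, url: str, referrer: Optional[str] = None) -> None:
        # Nothing here requires the device to belong to a registered user.
        event = TrackingEvent(
            device_id=device_id,
            url=url,
            referrer=referrer,
            timestamp=datetime.now(timezone.utc).isoformat(),
        )
        self.events.setdefault(device_id, []).append(event)


store = EventStore()
store.record(device_id="a1b2c3", url="https://example-news-site.test/article")
print(len(store.events["a1b2c3"]))  # 1
```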

In general, the FTC finds that:

  • Companies fail to adequately police their data handling.  
  • Companies are in the best position to implement privacy protective measures—but they often do not. 
  • Companies’ reported practices do not consistently make consumers’ privacy a priority.

FTC staff recommends that:

  • Congress should enact comprehensive federal privacy legislation that limits surveillance and grants consumers data rights. At a minimum, such comprehensive federal privacy legislation should grapple with the fact that protecting users’ privacy requires addressing the business incentives that firms currently face, which often pit user privacy against monetization.  Users should have baseline protections, such as default safeguards against the over-collection, monetization, disclosure, or undue retention of personal data.  Users should be able to proactively choose whether they do or do not wish to be tracked, and they should be able to make this choice freely, rather than under conditions that restrict their autonomy.  Users should also have the right to access the data collected from or about them and to delete data collected from or about them.  Any comprehensive federal privacy legislation should also include a strong data minimization framework and require that social media and video streaming services maintain uniform and specific data retention and deletion practices, preventing them from keeping consumer data for any ambiguous and vague “business purpose.” 
  • Companies can and should do more to protect consumers’ privacy. 
  • Companies should adopt privacy policies that are clear, simple, and easily understood. 

II. Advertising Practices 

Advertising—and, in particular, targeted advertising—powers the business model of many of the companies and accounts for most of their revenue. Companies offered advertisers precise targeting of ads to users who exhibited specific characteristics and met defined criteria. This targeting was achieved through the collection of enormous volumes of user data gathered in a multitude of ways. 
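As a rough illustration of what “defined criteria” can look like in practice, the hypothetical sketch below matches a stored user profile against an advertiser’s audience specification. The field names, criteria, and matching logic are assumptions made for this example; no real ad platform’s interface is implied.

```python
# Hypothetical audience matching: the platform checks whether a user's stored and
# inferred attributes satisfy an advertiser's targeting criteria.
def matches_audience(user_profile: dict, criteria: dict) -> bool:
    age_min, age_max = criteria.get("age_range", (0, 200))
    if not age_min <= user_profile.get("inferred_age", -1) <= age_max:
        return False
    if criteria.get("locations") and user_profile.get("location") not in criteria["locations"]:
        return False
    # Interests are often inferred from on- and off-platform activity.
    required = set(criteria.get("interests", []))
    return required.issubset(user_profile.get("inferred_interests", set()))


criteria = {"age_range": (25, 34), "locations": {"US"}, "interests": ["fitness"]}
profile = {"inferred_age": 29, "location": "US", "inferred_interests": {"fitness", "travel"}}
print(matches_audience(profile, criteria))  # True
```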

Targeted advertising can pose serious privacy risks to consumers—and it is far from clear that users know the extent to which they are being tracked when they engage with a given service. It is even more unlikely that consumers know that even their activity off of many services may be automatically collected and shared with the companies.

The FTC finds that:

  • The collection of personal data by many of the companies subjects users to the risk of significant privacy invasions and other risks, including from ad targeting. 
  • Companies that deploy targeted advertising also extensively track their users. 
  • The lack of transparency in the companies’ advertising ecosystem prevents users from being aware that their data is being collected and packaged for ad targeting. 

FTC staff recommends that:

  • Ad targeting restrictions, particularly restrictions based on sensitive categories, would benefit from greater clarity and consistency.
  • Companies should not collect users’ sensitive information via privacy-invasive ad-tracking technologies. 

III. Algorithms, Data Analytics, and AI 

The companies reported broad usage of algorithms and data analytics,2 including applying them to consumer personal information and demographic information3 in order to analyze, process, and infer information, and to automate decisions or outcomes that power their services and the user and non-user experiences on the services. 

Artificial intelligence (AI) is an ambiguous term with many possible definitions, but it “often refers to a variety of technological tools and techniques that use computation to perform tasks such as predictions, decisions, or recommendations.”

All of the companies referenced using algorithms, data analytics, or AI. Most used these technologies for a variety of decision-making functions that the companies described as key to the functioning of their services at scale, including services directed to children and those that permit teens to use them. The most commonly reported uses were:  

  • for content recommendation, personalization, and search functionality, and to boost and measure user engagement; 
  • for content moderation purposes or in connection with safety and security efforts;  
  • to target and facilitate advertising;  
  • to infer information about users; and
  • for other business purposes, such as to inform internal strategic business decisions or to conduct research. 

Other notable uses included assisting with the deployment of special effects and enabling language translation or accessibility features such as closed captioning. 
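To give a sense of what engagement-driven recommendation can involve, the simplified sketch below ranks candidate feed items by a weighted mix of inferred interests, predicted engagement, and recency. The signals, weights, and field names are assumptions for illustration only, not a description of any company’s ranking model.

```python
# Illustrative engagement-driven ranking: score candidate items for a user and
# sort the feed by that score. Weights and signals are invented for the example.
def rank_feed(candidates: list[dict], user_interests: set[str]) -> list[dict]:
    def score(item: dict) -> float:
        interest_overlap = len(user_interests & set(item["topics"]))
        return (
            2.0 * interest_overlap            # personalization from inferred interests
            + 1.0 * item["predicted_clicks"]  # engagement prediction from past behavior
            + 0.5 * item["recency"]           # slight boost for newer content
        )

    return sorted(candidates, key=score, reverse=True)


candidates = [
    {"id": "a", "topics": ["sports"], "predicted_clicks": 0.2, "recency": 0.9},
    {"id": "b", "topics": ["cooking", "travel"], "predicted_clicks": 0.6, "recency": 0.4},
]
print([item["id"] for item in rank_feed(candidates, {"cooking"})])  # ['b', 'a']
```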

Key findings on algorithms, data analytics, and AI include:

  • Many companies rely heavily on the use of algorithms, data analytics, and/or AI—and the ingestion of personal information—to power their services. 
  • The companies generally feed extensive amounts of personal information into their automated systems, some of which is sourced from the users themselves—but they also often collect information passively about users’ and non-users’ activities across the Internet and in the real world (e.g., location information), and some companies collect information from data brokers and other third parties.
  • Consumer harms are further compounded where systems are biased or unreliable, or where they can be used to infer sensitive information about individuals, such as by labeling them with sensitive demographic categories (a simplified sketch of such inference follows this list).
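The inference concern in the last finding can be illustrated with a deliberately simple rule-based sketch: behavioral signals collected from browsing or search activity are mapped onto sensitive category labels. The category names, trigger signals, and rules are hypothetical, invented for this example rather than drawn from the report.

```python
# Hypothetical rule-based labeling: behavioral signals mapped to sensitive categories.
SENSITIVE_CATEGORY_RULES = {
    "health_condition_interest": {"medication_pages", "symptom_searches"},
    "religious_interest": {"faith_group_pages"},
}


def infer_sensitive_labels(observed_signals: set[str]) -> list[str]:
    # A label is applied if any of its trigger signals appears in the user's activity.
    return [
        label
        for label, triggers in SENSITIVE_CATEGORY_RULES.items()
        if triggers & observed_signals
    ]


print(infer_sensitive_labels({"symptom_searches", "news_pages"}))
# ['health_condition_interest']
```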

FTC staff recommends that:

  • Companies should address the lack of access, choice, control, transparency, explainability, and interpretability relating to their use of automated systems. 
  • Companies should implement more stringent testing and monitoring standards.
  • Legislation and regulation are badly needed. Self-regulation is failing. In particular, rules that set forth clear guardrails for the responsible use of automated systems are overdue. The companies’ differing, inconsistent, and sometimes inadequate approaches to monitoring and testing are one example of why robust regulatory tools and radically more transparency are needed.

IV. Children and Teens 

While many social media platforms state that they are only for those thirteen and over, it is well known that both children and teens use them. Research indicates that approximately 95 percent of teenagers and 40 percent of children between the ages of eight and 12 use some form of social media.

While the use of social media and digital technology can provide many positive opportunities for self-directed learning, forming community, and reducing isolation, it also has been associated with harms to physical and mental health. These harms include exposure to bullying, online harassment, child sexual exploitation, and exposure to content that may exacerbate mental health issues, such as the promotion of eating disorders, among other things.

The Children’s Online Privacy Protection Rule (the COPPA Rule) imposes certain requirements on operators of websites or online services directed to children under 13 years of age, and on operators of other websites or online services that have actual knowledge that they are collecting personal information online from a child under 13 years of age. Before a company covered by the COPPA Rule collects personal information from children, it must provide direct notice to parents and obtain verifiable parental consent. The COPPA Rule also requires that operators:

  • post a clear and comprehensive online privacy policy describing their information practices for personal information collected online from children;
  • provide parents access to their child’s personal information to review or have the information deleted;
  • give parents the opportunity to prevent further use or online collection of a child’s personal information;
  • maintain the confidentiality, security, and integrity of information they collect from children, including by taking reasonable steps to release such information only to parties capable of maintaining its confidentiality and security;
  • retain personal information collected online from a child for only as long as is necessary to fulfill the purpose for which it was collected and delete the information using reasonable measures to protect against its unauthorized access or use; and
  • not condition a child’s participation in an online activity on the child providing more information than is reasonably necessary to participate in that activity.

The FTC enforces the COPPA Rule.

Key findings include:

  • Social media platforms bury their heads in the sand when it comes to children using their services. 
  • Only some social media services have a process by which parents/legal guardians can request access to the personal information collected from their child.
  • The platforms often treat teens as if they are traditional adult users.

FTC staff recommends that:

  • COPPA should be the floor, not the ceiling. Platforms should view the COPPA Rule as representing the minimum requirements and provide additional safety measures for children as appropriate.  
  • Social media services should not ignore the reality that there are child users.
  • Social media companies should provide parents/legal guardians an easy way to manage their child’s personal information.
  • Social media companies should do more to protect teen users of their services.
  • Congress should enact federal privacy legislation that protects teen users online. COPPA applies only to children. Society does not recognize teens as adults, yet there is no legislation protecting teens’ privacy in the digital world. For many reasons, teens should be afforded legal rights and protections that take into account the unique harms posed to them online and the significant developmental changes they undergo.

V. Competition

Lack of competition can lead to consumer harm. When companies eliminate competitive threats and do not face adequate competitive checks, quality, innovation, and customer service suffer. Competition is harmed by unlawful mergers or illegal agreements that lead to market power and decreased competition. Competition is also impaired when dominant firms exclude actual or potential rivals through exclusionary or other unlawful conduct.

In digital markets, acquiring and maintaining access to significant user data can be a path to achieving market dominance and building competitive moats that lock out rivals. The competitive value of user data can incentivize firms to prioritize acquiring it, even at the expense of user privacy and sometimes the law. Similarly, market concentration and market power can incentivize firms to deal with others on exclusionary terms or in ways that entrench and enhance their market position. Data abuse can raise entry barriers and fuel market dominance, and market dominance can, in turn, further enable data abuses and practices that harm consumers in an unvirtuous cycle.  Meanwhile, the lack of competition can mean that firms are not competitively pressured to improve product quality with respect to privacy, data collection, and other service terms, such that users lack real choice and are forced to surrender to the data practices of a dominant company or a limited set of firms.  

FTC staff recommends that:

  • Antitrust enforcers should carefully scrutinize potential anticompetitive acquisitions and conduct.  
  • The staff recommendations above could also promote competition. 

 

FTC commissioners voted 5-0 to issue this staff report.

For more, see A Look Behind the Screens: Examining the Data Practices of Social Media and Video Streaming Services.

Notes

  1. Here “personal information” is defined as information about a specific individual or device, including: (1) first and last name; (2) home or other physical address, including street name and name of city or town, or other information about the location of the individual, including but not limited to location from cellular tower information, fine or coarse location, or GPS coordinates; (3) e-mail address or other online contact information, such as an instant messaging user identifier or screen name; (4) telephone number; (5) a persistent identifier, such as a customer number held in a ‘cookie,’ a static Internet Protocol (‘IP’) address, a device identifier, a device fingerprint, a hashed identifier, or a processor serial number; (6) nonpublic communications and content, including, but not limited to, e-mail, text messages, contacts, photos, videos, audio, or other digital images or audio content; (7) Internet browsing history, search history, or list of URLs visited; (8) video, audio, cable, or TV viewing history; (9) biometric data; (10) health or medical information; (11) demographic information; or (12) any other information associated with that user or device.
  2. Here “algorithms or data analytics” is defined as the process of examining and analyzing data in order to find patterns and draw conclusions about that data, whether by machine or human analyst.
  3. Demographic information refers to characteristics of human populations, such as age, ethnicity, race, sex, disability, and socio-economic information.

 
