Meta’s reliance on AI could already be putting the company in trouble

During the July earnings call, Meta CEO Mark Zuckerberg laid out a vision for his company's valuable advertising services once they're further expanded and supported by artificial intelligence.

“In the coming years,” he said, “AI will also be able to generate creative content for advertisers and personalize it based on the way people see it.”

But as the trillion-dollar company hopes to revolutionize its ad technology, Meta’s use of AI may have already gotten the company into trouble.

On Thursday, a bipartisan group of lawmakers, led by Republican Rep. Tim Walberg of Michigan and Democratic Rep. Kathy Castor of Florida, sent a letter to Zuckerberg demanding that the CEO answer questions about Meta’s advertising services.

The letter was written in response to a March report in the Wall Street Journal that revealed federal prosecutors are investigating the company for its role in illegal drug sales through its platforms.

“Meta appears to continue to shirk its social responsibility and ignore its own community guidelines,” the letter reads. “Protecting online users, especially children and teens, is one of our top priorities. We have continued concerns that Meta is not up to the task and this dereliction of duty must be addressed.”

Zuckerberg had already faced senators who questioned him about safety measures for children using Meta's social media sites. During the Senate hearing, Zuckerberg stood up and apologized to families who felt social media use had harmed their children.

In July, the Tech Transparency Project, a nonprofit monitoring organization, reported that Meta was still making money from hundreds of ads promoting the sale of illegal or recreational drugs, including cocaine and opioids, which Meta's advertising policies prohibit.

“Many of the ads made no secret of their intentions, showing pictures of bottles of prescription drugs, stacks of pills and powders, or blocks of cocaine, and encouraging users to place orders,” the watchdog wrote.

“Our systems are designed to proactively detect and enforce infringing content, and we reject hundreds of thousands of ads that violate our drug policies,” a Meta spokesperson told Business Insider, reiterating a statement shared with the Journal. “We will continue to invest resources and improve our enforcement of this type of content. Our hearts go out to those suffering the tragic consequences of this epidemic — it will take all of us working together to stop it.”

The spokesperson did not elaborate on how Meta uses AI to moderate ads.

Ads Poke Holes in Meta’s AI System

The exact procedures by which Meta approves and moderates ads are not public.

What is known is that the company relies in part on artificial intelligence to screen content, as reported by the Journal. The outlet reported that using photos to showcase the drugs could allow the ads to slip past Meta’s moderation system.

Here’s what Meta revealed about its “ad rating system”:

“Our ad review system relies primarily on automated technology to apply the Advertising Standards to the millions of ads served through Meta Technologies. However, we do use human reviewers to improve and train our automated systems, and in some cases to manually review ads.”

The company also says it is continually working to further automate the review process so that fewer people are needed.

But the discovery of drug-promoting ads on Meta's platforms shows how policy-violating content can still slip through the automated system, even as Zuckerberg paints a picture of a sophisticated ad service that promises advertisers better targeting and content creation through generative AI.

Meta’s Troubled AI Rollout

Meta has experienced a rocky rollout of its AI-driven services outside of ad tech.

Less than a year after Meta introduced celebrity AI assistants, the company discontinued the product and focused on giving users the ability to create their own AI bots.

Meta also continues to troubleshoot Meta AI, the company's chatbot and AI assistant, which can hallucinate answers or, as BI's Rob Price experienced, impersonate a user and give out their phone number to strangers.

The technical and ethical issues surrounding AI products, not just Meta’s, are of concern to many major US companies.

A study by Arize AI, which researches AI technology, found that 56% of Fortune 500 companies consider AI to be a “risk factor,” The Financial Times reports.

When broken down by sector, the report found that 86% of technology companies, including Salesforce, say AI poses a business risk.

However, these concerns clash with the clear push by tech companies to implement AI into every aspect of their products, even as the path to profitability remains unclear.

“There are significant risks associated with the development and deployment of AI,” Meta said in a 2023 annual report, “and there can be no assurance that the use of AI will enhance our products or services or benefit our business, including our efficiency or profitability.”
