Meta CEO Mark Zuckerberg leaves the Federal Courthouse in downtown Los Angeles after defending the company in a landmark social media addiction trial in Los Angeles, United States, on February 19, 2026.
Over a decade ago, Meta, then known as Facebook, hired researchers in the social sciences to analyze how the social network's services were affecting users. It was a way for the company and its peers to show they were serious about understanding the benefits and potential risks of their innovations.
But as Meta’s court losses this week illustrate, the researchers’ work can become a liability. Brian Boland, a former Facebook executive who testified in both trials — one in New Mexico and the other in Los Angeles — says the damning findings of Meta’s internal research and documents seemingly contradicted how the company portrayed itself in public. Juries in the two trials determined that Meta inadequately policed its site, putting kids in harm’s way.
Mark Zuckerberg's company began clamping down on its research teams a few years ago after a Facebook researcher, Frances Haugen, became a prominent whistleblower. The newer crop of tech companies like OpenAI and Anthropic subsequently invested heavily in researchers, charging them with studying the impact of modern AI on users and publishing their findings.
With AI now getting outsized attention for the harmful effects it's having on some users, those companies must ask whether it's in their best interest to continue funding such research or to suppress it.
“There was a period of time when there were teams that were created internally who could start to look at things and, for a brief window, you had some absolutely outstanding researchers who were looking at what was happening on these products with a little bit more free rein than I understand they have today,” Boland said in an interview.
Meta’s two defeats this week centered on different cases but they had a common theme: The company didn’t share what it knew about its products’ harms with the general public.

Jury members had to evaluate millions of corporate documents, including executive emails, presentations and internal research conducted by Meta’s staff. The documents included internal surveys appearing to show a concerning percentage of teenage users receiving unwanted sexual advances on Instagram. There was also research, which Meta eventually halted, implying that people who curbed their use of Facebook became less depressed and anxious.
Plaintiffs’ attorneys in the cases didn’t rely solely on internal research to make their arguments, but those studies helped bolster their positions about Meta’s alleged culpability. Meta’s defense teams argued that certain research was old, taken out of context and misleading, presenting a flawed view of how the company operates and how it views safety.
‘Both sides of the story’
“The jury got to hear both sides of the story and a very fair presentation of the facts, and they got to make a decision based on what they saw,” Boland said. “And both juries, with very different cases, came back with clear verdicts.”
Meta and Google’s YouTube, which was also a defendant in the L.A. trial, said they would appeal.
Lisa Strohman, a psychologist and attorney who served as an in-house expert consultant for the New Mexico suit, said leaders at Meta and across the tech industry may have thought they could use internal research to their advantage, winning favor from the public.
“I think what they failed to recognize is that researchers are parents and family members,” Strohman said. “And I think that what they failed to realize was that these people weren’t going to be bought.”
Whatever public relations win executives were expecting backfired when the research began to spill out to the public. The most damaging incident for Meta took place in 2021, when Haugen, a former Facebook product manager turned whistleblower, leaked a trove of documents that suggested the company knew of the potential harms of its products.
Frances Haugen, former Facebook employee, speaks during a hearing of the Committee on Energy and Commerce Subcommittee on Communications and Technology on Capitol Hill December 1, 2021, in Washington, DC.
Haugen’s “disclosures were a significant turning point globally – not just for the companies themselves but for researchers, policymakers and the broader public,” said Kate Blocker, director of research and program at the nonprofit Children and Screens: Institute of Digital Media and Child Development.
The leaks also led to major changes at Meta and in the tech industry, which began to weed out research that could be viewed as counterproductive for the companies. Many teams studying alleged harms and related issues were cut, CNBC previously reported.
Some companies also began removing certain tools and features of their services that third-party researchers utilized to study their platforms.
“Companies may now view ongoing research as a liability, but independent, third-party research must continue to be supported,” Blocker said.
Much of the internal research used in this week's trials didn't include new revelations, and many of the documents were previously released by other whistleblowers, said Sacha Haworth, executive director of the Tech Oversight Project. What the trials added, Haworth said, were "the very emails, the very words, the very screenshots, the internal marketing presentations, the memos" that offered necessary context.
As the tech industry now pushes aggressively into AI, companies like Meta, OpenAI and Google have been prioritizing products over research and safety. It’s a trend that concerns Blocker, who said that, “much like with social media before it, there is limited public visibility into what AI companies are studying about their products.”
“AI companies seem to be mostly studying the models themselves – model behavior, model interpretability, and alignment – but there is a significant gap in research regarding the impact of chatbots and digital assistants on child development,” Blocker said. “AI companies have a chance to not repeat the mistakes of the past – we urgently need to establish systems of transparency and access that share what these companies know about their platforms with the public and support further independent evaluation.”
WATCH: Regulatory pressure to follow after landmark social media verdict.
2026-03-29 07:00:01