Arturo Bejar, a former senior engineer and product leader at Facebook, testified before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law on November 7, discussing teens' experiences with online harassment, insufficient reporting mechanisms, and recommendations for how to pressure Meta to reform.
His testimony was grounded in his time as an employee and what he has seen as a father of a teenage girl.
He implores Congress: “It’s time that the public and parents understand the true level of harm posed by these ‘products’ and it’s time that young users have the tools to report and suppress online abuse.”
From 2009-2015, Bejar ran the Protect & Care team, overseeing engineering, product, user research, data, and design. The team focused on three core issues:
- Site integrity: stopping attacks and malicious behavior
- Security infrastructure: compliance improvement and engineered systems
- Care: user interface and internal customer care tools
Bejar reported to the CTO, who reported directly to Meta CEO Mark Zuckerberg.
The team built and tested response surveys for users reporting inappropriate content and harassment. They designed a reporting flow that teens engaged with thoroughly, which gave the team a clearer picture of negative user experiences.
Upon his return as an independent consultant for “well-being” on Instagram in 2019, he found that the tools they had built for teenagers to get support for bullying and harassment were no longer available: “Almost all of the work that I and my colleagues had done during my earlier stint at Facebook through 2015 was gone.”
His new team had gathered troubling evidence that teens were experiencing distress and abuse on the platform.
He found that 51% of Instagram users reported in surveys each week that they had had a negative experience on the platform. Only 1% of those users reported the offending content, and of those who did, only 2% succeeded in getting the content taken down.
The internal research team detailed the staggering levels of abuse that teens aged 13-15 experience weekly:
- 21.8% of 13-15 year olds said they were the target of bullying in the past seven days.
- 39.4% of 13-15 year olds said they had experienced negative comparison in the past seven days.
- 13% of 13-15 year olds received unwanted sexual advances in the past seven days.
“Looked at over time, it is likely the largest-scale sexual harassment of teens to have ever happened,” he tells the Committee; “one that demands action.”
He reported this information to Mark Zuckerberg, Adam Mosseri, and Sheryl Sandberg via email on October 5, 2021. Sandberg expressed sympathy, Mosseri asked for a follow-up, and Zuckerberg did not respond at all. This was out of character for Zuckerberg, who had usually responded to Bejar or asked for follow-ups.
Two years after this issue was brought to the executives’ attention, there is still no way for a minor to flag a conversation on Instagram to indicate that it contains unwanted sexual advances.
Bejar’s observations reflect a practice noted by prior whistleblowers: the repeated public misrepresentation of harm that users experience.
Repeated examples of harm enabled by Meta and other companies have become publicized over the past few years through whistleblowing:
- Since 2017, whistleblowers have filed detailed SEC disclosures outlining a systemic failure at Facebook in establishing internal controls that would restrict and remove toxic and illicit content including illicit drug sales, human trafficking, and terrorist financing.
- In 2018, an anonymous whistleblower came forward with evidence that Facebook was facilitating and profiting from illegal wildlife trafficking on its social media platform.
- In 2019, an anonymous whistleblower filed a disclosure with the U.S. Securities and Exchange Commission (SEC) detailing the ways that Facebook was auto-generating pages for terrorist groups despite claims that the site could block 99% of terrorist content before it would be spotted by users.
- In 2020, Sophie Zhang, a former Meta data scientist on the Facebook Site Integrity fake engagement team, detailed in a leaked exit memo multiple occasions on which foreign national governments blatantly abused the platform to mislead their own citizens.
- In 2021, Frances Haugen filed multiple whistleblower disclosures with the SEC, based on the legal theory that Facebook is violating U.S. securities laws by misleading the public about its handling of criminal and illicit content on the site.
Whenever such reports emerge, Meta’s response is to talk about ‘prevalence’ and its investment in moderation and policy, as if those were the only relevant issues. But there is a material gap between the company’s narrow definition of prevalence and the actual distressing experiences that its products enable.
Managers, Zuckerberg included, have repeatedly ignored mechanisms of harm called to their attention, instead downplaying published findings and the results of internal research. They have a record of obfuscating the situation by quoting statistics that are irrelevant to the issues at hand.
Bejar argues that, beyond disregarding users’ distress, the way Meta responds to these problems often makes them worse, normalizing harmful behavior and encouraging unwanted contact and content.
Social media companies are not going to start addressing the harm they cause on their own. They must be pressured and compelled by regulators and policymakers to be transparent about these harms and how they are addressing them.
To catalyze action, Bejar recommends that public earnings calls include a report on the percentage of teenagers who experienced unwanted sexual advances that quarter, since having to disclose such high numbers would expose how weak the company’s accountability measures really are.
Having to publicize the information themselves would ideally galvanize the platform into introducing features that enable users to better address those harms.
“My goal in all this, as a father and as an engineer who has worked on these problems for many years, is to help regulators, policymakers, academics, journalists, and the public better understand how companies think about these problems and what it would take to address them. I also want to create a meaningful increase in support for integrity and trust and safety workers at the companies.”
Bejar sees his recommendations as a necessary start to the work that is needed to create a safer, more accountable online environment.
“I believe the only way this can happen is through regulation imposed from the outside. Meta has consistently demonstrated that it will not address these issues on its own.”