Two weeks after a whistleblower filed an updated federal complaint accusing the network of promoting terrorism, Facebook continues to face pressure over questionable content. The details of the complaint to the Securities and Exchange Commission were outlined in an Associated Press story.
- On Thursday, Facebook issued a “Community Standards Enforcement Report” that concluded “terrorist propaganda” accounted for 0.03 percent of the site’s views. The network says it finds 99.8 percent of all terrorist material before being alerted by users.
- On Wednesday, Nathaniel Gleicher, Facebook’s head of cybersecurity policy, spoke on the election issue before the House Committee on Oversight and Reform. He told the committee that the company has 30,000 people “working on safety and security across the company, three times as many as we had in 2017,” according to the written version of his testimony.
Facebook has said that much of its antiterrorism effort relies on artificial intelligence (AI). But several Facebook executives have painted a less positive picture of the company’s content moderation efforts.
- On Monday, The Verge quoted Facebook’s top AI scientist as saying the company is “years away from being able to fully shoulder the burden of moderation, particularly when it comes to screening live video.”
- On Friday, in a New York Times profile, Mike Schroepfer, Facebook’s chief technology officer, admitted that AI was not going to solve the problem completely.
In two of the interviews, he started with an optimistic message that A.I. could be the solution, before becoming emotional. At one point, he said coming to work had sometimes become a struggle. Each time, he choked up when discussing the scale of the issues that Facebook was confronting and his responsibilities in changing them. “It’s never going to go to zero,” he said of the problematic posts.
The story describes a Facebook meeting where images of broccoli and marijuana were shown side by side.
The problem was that the marijuana-versus-broccoli exercise was not just a sign of progress, but also of the limits that Facebook was hitting. Mr. Schroepfer’s team has built A.I. systems that the company now uses to identify and remove pot images, nudity and terrorist-related content. But the systems are not catching all of those pictures, because there is always unexpected content, which means millions of nude, marijuana-related and terrorist-related posts continue reaching the eyes of Facebook users.
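Why a system can be highly accurate in aggregate yet still miss millions of posts comes down to thresholding: a classifier produces a confidence score for each image, and policy decides what happens at each score range. Below is a minimal Python sketch of that decision logic; the `policy_violation_score` function and the threshold values are hypothetical stand-ins for illustration, not Facebook’s actual system.

```python
# Minimal sketch of threshold-based image moderation. Illustrates why a
# classifier that performs well on average still lets borderline content
# through. The scoring function below is a hypothetical stand-in, not
# Facebook's real model.

def policy_violation_score(image):
    """Hypothetical model output: probability in [0, 1] that the image
    violates policy (e.g., marijuana rather than broccoli)."""
    return image.get("model_score", 0.0)  # stand-in for a trained vision model

REMOVE_THRESHOLD = 0.90  # assumed value; set high to limit false removals
REVIEW_THRESHOLD = 0.50  # assumed value; below this, content is allowed

def moderate(image):
    score = policy_violation_score(image)
    if score >= REMOVE_THRESHOLD:
        return "remove"        # confident violation: taken down automatically
    elif score >= REVIEW_THRESHOLD:
        return "human_review"  # borderline: queued for a moderator
    return "allow"             # likely benign; false negatives land here

# Borderline or novel images score near or below the thresholds and are
# allowed or merely queued, which is how violating posts still reach users.
print(moderate({"model_score": 0.97}))  # remove
print(moderate({"model_score": 0.72}))  # human_review
print(moderate({"model_score": 0.40}))  # allow (a possible miss)
```

Raising the removal threshold cuts down wrongful takedowns of broccoli but lets more marijuana through, and vice versa; no single threshold drives the miss rate to zero, which is the trade-off behind Schroepfer’s “it’s never going to go to zero.”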
- But can AI tell the difference between ISIS toilet paper and ISIS propaganda? The question came up last week at the inaugural meeting of a group called the Global Research Network on Terrorism and Technology. On a videocast of the session, Erin Saltman, who handles counterterrorism efforts for Facebook, said:
Everyone uses the Internet of Things to do things better, faster, easier, cheaper, and that includes the unfortunate cases where people are strategizing real-world harm, and that is something that, as tech companies, we have to face.
In her academic research into the process of radicalization, Saltman saw the growing role of the internet.
While we can’t blame the Internet entirely, as this violence and terrorism predates the internet, we can see that there is a catalyst role.
- In the meantime, The Washington Post reports that the U.S. declined to endorse an international effort designed to curb extremism online. White House officials said free-speech concerns prevented them from joining the campaign, which emerged in response to live-streams of shootings at two New Zealand mosques.
- And, the auto-generated Facebook page identified in the AP report and whistleblower complaint remains online. As of this morning, more than 4,400 users like the page for the Syrian terrorist group Hay’at Tahrir Al Sham.
Take Action! Urge the SEC to investigate and hold Facebook accountable!