The difficulty of moderating the ocean of content that gets posted on social networks by billions of users every day was obvious even before former President Donald Trump’s trolling forced Facebook and other platforms to block his accounts earlier this year. Differentiating genuine harassment or abuse from friendly banter, identifying harmful images and videos from among the tens of millions uploaded every day, and distinguishing between authentic political messages and professional trolling operations are hard enough just for English-speaking audiences in North America; these challenges are compounded when different languages and cultural norms are involved. What sounds like innocuous phrasing when translated into English could be dangerous hate speech in another language or culture, and automated systems—and even human moderators—are often not good at making those distinctions.
There are political challenges as well. Authoritarian regimes have become expert at navigating the terms of service for the major platforms, and at using them to flag content they disagree with; some countries have used the problem of “fake news” as an excuse to legislate the truth. How are the digital platforms handling these challenges? And what are the potential downsides of their failure to do so? To answer these and related questions, we convened a virtual panel discussion using CJR’s Galley platform, with a group of experts in content moderation and internet governance and policy around the world.
The group included Jillian York, the director of international freedom of expression for the Electronic Frontier Foundation; Michael Karanicolas, executive director of the Institute for Technology, Law, and Policy at the University of California, Los Angeles (UCLA); Emma Llansó, director of the Free Expression Project at the Center for Democracy and Technology; Jenny Domino, who leads the Internet Freedom Initiative for the Asia-Pacific region at the International Commission of Jurists; Sarah Roberts, an associate professor of information studies at UCLA; Rasha Abdulla, a professor of journalism at The American University in Cairo; Agustina Del Campo, director of the Center for Studies on Freedom of Expression and Access to Information at the University of Palermo in Argentina; and Tomiwa Ilori, a researcher at the Centre for Human Rights at the University of Pretoria in South Africa.
York said that in the past she might have drawn a brighter line between the behavior of democratic and non-democratic nations when it comes to freedom of expression on digital platforms, but not now. “Frankly, over time, a wide range of countries have demonstrated that they’re more interested in putting on a show and stifling dissent than they are in finding real solutions that serve people,” she said. In Mauritius, York noted, the government has put forward a law that would create an official record of everything posted to social-media platforms in order to stop disinformation and harassment, because Facebook doesn’t offer enough moderation in the country’s native language. Even Punjabi, which has more than 100 million native speakers, isn’t offered as an official language on Facebook, York said, which makes moderating content even more difficult.
When it comes to takedowns, Llansó said a number of countries have become quite adept at getting the content they don’t like removed by complaining about it breaching the terms of service at a platform like Facebook. “Many governments are getting much wiser to the fact that if they can get a social media site to agree that content violates their TOS (whether it’s illegal under the country’s laws or not), that content will come down worldwide,” she said; such a move, she added, is much faster than going to court to get an injunction. When it comes to potential solutions, Facebook often argues that automated moderation and algorithms will help, but language is a problem there as well, according to Llansó. “A lot of the use of automation, especially for language-processing, is still focused on high-resource languages, like English, where there is a lot of content available to study and evaluate,” she said.
There has always been a tension between the transnational nature of online speech and the fundamentally local way in which speech has always been regulated, said Karanicolas. But now, the fact that authority over these decisions is “being delegated to private sector entities, who in many cases have only a threadbare understanding of the local cultural and expressive contexts across many of the markets where they operate, makes things vastly more difficult,” he said. “There are huge gaps in accountability, transparency, and due process which need to be addressed.” And while violence in Sri Lanka or Myanmar grabs headlines, Karanicolas said, “these are just particularly severe manifestations of a broader problem, where US-based companies are fundamentally disconnected from the contexts that they operate in.” CJR’s discussion series continues all this week.
Here’s more on the platforms and moderation:
- Amplification: Some argue that platforms such as Facebook should be required to moderate not just the content uploaded but the amplification of that content via the company’s algorithms. This is more complicated than it sounds, Daphne Keller, director of the Program on Platform Regulation at Stanford’s Cyber Policy Center, writes in a piece for the Knight First Amendment Institute at Columbia University. “Some versions of amplification law would be flatly unconstitutional in the U.S., and face serious hurdles based on human or fundamental rights law in other countries,” Keller says. “Others might have a narrow path to constitutionality, but would require a lot more work than anyone has put into them so far.”
- Test subjects: Countries where democracy is most fragile are test subjects for the platforms’ content-moderation policies, Karanicolas wrote in a piece for Slate in 2020 as part of a collaboration between the magazine and the Technology, Law, & Security Program at American University. This is “obviously problematic,” he wrote. “When things go wrong there, the results can be an order of magnitude worse than anything that America is likely to experience, as the violent dismantling of democratic structures in the Philippines and Brazil illustrate.”
- Repression: When it comes to navigating the terms of service offered by digital platforms in order to get content removed, Israel is one of the countries at the head of the pack—it uses such complaints against content related to Palestine on an almost daily basis, according to a number of advocacy groups. “I’ve been writing about this topic for a long time, and I have not seen anything of this scale,” said Marwa Fatafta, of the human-rights advocacy group Access Now, referring to how often Palestinian content and accounts are removed from Facebook and other digital platforms. “It’s so brazen and so incredible, it’s beyond censorship—it’s digital repression.”
Other notable stories:
- CNN reports that the Trump administration fought for six months to obtain the email records of one of the network’s reporters, and insisted that the whole process be protected by an extraordinary level of secrecy, according to an account published Wednesday by CNN’s lead attorney. The attempt began in 2020, under then-Attorney General William Barr, with a demand for two months of email logs from Barbara Starr, CNN’s Pentagon correspondent. The New York Times has asked a court to unseal filings made by the Justice Department in a similar case involving four of its reporters, and Attorney General Merrick Garland has scheduled a meeting with the Times, the Washington Post, and CNN to discuss these kinds of investigations.
- The Department of Justice recently said it will no longer try to compel media outlets to reveal their sources during leak investigations. For CJR, Anna Diakun and Trevor Timm detail a few outstanding questions for the department’s relationship with the media. “As the Times noted, the DOJ’s statement appears to leave some ‘wiggle room’ surrounding the circumstances in which the policy applies, limiting it to when journalists are ‘doing their jobs.’ What exactly does this mean?”
- Updates that Apple is making to the privacy policies for its iOS devices could cause problems for newsletter writers, according to NiemanLab. “One of the few bright spots in the news business in recent years has been this little boomlet in newsletters,” writes Josh Benton. “Newsletter advertising is hardly the data-hoarding beast a Facebook ad is, but it does rely heavily on one little tracker: the tiny tracking pixels embedded in many emails that tell the sender whether their email has been opened.” That in turn is an important part of how newsletter publishers sell ads, Benton says.
- A new report from the Coalition for Women in Journalism looked at threats and cases of violence against female reporters between January and April of this year, and documented 348 such cases during that period—an increase of more than 130 percent from the same period in 2020. Seven female journalists were killed in the first quarter of this year, in Afghanistan, Algeria, Cameroon, and the United States. Turkey, Myanmar, and the United States were the countries with the highest number of cases of violence, the group said.
- Lynsey Chutel, Lauren Harris, Linda Kinstler, Tony Lin, Zainab Sultan, and Stephania Taladrid won a Mirror Award on Wednesday for the best story on media coverage of the COVID-19 pandemic, which appeared in CJR in September 2020. Lauren Markham won a Mirror Award for her profile of Lizzie Johnson, a fire reporter for the San Francisco Chronicle, which appeared in CJR in March 2020. The full list of winners is here.
- The Committee to Protect Journalists said Wednesday that executive director Joel Simon plans to step down by the end of the year, after almost a quarter-century at the organization, including 15 years in his current role. Board Chair Kathleen Carroll will lead a committee to identify a successor to Simon, who said he will assist with the transition. CPJ has hired Spencer Stuart, a global executive search advisory firm, to help with its search. Simon, 56, joined CPJ as Americas program coordinator in 1997 and became deputy director in 1999. He was appointed executive director in 2006.
- Guardian Media Group chief executive Annette Thomas has quit the publishing company after an internal battle with Katharine Viner, editor of the Guardian, according to the Financial Times. “Thomas, who only joined in March last year, will leave the company at the end of the month after clashing with Viner over control of the group and its strategy,” the paper reported. “Her decision comes as the Guardian reviews its unique structure, which gives the newspaper’s editor a high degree of independence by making her accountable to the board of the Scott Trust, the owner of GMG,” the FT said.
- Los Angeles magazine profiled Yashar Ali, a contributor to New York magazine and HuffPost whose Twitter account and Substack newsletter have become journalistic outlets in their own right. “Part investigative journalist, part gossip columnist, and part trusted confidante, Ali is a uniquely twenty-first-century media personality,” the profile said. His tweets have helped “topple not one but two Fox News anchors (Kimberly Guilfoyle and Eric Bolling) and his Twitter bombshells during the Mueller investigation made even Jared Kushner sweat.” But the magazine adds that Ali’s past behavior has raised questions, and led to at least one lawsuit from a San Francisco power broker.
Mathew Ingram is CJR’s chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.