There is nothing new about the censorship limiting freedom of expression and access to social media imposed by the governments of China, Bangladesh, Iran, North Korea or Vietnam. But today we are witnessing another type of censorship, in which social media and search engine companies control and filter information in the public sphere.
Since nearly four in ten US adults say they often get their news online, including from social media (18%), according to “The State of the News Media 2016” report by the Pew Research Center, more and more news media companies rely on social media to distribute information. News organisations are therefore increasingly publishing directly to social media, using native formats like Facebook’s Instant Articles.
As a result, news companies and social media depend on each other in an “increasingly symbiotic relationship — each looking to the other to boost traffic and business”, explains Chava Gourarie in an article for the Columbia Journalism Review. What seems to be the problem is the moderation of news and information by ‘purely’ technology companies. Unprecedented issues of accountability are now being raised, with calls for more transparency and changes in policy, as public speech increasingly takes place on social media platforms.
The year 2016 was particularly controversial for content moderation, especially for Facebook and Twitter. As Olivia Solon wrote for The Guardian, this was the year when “Facebook became the bad guy”. In January, for instance, Facebook censored a photo of Copenhagen’s famous statue of “The Little Mermaid”, published by the Danish television channel TV2. Later, in August, Twitter suspended 235,000 accounts that allegedly promoted extremism. But the last straw came when Facebook censored “The Terror of War”, a Pulitzer Prize-winning photo of a naked nine-year-old girl fleeing napalm bombs during the Vietnam War, removed under the accusation of ‘child pornography’. As a result, the editor-in-chief of the Norwegian newspaper Aftenposten, Espen Egil Hansen, called Mark Zuckerberg “the world’s most powerful editor”.
In the recently released report “Censorship in Context” by the Online Censorship organisation, Facebook was by far the most covered platform, with 74% of the total complaints. Since 2015, the organisation has received 208 reports of censorship, most of them concerning nudity or sexually explicit content. Hate speech and police brutality have also been reported, and the communities affected include women, artists, LGBTQI people, African Americans, Muslims, American conservatives, refugees and journalists, among others. In the words of the complainants, the experience of being censored on social media was “Kafkaesque”: a surreal and bizarre situation, in the style of Franz Kafka.
According to its website, Online Censorship “seeks to encourage companies to operate with greater transparency and accountability toward their users as they make decisions that regulate speech”. This raises the need for research into why certain content is being taken down, affecting free speech among user communities and especially news media companies around the world.
From censorship to fake news
Recently, with the US presidential election that resulted in the victory of Donald Trump, Facebook drew increased public attention over the alleged spread of fake news, which many claimed had influenced the final results.
In response to the accusations, Mark Zuckerberg wrote on his Facebook page that it was “extremely unlikely hoaxes changed the outcome of this election in one direction or the other”. In another post, the CEO specified that the company would take its responsibility to combat misinformation seriously through a series of mechanisms, such as stronger detection, easier reporting, third-party verification, warnings, higher-quality related articles, disrupting the economics of fake news and listening to the news industry. Despite these promising measures, Zuckerberg still would not admit the company’s significant role in shaping public opinion and delivering news, declaring Facebook a tech company that does not intend to be an “arbiter of truth”.
While Zuckerberg argues that “of all the content on Facebook, more than 99% of what people see is authentic”, a recent study conducted for BuzzFeed concluded that 75% of American adults who were familiar with a fake news headline, particularly those who cite Facebook as a major source of news, viewed the story as accurate.
In an article for The New York Times, Zeynep Tufekci highlights that “only Facebook has the data that can exactly reveal how fake news, hoaxes and misinformation spread, how much there is of it, who creates and who reads it, and how much influence it may have”. This situation marks an important shift of power from governments to private corporations, particularly in the technology field, calling into question the means by which free speech is debated and protected. These are the contours of a “black box society”, as Frank Pasquale calls it. It is time for “citizens to demand that important decisions about our financial and communication infrastructures be made intelligible, soon, to independent reviewers and that, over the years and the decades to come, they be made part of a public record available to us all”, Pasquale concludes in his book, released in 2015.
Changes in policies
Governments, media corporations and other institutions around the world are still taking their first steps in dealing with IT companies regarding content moderation and data access. While some institutions intend to benefit from the data those corporations provide, particularly to fight online terrorism and extremism, critics are asking for more transparency about those agreements and for changes in policy.
In January 2016, tech companies including Google, Facebook and Apple met with US security officials at the White House to discuss ways to fight terrorism online. However, for the director of the Freedom of the Press Foundation, Trevor Timm, “if Congress passed a law trying to outlaw some of the content that the US government wants tech companies to delete and censor, it would be struck down as unconstitutional”, infringing freedom of expression, as he told the Columbia Journalism Review.
On 31 May 2016, Facebook, Microsoft, Twitter and YouTube signed an agreement with the European Commission establishing a code of conduct “in order to prevent the spread of illegal hate speech”, as stated in the document. Still, the agreement says nothing about the guidelines under which IT companies will censor ‘suspicious’ content. For that reason, the Vice President for Standards of the Associated Press, John Daniszewski, recommends that journalists be precise when writing about the “alt-right” – a white nationalist movement – to avoid misunderstandings, especially when such content can be either removed or wrongly spread across social media.
The debate over the accountability of social media platforms as technology companies – and not as media companies – raises questions about how IT companies can determine what is or is not hate speech and what content should or should not be published in the public sphere. As Kalev Leetaru puts it in an article for Forbes: “It is one thing for a platform to announce it will delete posts that promote terrorism or that threaten another user with bodily harm, but to silently and systematically filter what users see through a distinct partisan lens, especially with regards to news reporting, adds a frightening dimension to just how much power a handful of Silicon Valley companies now wield over what we see online”.
Despite these efforts, the European code of conduct has been criticised for a number of reasons. European Digital Rights (EDRi) states that the code “downgrades the law to a second-class status, behind the ‘leading role’ of private companies that are being asked to arbitrarily implement their terms of service”. For EDRi, the agreement “exploits unclear liability rules for companies”, creating “serious risks for freedom of expression as legal but controversial content may well be deleted as a result of this voluntary and unaccountable take down mechanism”.
The United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, David Kaye, has also issued a set of recommendations for states and companies about freedom of expression in the digital age, including the idea that “any demands, requests and other measures to take down digital content or access customer information must be based on validly enacted law, subject to external and independent oversight (…)”, and that “in the context of regulating the private sector, State laws and policies must be transparently adopted and implemented”.
Finally, Online Censorship specifies in its report a set of best practices offering concrete mechanisms to increase accountability and transparency and improve user education. The organisation recommends that companies: practice transparency by expanding transparency reporting and making the content moderation process transparent; offer redress mechanisms through which users can respond; encourage best practices through user media education instead of bans or punishments; and implement responsible policymaking with publicly known guidelines and principles.
As content of genuine news value is unjustifiably filtered from the public sphere by algorithms or ‘content management’ teams the public knows little or nothing about, issues of transparency, accountability, regulation and changes in policy and design on the part of social media platforms seem to be part of a bigger problem. Governments, news media companies, institutions, universities and citizens are also being called to intervene for the benefit of citizenship and democracy. If this situation persists, will the public continue to witness a remake of Noelle-Neumann’s “Spiral of Silence”, with ‘passive-aggressive’ algorithms, in today’s online journalism?
Author: Ana Melro (Universidade do Minho)