Frank Pasquale
The public sphere has endured yet another structural transformation. Megafirms like Facebook and Google have largely automated the types of decisions once made by managers at television networks or editors at newspapers. Automated recommendations are often helpful, aiding audiences as they sort through the blooming, buzzing confusion of topics online. But they are also destabilizing traditional media institutions and circuits of knowledge.
The 2016 US election featured deeply disturbing stories about the manipulation of social media for political ends. Unreliable sources proliferated, particularly in right-wing echo chambers. Politically motivated, profit-seeking, and simply reckless purveyors of untruths all prospered. A Macedonian teen churned out baseless stories, tarring Hillary Clinton with an endless series of lies, in order to score quick profits. For profit-minded content generators, the only truth of Facebook is clicks and ad payments. Bence Kollanyi, Phil Howard, and Samuel Woolley estimated that tens of thousands of the tweets “written” during the second US presidential debate were spewed by bots. These bots serve multiple functions: they can promote fake news, and when enough of them retweet one another, they can occupy the top slots in the responses to candidates’ tweets. They can also flood hashtags, making it very difficult for ad hoc publics to crystallize around an issue.
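A minimal sketch makes the amplification mechanism concrete. The account names below are hypothetical and the engagement score deliberately naive; this is not any platform’s actual ranking algorithm, only an illustration of why mutual retweeting by a small bot ring suffices to claim the top reply slots:

```python
from collections import Counter

# Replies to a candidate's tweet: (author, text). All names are invented.
replies = [
    ("human_1", "Here is a substantive question about policy."),
    ("human_2", "Fact check: that claim is misleading."),
    ("bot_a", "SHOCKING truth about the debate! [link]"),
    ("bot_b", "SHOCKING truth about the debate! [link]"),
    ("bot_c", "SHOCKING truth about the debate! [link]"),
]

bots = {"bot_a", "bot_b", "bot_c"}

# Each bot retweets every other bot's reply once; organic replies get no boost.
retweets = Counter()
for booster in bots:
    for author, _ in replies:
        if author in bots and author != booster:
            retweets[author] += 1

# Naive engagement ranking: sort replies purely by retweet count.
ranked = sorted(replies, key=lambda r: retweets[r[0]], reverse=True)
for author, text in ranked:
    print(f"{retweets[author]} retweets | {author}: {text}")
# The bot ring's identical messages occupy every top slot, while the
# unamplified human replies sink to the bottom.
```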
On Facebook, a metastatic array of fake content generators and hard-core partisan sites flooded news feeds with lies and propaganda. Facebook, as usual, disclaimed any responsibility for the spread of stories falsely claiming that the Pope had endorsed Donald Trump, or that Hillary Clinton was a satanist protecting a pedophilia ring that cooked and ate children (to give just two of the hundreds of lies that swarmed the platform). But the Silicon Valley firm bears responsibility on several levels.
Basic design choices mean that stories shared on Facebook (as well as those presented via Google’s AMP) all look the same. Thus a story from the fabricated “Denver Guardian” can appear as authoritative as a Pulitzer Prize-winning New York Times investigation. More directly, Facebook profits from fake news: the more a story is shared, whatever its merits, the more ad revenue it brings in. Finally, and most disturbingly, we now know that Facebook directly helped the Trump campaign target its voter suppression efforts at African-Americans.
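A toy model, with assumed numbers rather than Facebook’s actual economics, shows why engagement-driven ad revenue is indifferent to accuracy: every term in the calculation measures attention, and none measures truth.

```python
# Hypothetical revenue model: shares -> impressions -> clicks -> dollars.
# All rates and share counts are invented for illustration.

def ad_revenue(shares, impressions_per_share=50, ctr=0.01, cpc=0.25):
    impressions = shares * impressions_per_share
    return impressions * ctr * cpc

stories = {
    "Pulitzer-winning investigation": 2_000,    # shares (illustrative)
    "'Denver Guardian' fabrication": 200_000,   # viral hoax (illustrative)
}

for title, shares in stories.items():
    print(f"{title}: ${ad_revenue(shares):,.2f}")
# The fabrication earns 100x more, because nothing in the revenue
# function rewards, or even registers, veracity.
```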
The academic response to this imbroglio is multifaceted. Some communication scholars have rightly criticized Facebook for its apparent indifference to the problem of fake or misleading viral content. Others have focused their ire on the mainstream media, arguing that recklessness and a lack of professional responsibility at right-wing news sources (and at established media institutions like CNN and the New York Times) accelerated the rise of authoritarian candidates like Trump.
In truth, there is no contradiction between a critique of new media and deep disappointment in old media. Moreover, any enduring solution to the problem will require cooperation between journalists and coders. Facebook can no longer credibly describe itself as merely a platform for others’ content, especially when it is profiting from micro-targeted ads. It has to take editorial responsibility.
Many apologists for big tech firms claim that this type of responsibility is impossible (or unwise) for a firm like Facebook to take on. They argue that the volume of shared content is simply too high to be managed by any individual or team of individuals. But this argument ignores the reality of continual algorithmic and manual manipulation of search results and newsfeeds at large technology companies. When copyright holders or purchasers of advertising make demands, executives and engineers listen. It is time for them to respect other stakeholders as well, rather than profiting from irresponsibility.
There are powerful lessons in the current controversy over fake news. First, be wary of platforms’ convenient self-reification. Facebook may aspire to be merely a technology company. Those aspirations may express themselves as a petulant insistence that unsupervised, rather than supervised, machine learning is the ideal way to solve problems on the platform. But that “identity” is a constructed and convenient one, directly at odds with tech firms’ repeated invocation of free expression protections to shield their actions from governmental scrutiny.
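The stakes of that supervised/unsupervised distinction can be made concrete in a minimal sketch. The headlines, labels, and models below are invented for illustration and bear no relation to Facebook’s actual systems; the point is only that a supervised classifier builds human editorial judgment into its ground truth, whereas an unsupervised clusterer merely groups texts, with nothing tying its clusters to truth or falsehood.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Invented headlines; labels are human judgments (1 = fabricated, 0 = reported).
headlines = [
    "Pope endorses candidate, sources say",
    "FBI agent in email leak found dead",
    "Senate passes appropriations bill",
    "Court hears arguments in voting rights case",
]
labels = [1, 1, 0, 0]

X = TfidfVectorizer().fit_transform(headlines)

# Supervised: learns from human-labeled examples, so editorial judgment
# is the ground truth the model tries to generalize.
clf = LogisticRegression().fit(X, labels)

# Unsupervised: finds statistical clusters with no human labels at all;
# nothing connects cluster 0 or 1 to truth or falsehood.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print("supervised predictions:", clf.predict(X))
print("unsupervised clusters: ", km.labels_)
```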
Second, journalists should be more assertive about their own professional prerogatives and identity. In the aftermath of the fake news scandals, Tim O’Reilly asserted that decisions about the organization of newsfeeds and the presentation of information in them were inherently algorithmic functions, to be supervised by engineers at Facebook. Certainly the alpha geeks whom O’Reilly describes as his subjects share that view: the human editors of trending topics at Facebook were low-status contract workers, unceremoniously dumped when a thinly sourced news story asserted that conservative content was being suppressed. Shortly thereafter, Facebook was swamped by the fake news that is now the topic of so much controversy. Partnering with fact checkers is only a small first step toward genuine editorial responsibility. The real lesson is that human editors at Facebook should be restored and given more authority, not less, and that their deliberations should be open to some forms of scrutiny and accountability, like those of other professionals.
Some communication scholars have resisted the professionalization of online content creation, curation, and delivery in the name of a citizen journalism that would democratize the power of the press, extending it to anyone with a computer and an Internet connection. This is a beautiful ideal in theory. In practice, however, a failure by the de facto sovereigns of the Internet to distinguish between stories from the real Guardian and the “Denver Guardian” is not simply a neutral decision to level the informational playing field. Rather, it predictably accelerates propaganda tactics honed by millions of dollars of investment in both data brokerages and the shadowy quasi-state actors now investigated by the CIA as sources of bias, disinformation, and illegal influence in the election. Freedom for the pike is death for the minnows.
In the 1980s, Mark Fowler, then chair of the US Federal Communications Commission, dismissed the bulk of broadcast regulation as irrelevant, since he viewed television as nothing more than “a toaster with pictures.” In the 2010s, for better or worse, vast conglomerates like Facebook and Google effectively take on the role of global communication regulators. Mark Zuckerberg’s repeated insistence that Facebook is nothing more than a technology company is a sad reprise of Fowler’s laissez-faire ideology. It is also deeply hypocritical, for the firm imposes all manner of rules and regulations on both users and advertisers when those norms generate profits for it.
The public sphere cannot be automated like an assembly line churning out toasters. As Will Oremus has explained, there are aspects of the journalistic endeavor that are inherently human; so, too, are editorial functions necessarily reflective of human values. To be sure, there will be deep and serious conflicts over the proper balance between commercial interests and the public interest in assigning prominence to different sources; over how much transparency to give decisions made about such issues; and over how much control individual users should have over their newsfeeds, and the granularity of that control. But these are matters of utmost importance to the future of democracy. They can no longer be swept under the rug by plutocrats more interested in stock returns and marginal advances in artificial intelligence than in the basic democratic institutions and civil society that underwrite both.
Frank Pasquale is professor of law at the University of Maryland. His scholarship and public speaking translate complex law and policy into accessible writing and presentations. His research agenda focuses on the challenges posed to information law by rapidly changing technology. He is presently researching a book on automation and the professions.