Has the internet broken the marketplace of ideas? Rethinking free speech in the Digital Age

Writer Cody Delistraty explores the limitations of free speech absolutism in the era of social media for Document’s Fall/Winter 2018 issue.

For nearly a century, America has maintained a unique tradition of distinguishing between words and acts: People can say almost whatever they like, and be as offensive as they please, so long as those words aren’t likely to lead to otherwise avoidable physical harm. In 1919, the Supreme Court ruled in Schenck v. United States that, unless words are “of such a nature and used in such circumstances as to create a clear and present danger”—as in falsely yelling “Fire!” in a crowded theater—the First Amendment to the Constitution protects any kind of speech.

John Stuart Mill, the English philosopher and economist, outlined a similar theory of speech six decades before Justice Oliver Wendell Holmes Jr. wrote the opinion in Schenck. In his 1859 treatise, On Liberty, Mill gave the example of a newspaper publishing an opinion article claiming that “corn-dealers are starvers of the poor.” Whether or not corn dealers are actually starving poor people, Mill argued, such an article would have to be permitted because it is expressed in a context in which it is unlikely to cause immediate harm to corn dealers. It would not, however, be acceptable to place those same words on a poster or a placard next to a corn dealer’s house, where an angry mob is gathered. Doing so, Mill wrote, would constitute “a positive instigation to some mischievous act.”

In Abrams v. United States, a case similar to Schenck decided later in 1919, Holmes wrote in dissent that there is a “free trade in ideas” within the “competition of the market.” This theory, that there exists a “marketplace of ideas,” is the speech equivalent of capitalism: Just as the best products triumph in a free exchange, so, too, will the best ideas. Bad ideas, like shoddy products, ultimately won’t sell and will be crowded out by good ones.

This line of thinking remains popular today. When Twitter co-founder and CEO Jack Dorsey tweeted in August that the platform would not ban conspiracy theorist Alex Jones, because Dorsey would rather have “journalists document, validate, and refute such information directly so people can form their own opinions,” he was tapping into the marketplace theory. As Justice Louis Brandeis famously wrote in his concurring opinion in the 1927 Supreme Court case Whitney v. California, “the remedy to be applied to [falsehood and fallacies] is more speech, not enforced silence.”

The marketplace theory is a particularly American construction. Along with its capitalist undertones, it also squares with Americans’ historical anti-authoritarian, limited-government inclinations. It is better, the thinking goes, to allow a few words and ideas to offend people than to risk fascist limits on freedom of speech, since unworthy words and ideas will inevitably fail to gain traction.

Given how closely the marketplace theory is linked to the First Amendment, it has become essentially immovable, an American stalwart of social philosophy. Indeed, a 38-nation Pew Research Center survey in 2015 found that Americans are more supportive of both free speech and the right to use the internet without censorship than the citizens of any other nation surveyed. The study further found that Americans are also among the most tolerant of offensive speech, including supporting the right of others to say offensive things about their religious beliefs or minority groups, or to say things that are generally considered socially inappropriate (e.g., sexually explicit speech).

“Americans don’t necessarily like offensive speech more than others,” wrote Richard Wike, Pew’s director of global attitudes research, in a summary of the study, “but they are much less inclined to outlaw it.”

The picture of unconditional support for free speech among Americans is more complicated than it appears, however. A longitudinal study using General Social Survey data from 1972 to 2016 found that Americans who identify as “slightly liberal” are increasingly against protecting racist speech. Crucially, this change has come mostly from the political center.

“Those who identify as ‘extremely liberal,’ have always been, on average, the most supportive of free speech (even for racist speakers),” wrote Justin Murphy, an assistant professor of politics and international relations at the University of Southampton, who analyzed the data. “Historical phenomena such as the left-wing Berkeley Free Speech movement of the 1960s [have] not been reversed by contemporary SJWs [Social Justice Warriors]; extreme liberals carry on that tendency” of supporting free speech. “The inference here,” Murphy continues, “is simply that SJWs are actually not extreme liberals.”

The shift in the middle—of these “slightly liberal” social justice warriors—is an important one. Today, young Americans in particular seem to be increasingly against protecting hateful speech that targets minorities and those not in positions of power. For a 2017 Brookings Institution study, 1,500 American college students across 49 states and the District of Columbia were asked, “Does the First Amendment protect ‘hate speech’?” A plurality, 44 percent, responded “no.” (Thirty-nine percent said “yes” and 16 percent said “don’t know.”)

So are the tables turning? Has support for unconditional free speech begun to recede in America? And is it time to re-examine the marketplace theory?

The American approach to free speech is, of course, not the only model in use around the world.

Since World War II, many European nations have restricted speech in some form or another. Nazi symbols and paraphernalia, Holocaust denialism, and possession of Adolf Hitler’s memoir, Mein Kampf, are variously illegal in Germany, Austria, Israel, and the Netherlands, among other countries. The question—one that Americans identified long ago—is where such limits to free speech end. Autocracy thrives when free speech is curtailed. In Poland, earlier this year, President Andrzej Duda signed a law making it illegal to accuse the country of complicity in Nazi crimes during World War II. Saying phrases like “Polish death camps” (which, though misleading, is technically correct given that some Nazi-run camps, including Auschwitz and Treblinka, were located in occupied Poland) can now result in a fine or up to three years in prison.

In France, certain types of hate speech are banned. Actress Brigitte Bardot has been fined five times for inciting racial hatred—most recently in 2008, when she criticized an Islamic tradition of slaughtering sheep during the Eid al-Adha holiday and denounced immigration from Muslim countries. And in Canada, in 1990, the Supreme Court upheld the conviction of James Keegstra, a schoolteacher who had told his students that Jews are “money loving,” “power hungry,” and “treacherous.” In that case—R. v. Keegstra—the court ruled that Keegstra was “unlawfully promoting hatred against an identifiable group by communicating anti-Semitic statements.” In the court’s decision, Chief Justice Brian Dickson distinguished between American and Canadian approaches to free speech. “There is much to be learned from First Amendment jurisprudence with regard to freedom of expression and hate propaganda,” he wrote. But “if values fundamental to the Canadian conception of a free and democratic society suggest an approach that denies hate propaganda the highest degree of constitutional protection, it is this approach which must be employed” in interpreting Canadian laws.

Perhaps there’s something to this notion of legitimate competing interests. “It is not clear to me that the Europeans are mistaken,” wrote legal philosopher Jeremy Waldron in The New York Review of Books, “when they say that a liberal democracy must take affirmative responsibility for protecting the atmosphere of mutual respect against certain forms of vicious attack.”

“In the American debate, the philosophical arguments about hate speech are knee-jerk, impulsive, and thoughtless,” Waldron writes in his 2012 book, The Harm in Hate Speech. He notes that “harm” comes from social hierarchies that oppress certain groups of people. In an ideal liberal democracy, everyone would be treated with dignity and assured of equal social status. Hate speech, he argues, chips away at both of these, and harm is effected when certain groups are “denounced or bestialized in pamphlets, billboards, talk radio, and blogs.”

“Can their lives be led, can their children be brought up, can their hopes be maintained and their worst fears dispelled in a social environment polluted by those materials?” he asks.

Even the president of the American Civil Liberties Union—which has made the protection of free speech a “bedrock” of its mission since its founding in 1920—has questioned the traditional American approach to free speech. “We need to consider whether some of our timeworn maxims—the antidote to bad speech is more speech, the marketplace of ideas will result in the best arguments winning out—still ring true in an era when white supremacists have a friend in the White House,” said Susan N. Herman at one of the group’s national conferences last year.

We need to consider, too, what happens when the marketplace breaks down—when the “best” ideas no longer crowd out the unworthy ones.

In 2018, when more than a billion social media posts are published each day, the spread of ideas has effectively become free of cost. And there is no longer a generally accepted threshold of education or professional experience required to publish one’s opinions, as there was when news came mostly from mainstream newspapers and television programs.

Criticism of how social media companies in particular handle demonstrably false ideas has increased since the disclosure that these platforms were used to disseminate “fake news” leading up to the 2016 presidential election. Since last year, representatives of Facebook, Twitter, and YouTube have repeatedly been called to testify in congressional hearings about their efforts to combat misleading content and hate speech.

In arguing that Facebook is a “platform” rather than a “publisher,” co-founder and CEO Mark Zuckerberg has endorsed the marketplace of ideas theory. “The principles that we have on what we remove from the service are, if it’s going to result in real harm, real physical harm, or if you’re attacking individuals, then that content shouldn’t be on the platform,” he told Recode’s Kara Swisher in July, in an interview in which he also stated that he would not support removal of Holocaust denial-related content from Facebook. “At the end of the day, I don’t believe that our platform should take [false content] down, because I think there are things that different people get wrong.”

Many believe that Zuckerberg and his counterparts at other social media companies are loath to be perceived as taking sides in a culture war. The cost to these companies of monitoring content more actively is significant. Last year, Facebook pledged to increase the number of its content moderators to 20,000 by the end of 2018, a figure that would increase its headcount by nearly 50 percent. But that expense pales in comparison to the revenue the company could stand to lose if its conservative users were to abandon it for a rival social network. In an August interview with Axios, Donald Trump Jr. said that he would “love” to see a conservative social network launch before the 2020 presidential election: “I’d help promote the platform and be all over that.”

For now, the social media companies are struggling to stake out a position on what constitutes grounds for removal of content that their users will find acceptable, knowing that whatever they decide will alienate some. In August, Apple, Facebook, YouTube, and Spotify removed content by Infowars and its founder, Alex Jones, from their platforms. And despite Dorsey’s August statements, Twitter, too, ultimately banned Infowars and Jones in September. Facebook and Twitter were careful to explain that their decisions were based not on Jones’s conspiracy mongering or political positions, but rather on violations of their hate speech and abusive behavior policies.

“While much of the discussion around Infowars has been related to false news,” Facebook stated in an August 6 note on its site, “none of the violations that spurred today’s removals were related to this.”

Some didn’t find Facebook’s brand of explanation persuasive. “Social Media is totally discriminating against Republican/Conservative voices,” President Donald Trump tweeted on August 18. “Too many voices are being destroyed, some good & some bad, and that cannot be allowed to happen. Who is making the choices, because I can already tell you that too many mistakes are being made. Let everybody participate, good & bad, and we will all just have to figure it out!”

This exhortation—“we will all just have to figure it out”—is a weaker formulation of the marketplace theory: Collectively, Trump implies, we can achieve consensus. But what if “figuring it out” means learning to coexist in a society in which bad ideas aren’t crowded out by good ones but instead persist alongside them in perpetuity?

Since the internet has removed physical barriers to the spread of ideas, people can seek out those whose ideas confirm their own, whether mainstream or fringe, interacting and organizing without even needing to get out of bed. Likewise, search engines and social media companies give them tools—and personalization algorithms—that filter out ideas and perspectives that challenge their own, as Eli Pariser argued in his 2011 book, The Filter Bubble, such that the idea of a competition between ideas no longer applies. What remains is not a marketplace, but rather a new kind of forum altogether.

The effects of continuing to rely on the marketplace theory are not merely hypothetical. When certain bad ideas are allowed to persist, such as racial and ethnic discrimination, it takes a long-term toll on the mental health of those discriminated against. “Racism and racial discrimination create a unique environment of pervasive, additional stress for people of racial and ethnic minorities in the United States,” the American Psychiatric Association wrote last year on its website. “These repeated traumatic interactions can result in reduced self-esteem and internalized hatred as they’re forced into conservative and apologetic thinking.”

If hate speech represents a “clear and present danger” to mental health; and if the marketplace has broken down and the internet has allowed the “bad idea” of hate speech to flourish; and if we allow that foundational American values can be in conflict with one another, as in the case of free speech and equality—is it time for us to reconsider whether and how it should be protected by the First Amendment?

It’s possible the college students and young social justice warriors, who grew up in the age of social media and search engines, have simply perceived a pivotal shift before the rest of us: that the internet has fundamentally challenged the marketplace theory, and that we can’t rely on competition alone to resolve bad ideas. And it’s possible that now is the moment when Americans need new ideas to challenge the marketplace theory—and ultimately to crowd it out.
