Freedom of expression is a universal human right. But it is not an absolute right, and there are common limitations with respect to slander, libel, and incitement of violence. Freedom of expression therefore also comes with a moral obligation to wield this right with dignity and civility that protects others from harm.
‘Universities have no business policing or impeding free speech.’
Yet free speech is an important right to protect from incursion. Universities have struggled with this lately, and some have been led down the path of limiting free speech in the name of protecting diversity and inclusiveness. My own university, UBC, worked on developing a freedom of expression statement that quickly came under fire and has been shelved for the time being. The November 16, 2017 Globe and Mail editorial mused: why are we killing critical thinking on campus? The question followed an incident at Wilfrid Laurier University where a graduate student was censored by administrators for discussing controversial ideas in class. Universities especially must be the space to develop new ideas and challenge old ones. This means that universities have no business policing or impeding free speech, or protecting one group or another from speech it may object to for whatever reason. Free speech can be uncomfortable, disconcerting, and even upsetting. The only absolute limits on free speech are the prohibition on inciting violence or hatred and the statutory limitations on libel and harassment. Universities need to be a place for rigorous academic debate, the free exchange of ideas, robust and uninhibited dialogue, and even (plenty of) disagreement. Another Globe and Mail editorial argued that an unabashed embrace of free speech is the best option for our universities. I agree. It is time for some university administrators to put ideology aside and remember the core mission of a university: to further knowledge. Disagreement, uncomfortable as it may be at times, is a central element of advancing knowledge.
‘In a free society, anonymity is the adversary of civility.’
Standing up for free speech does not mean that we have to ditch civility and mutual respect. Civility in dialogue and respectful disagreement are essential elements of a democracy, as we need a level of mutual trust and cooperation to further the common good in society. Civility is necessary for reaching compromises, and without compromise free societies cannot hold together and begin to fray. So the question becomes: how can we foster greater civility in society without impeding free speech? A significant part of the answer is identifying the speaker, and making the speaker take ownership of his or her speech. In a free society, anonymity is the adversary of civility. I will pick one example where civility has gone missing: newspaper forums. (Another top contender: social media.)
As newspapers have increasingly moved online, they have added forums for readers to leave comments. Public discourse on these forums has increasingly taken on a tone of hostility, negativity, and incivility. Comments are often angry, vitriolic, disrespectful, and even vulgar or hateful. Some are openly racist or sexist. While such expressions of opinion may not be illegal in most instances, these comments are often profoundly hurtful to the author or to other commenters. It is time that civility returned to public discourse in these places. A while ago, the public editor of the Globe and Mail asked whether the newspaper should "fix or ban online comments." On one hand, there are ways to encourage more civil dialogue on newspaper forums. On the other hand, some outlets, such as Popular Science magazine, have banned online comments altogether out of concern that uncivil comments could tarnish the magazine's scientific credibility.
Academic research has looked at one possible culprit for the increasing lack of civility on newspaper comment boards: anonymity. Hiding behind anonymity, people express views without fear of repercussions. That is both good and bad. On the upside, anonymity protects individuals from retaliation and their ideas from suppression, in particular in societies that are intolerant or undemocratic. On the downside, anonymity protects uncivil speech.
A study by Santana (2013) found that "anonymous commenters were significantly more likely to register their opinion with an uncivil comment than non-anonymous commenters." Specifically, "non-anonymous commenters were nearly three times as likely to remain civil in their comments as those who were anonymous." A more recent study by Graf et al. (2017) assesses the effect of incivility and anonymity on the perceptions of the comments and the commenters. Their results suggest that civility influences trust. They conclude that "because online news readers trusted civil comments more than uncivil ones, online news media publications may be worrying too much about the potential impact of uncivil online comments on the larger public discourse or on the dissemination of information." In other words, commenters who posted civil comments were trusted, while those who posted uncivil comments were not. So should newspapers stop worrying then?
I cherish free speech in a liberal democracy, yet I condemn incivility and hatred. How can we preserve the right to freedom of expression while encouraging civility and politeness at the same time? Newspaper comment boards, and more generally social media, have the ability to manage their realms, and they can do so more effectively. The New York Times has found a good way of doing so. Posts there can be anonymous or not. The NYT gives readers the option to view "NYT Picks", "Reader's Picks", and "All" (in chronological order, with the most recent posts first). I virtually always start with the newspaper's picks, which usually present a selection of comments representative of a variety of opinions (and civil in tone at that). With "Reader's Picks", readers can vote to recommend a post, and the posts with the most recommendations appear first. Ultimately, credibility is a matter of "peer review", something the academic community knows well. In ranking the quality (and, implicitly, the civility) of comments, newspapers can find better ways of weighting reader "picks". For that to work well, it is necessary to identify the quality of the recommenders. I can imagine a multitude of ways to gauge the quality of the "peers" who recommend posts, beyond the obvious "like" or "dislike"; one possibility is sketched below.
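As a rough illustration of what reputation-weighted "peer review" of comments could look like, here is a minimal Python sketch. The names, reputation scores, and weighting rule are hypothetical choices made purely for this example, not any newspaper's actual system.

```python
from dataclasses import dataclass, field

@dataclass
class Reader:
    """A reader whose track record determines how much their 'pick' counts."""
    name: str
    # Hypothetical reputation in [0, 1], e.g. the share of this reader's own
    # past comments that moderators approved as civil.
    reputation: float = 0.5

@dataclass
class Comment:
    author: str
    text: str
    recommenders: list = field(default_factory=list)  # Readers who picked it

def weighted_score(comment: Comment) -> float:
    """Sum recommender reputations instead of counting raw recommendations."""
    return sum(reader.reputation for reader in comment.recommenders)

def rank_comments(comments: list) -> list:
    """Order comments by reputation-weighted recommendations, best first."""
    return sorted(comments, key=weighted_score, reverse=True)

# Toy usage: one comment boosted by a few high-reputation readers,
# another boosted by many low-reputation drive-by accounts.
alice, bob = Reader("alice", reputation=0.9), Reader("bob", reputation=0.8)
drive_by = [Reader(f"anon{i}", reputation=0.1) for i in range(10)]

comments = [
    Comment("carol", "Thoughtful, civil argument", recommenders=[alice, bob]),
    Comment("dave", "Angry one-liner", recommenders=drive_by),
]
for c in rank_comments(comments):
    print(f"{c.author}: {weighted_score(c):.1f}")  # carol (1.7) outranks dave (1.0)
```

The design choice here is simply that a recommendation from a reader with a history of civil contributions counts for more than an anonymous drive-by "like", which is one crude way of identifying the quality of the recommenders.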
While providing better feedback mechanisms is important, it is probably not the only mechanism needed to bring greater civility to social media. There are "media trolls" out there who impersonate and slander on purpose. Some people have started fighting back. Yair Rosenberg, a journalist who has been harassed by such trolls, joined forces with a software engineer to develop a bot that patrols Twitter for impersonations. Their "Impostor Buster" (discussed in Rosenberg's December 27, 2017 article Confessions of a Digital Nazi Hunter in the New York Times) was able to spot one particularly nasty type of social media abuse, but it was shut down by Twitter. Social media need to take greater responsibility for what happens on their platforms, and the problem of false identities (impersonation) is even worse than hiding behind anonymity. If the social media companies are unwilling to take appropriate action voluntarily, they can only blame themselves if they face tighter regulation. The European Union's 2008 Framework Decision on Combating Racism and Xenophobia will be applied increasingly to social media, and in May 2016 the European Commission and IT companies (Facebook, Microsoft, Twitter, and YouTube) jointly announced a new Code of Conduct on illegal online hate speech. As James Andrew Lewis of the Center for Strategic and International Studies puts it in his November 1, 2017 article European Union to Social Media: Regulate or Be Regulated, social media companies have a window of opportunity to rein in abuse. Lewis writes:
Artificial intelligence can provide automated editors, flagging stories and sources as doubtful or at least increasing transparency on sourcing for readers. These programs will at first make errors and flag or remove innocent content, but artificial intelligence programs can be "trained" and can learn from their mistakes and will quickly improve. Companies can create processes to review and remove dangerous content and to provide ways to dispute removal. The key is to bring mediation to the Internet, whether automatic or human.
Perhaps even the 45th U.S. president might get caught by such an AI bot when he spreads yet another falsehood. Perhaps affixing a notice such as "The content of this message has been flagged as potentially inaccurate; readers are strongly advised to establish its veracity independently" would help restore sanity. Artificial intelligence may not be a panacea for detecting misinformation and incivility, but it may help provide better real-time responses until a mediation process can sort fact from fiction and separate legitimate but pugnacious speech from illegal hate speech or defamation.
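To make the flagging idea concrete, here is a toy Python sketch of how such a notice might be affixed and a post routed to mediation. The phrase list, scoring rule, and threshold are invented stand-ins for a trained model, not any real platform's system.

```python
WARNING = ("The content of this message has been flagged as potentially "
           "inaccurate; readers are strongly advised to establish its "
           "veracity independently.")

def misinformation_score(text: str) -> float:
    """Placeholder for a trained model; returns a score in [0, 1].

    A real system would call a machine-learned classifier here. This toy
    version only looks for a couple of obviously dubious phrases.
    """
    dubious = ("fake news", "everyone knows", "they don't want you to know")
    hits = sum(phrase in text.lower() for phrase in dubious)
    return min(1.0, 0.4 * hits)

def moderate(post: str, threshold: float = 0.5) -> tuple[str, bool]:
    """Affix the warning and mark the post for human review if it scores high."""
    score = misinformation_score(post)
    if score >= threshold:
        return f"{WARNING}\n\n{post}", True   # flagged: send to mediation queue
    return post, False                        # unflagged: publish as-is

flagged_post, needs_review = moderate(
    "Everyone knows they don't want you to know the real numbers."
)
print(needs_review)  # True: two dubious phrases push the score to 0.8
```

A real deployment would replace the keyword heuristic with a learned classifier and route flagged posts to the kind of human or automated mediation Lewis describes.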
In academia, too, we can encourage civility. The way to do so is to show patience and restraint in listening, and to counter faulty arguments with logic and facts. I am confident that ultimately (although not always in the short term) the better argument wins the day, and that science and reason triumph over propaganda and disinformation. Let the better data and the better science carry the day.
Further readings:
- Sylvia Stead: Public Editor: Should the Globe fix or ban online comments?, The Globe and Mail, July 29, 2016.
- Joseph Graf, Joseph Erba, and Ren-Whei Harn: The Role of Civility and Anonymity on Perceptions of Online Comments, Mass Communication and Society 20(4), pp. 526-549, February 2017.
- Arthur D. Santana: Virtuous or Vitriolic: The Effect of Anonymity on Civility in Online Newspaper Reader Comment Boards, Journalism Practice 8, pp. 18-33, July 2013.