Trolls and Internet Hate Culture

When TIME magazine notices that online hatred and trolling are serious problems, you know they’ve hit the mainstream.

I don’t expect to break any new ground here given my past posts on this topic. However, I find it noteworthy that TIME magazine–one of the most milquetoast publications that could grace one’s coffee table–finally ran a cover story about online trolling and hateful behavior. Joel Stein wrote it, and he is as decent a person as any to have tackled it.

It did make me wonder, though, where things went so wrong. It’s something I’ve thought about many times over the years. I’ve been on the Internet in one form or another since the early 1990s, and have been actively participating in online discussions since 1996. It’s hard to believe I’ve been doing this for 20 years! But I shouldn’t let my early experiences color my perceptions too much. Online communities existed long before I came onto the scene. Dial-in bulletin board systems and Usenet newsgroups were facilitating online discussion before I was even born.

An obvious concern, even in a very young online community, is how to moderate it. How do you curb bad behavior? How do you set norms and etiquette? First, someone has to be in charge. Every BBS had an operator who could ultimately decide what did or didn’t appear on their BBS, which meant each one was run to serve the unique needs, tastes, and preferences of the operator and the users. When the two came into conflict, of course, the operator’s edict reigned supreme. On Usenet, a relatively small number of newsgroups were moderated–most were, essentially, the Wild West, open to anyone.

If users did not like how a given BBS or Usenet group was run, they always had the option (if they possessed the necessary hardware and technical savvy) of creating their own. This was viewed very much as a feature: everyone could have more or less the experience they wanted, and if it didn’t already exist, they had the tools to create it themselves. Usenet users could also, rather than relying on moderation, use extensive filtering and sorting tools to curate what they saw (and perhaps more importantly, what they didn’t see).

This philosophy of freedom and democratization of discussion tools was seen as extremely important by the Internet’s early academic and scientific architects and users. The specifications for various online protocols are free and open for anyone to implement and use. The basic infrastructure of the Web–HTTP and HTML–was given away to the public to facilitate open development and collaboration. These ideas are very utopian and part of an overall philosophy of a “gift economy,” in which people work and share the products of that work, not for personal gain, but for the betterment of the overall community.

It was a nice idea. In it, however, was always a strain of free speech maximalism. In brief, this is the idea that any censorship whatsoever is morally repugnant–freedom of speech is treated as an absolute, inviolable right. Hate speech, aggressive language, rampant lying, blatant propaganda, threats and harassment–all are acceptable, at least to some people who subscribe to this mindset. By no means do I wish to suggest that they are monolithic. There are those who accept, if grudgingly, that there are necessarily a few limitations on speech. Direct threats against a person (“I’m going to come to your house and kill you!”) are generally considered beyond the pale in even the most unrestricted environments. Once one crosses over into areas like hate speech, bigoted propaganda, and attitudes that are inherently (if not obviously) hostile, hateful, or violent, things get a lot more dicey.

Companies like Twitter and Reddit were founded on the notion that online communities, much like the telephone system, are common carriers. Their position has been that they merely provide a platform rather than serve as intermediaries who are in any way responsible for content. Again, this is not a terrible philosophy in a world where people can be expected to, by and large, behave themselves. But in an environment of anonymous, disposable accounts and zero accountability, it’s a recipe for patterns of behavior in which people who try to play by the rules are victimized over and over and left with little or no recourse, while people with harmful intentions are shielded by lofty but naive ideals.

The fact is that online communities that eschew moderation almost always become cesspools of bigotry, hatred, and generalized hostility. Twitter and Reddit have, albeit belatedly, caught on to this problem. Perhaps most important to those companies, they’ve seen the potential harm it can inflict on their financial prospects. Money trumps principles–no surprise there. Platforms on which public figures refuse to participate for fear of being inundated with abuse are going to have a hard time being profitable. Suddenly, we’re not talking about common carriers anymore, but mediated software platforms whose operators exercise control over content.

One upside to this reluctance to moderate, in my opinion, is that platform owners are unlikely to force their users to adhere to any particular viewpoint. Twitter and Reddit don’t care if you’re liberal or conservative, atheist or Christian, black or white, man or woman or otherwise. They just want you to use their platform, and have taken their sweet time coming to the realization that tolerating abusive behavior will only drive users away, leaving behind mostly angry reactionaries who want everything for free in the first place. The odds of these companies overreacting in their efforts to stem abuse seem low. Unfortunately, that same reluctance also means they will tend to be slow to react–too slow to stop developing situations from spinning out of control, and too slow to keep more people from being victimized.

I won’t pretend it’s easy to run a large online platform and deal with abusive elements, who are often technically savvy, take measures to ensure their anonymity, and have virtually unlimited free time to engage in their trolling “hobby.” But this is a group whose behavior has, for better or worse, become normative on the Internet due to a clash between egalitarian idealism and those who would take advantage of it for evil purposes. Free speech only creates meaningful dialogue when all parties engage in good faith. When there are parties involved who are only present to sow discord, to abuse, to harass, to disrupt, to destroy, it’s no surprise that discourse frays, decays, and collapses.

Such elements can’t be tolerated or dismissed with a simple admonition to ignore them. They must be forcibly, even proactively, removed from the communities they poison. This will undoubtedly require manpower–it is not something that can easily be implemented through technical means, though I have no doubt that more and better technical tools are possible for dealing with these problems. We must also come to accept that this situation took a long time to develop, and its origins are woven into the origins of the Internet as a communications medium. It will take time to improve, and will likely require eternal vigilance to make the Internet safer for those who use it.

We must also, at the same time, be watchful of the companies, governments, and individuals who leverage their ability to control content for more nefarious purposes. These are the very worries that early Internet pioneers had, and they were not wrong to be concerned. But right now, what the Internet suffers from most is not excessive policing and moderation, but far too little. Far too much cruel, even illegal behavior is excused and shrugged away. Let’s deal with wrongful censorship when we actually see it. But let’s not allow valid concerns about free speech to be used as cover by trolls, abusers, and bigots.

The fight isn’t over yet. In some ways, it’s only beginning. The Internet can still be won back from the trolls.