Those suffering abuse, threats, or generally unpleasant behaviour on Twitter – of which there is much – may welcome Zero Trollerance, an initiative that aims to tackle trolls by bombarding them with tweets containing helpful life tips and advice on how to be less angry and aggressive.
A bot scans Twitter for accounts regularly spewing sexist, racist, or otherwise offensive tweets and floods them with a “six-step” plan for relieving themselves of their anger and aggression, including links to self-help videos.
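Peng hasn’t published the bot’s internals, but in outline it could be as simple as the sketch below. Everything in it is an assumption for illustration: the term list, the threshold, and the fetch_recent_tweets and send_reply helpers that stand in for real Twitter API calls.

```python
# A hypothetical sketch of a Zero Trollerance-style bot, not Peng's code.
# fetch_recent_tweets() and send_reply() stand in for real Twitter API
# calls; the term list and threshold are illustrative placeholders.

OFFENSIVE_TERMS = {"<slur>", "<insult>"}  # a real list would be carefully curated

SIX_STEP_REPLIES = [
    "Step 1 of 6: notice your anger before you hit tweet. https://example.org/step1",
    "Step 2 of 6: ask yourself who your words will hurt. https://example.org/step2",
    # ... steps 3 to 6, each linking to a self-help video ...
]

def looks_offensive(text: str) -> bool:
    """Crude keyword match – far too blunt for real use, as argued below."""
    return bool(set(text.lower().split()) & OFFENSIVE_TERMS)

def run_bot(fetch_recent_tweets, send_reply):
    """fetch_recent_tweets() -> iterable of (user, text); send_reply(user, text)."""
    offence_counts = {}
    for user, text in fetch_recent_tweets():
        if looks_offensive(text):
            offence_counts[user] = offence_counts.get(user, 0) + 1
    for user, count in offence_counts.items():
        if count >= 3:  # "regularly" offensive, not a one-off outburst
            for step in SIX_STEP_REPLIES:
                send_reply(user, step)
```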
The project’s creators, Berlin-based collective Peng, declare that:
The gendered forms of harassment and violence on Twitter today point to a deeper problem in society that cannot be solved by technical solutions alone. Trolls need serious, practical help to overcome their sexism, deal with their anger issues and change their behaviour.
Zero Trollerance is tongue-in-cheek, but the problem of abuse online isn’t amusing. It’s no secret that Twitter is struggling – CEO Dick Costolo admitted as much when he said he was “frankly ashamed of how poorly we’ve dealt with this issue”. Indeed, while the site does perform some degree of moderation, the platform has generally embraced – perhaps a little too tightly – a Prime Directive-style approach of non-intervention. The hope is that worthy and interesting comments drown out any nasty, trollish voices lurking beneath. In reality, it takes very few drops of poison before the rivers of writing turn red.
Twitter’s steps in the right direction
Just last week, Twitter announced a series of measures to prevent abuse and more easily ban serial offenders. These widen the threat policy to include indirect threats and the promotion of violence. They also introduce account-locking procedures and temporary bans against those attacking individuals at a particular time – for example during an internet “pile-on”, where many strangers converge to harass a single user. Another measure is to automatically identify accounts and tweets thought to be abusive and prevent them from propagating through Twitter, limiting their reach and the harm they cause.
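Twitter hasn’t said how that automatic identification works, but the reach-limiting part might resemble the sketch below – the abuse_score classifier, the channel names and the threshold are assumptions, not Twitter’s actual pipeline.

```python
# A sketch of reach-limiting, not Twitter's real system. abuse_score is
# assumed to be some trained classifier returning a value in [0, 1]; the
# threshold and channel names are invented for illustration.

SUPPRESS_THRESHOLD = 0.8

def distribute(tweet, abuse_score, channels):
    """channels: dict mapping a channel name to a deliver(tweet) callable,
    e.g. "author_timeline", "followers", "notifications", "search"."""
    if abuse_score(tweet) >= SUPPRESS_THRESHOLD:
        # The tweet still exists on its author's own page, but it no
        # longer propagates into anyone else's notifications or searches.
        channels["author_timeline"](tweet)
        return
    for deliver in channels.values():
        deliver(tweet)
```

The notable design choice here is that a suppressed tweet isn’t deleted; it simply stops spreading beyond its author.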
These measures follow hot on the heels of Twitter’s quality filter, which aims to:
… remove all Tweets from your notifications timeline that contain threats, offensive or abusive language, duplicate content, or are sent from suspicious accounts.
While these measures seem sound, intended to bend the arc of the Twitterverse towards boosting signal and filtering out noise, the reality is that what they achieve may be quite different from their aims.
Threats
Most will have heard of Twitter rape threats, terrorism threats, murder threats, bomb threats, even revenge porn threats, and the various responses to them. However, of the many things I’ve discovered throughout my research into Twitter abuse, one is that identifying threats (or even trolls) is far more complex than it seems.
Just because someone tweets a threat doesn’t automatically make it credible. They may lack the means or the inclination to carry it out – your friend tweeting that she’s “going to kill you” for breaking her laptop isn’t really going to do so.
On the other hand, language does not need to be explicit to be menacing. Imagine that an anonymous account tweets you every day with a description of the clothes your child is wearing: there is no overt threat in such tweets, but it is obviously sinister. Even the words “we need to talk” are enough to fill many of us with dread.
The issue here is that sometimes we say what we don’t mean, and sometimes we mean what we don’t say. Language is subtle and complex enough to imply things that even a child can understand, yet which remain invisible to the most powerful computers and software on earth.
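To make that concrete, consider a toy keyword detector – the word list and example tweets are invented. It fails in both directions, flagging the joke and missing the menace:

```python
# A toy keyword-based threat detector; the list and tweets are invented.
THREAT_KEYWORDS = {"kill", "murder", "bomb"}

def naive_threat_check(text: str) -> bool:
    return any(word in THREAT_KEYWORDS for word in text.lower().split())

# False positive: a joke between friends trips the filter.
print(naive_threat_check("i'm going to kill you for breaking my laptop"))  # True

# False negative: no flagged word, yet plainly sinister in context.
print(naive_threat_check("nice red coat your daughter wore to school today"))  # False
```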
Offensive or abusive language
Filtering for offence is also not simple. We could choose to screen out racist, homophobic, and misogynistic terms, but in doing so, we are already imposing our moral judgements on others. Just like art, comedy, and beauty, offensiveness is largely in the eye of the beholder.
Some will consider “bloody hell” unacceptable, whereas others will be content to run the full gamut from mild cursing to the most offensive words in the English language. Who decides which words fall above and below the line?
To complicate matters further, expletives are used much more widely and in far more diverse ways than simply to convey insult. Filtering out the humble Anglo-Saxon “fuck” would certainly remove tweets where it is used to insult, but it would also remove those instances where it signals emphasis, humour, closeness, frustration, joy, and far more besides. In other words, profanity filters may sieve out some of the dregs, but measures like these can take some of the sparkle out of the conversational champagne too.
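The same toy-filter exercise makes the point – the word list and tweets below are invented, but any blanket profanity filter behaves this way, discarding the insult and the exuberance alike:

```python
# A toy profanity filter; the list and tweets are invented. It strips
# punctuation and matches words, but cannot tell insult from emphasis.

PROFANITY = {"fuck", "fucking"}

def passes_filter(text: str) -> bool:
    return not any(w.strip(".,!?") in PROFANITY for w in text.lower().split())

print(passes_filter("you're a fucking idiot"))                # False: the insult goes
print(passes_filter("fucking hell, what a brilliant goal!"))  # False: so does the joy
```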
The many need protection, not just the few
The quality filter is currently available only to verified users (generally celebrities or those with many followers) on Apple devices – a restriction that clearly excludes the vast majority of Twitter users. And in any case, verified users are not blameless when it comes to abuse. Even the affable Stephen Fry once unwittingly triggered a dogpile from his legions of followers when he responded to a critic. In other cases, famous Twitter users appear to have deployed their followers as pitchfork-wielding mobs.
Somewhat ironically, the greatest protection from abuse is currently offered only to those Twitter users who typically already have considerable power through their many followers – and often the means to pursue legal action. This is not to say that they are not also prime targets of abuse, but simply that no user should be subjected to any scale of abuse they are not equipped to deal with. Users should not feel compelled to befriend, beseech or threaten their attackers.
One final aspect that the quality filter cannot address is that by its very design, the Twitterverse is an oddly blinkered echo-chamber. Unlike some other social network formats, it’s possible for ten thousand users to each reply to a tweet without ever being aware that anyone else has responded. The result is that any one of us could, for one unwitting moment, end up part of a pitchfork-wielding mob that is burying someone else alive under an avalanche of online wrath.