What actually happens when you report abuse on Twitter? How do you deal with abuse on a network that is, first and foremost, free, and, secondly, not very protective in its policies?
Almost three weeks after abandoning Twitter because of the abusive messages she received following her father’s death, Zelda Williams returned. Mary Beard, academic and TV historian, fights online abuse by writing a job reference for a person who abused her. As usual, the media covered both stories only for as long as they were interesting, which is to say the hype didn’t last long. We lost interest in the subject of online trolling and abuse; we have already forgotten what happened, and that is precisely why I am writing about it.
Twitter faces many challenges when it comes to protection and safety, and perhaps befriending your abusers seems, in the most unusual way, a better method than following Twitter’s guidelines. But for whom? When someone like Mary Beard does it, it may be a solution, at least in the case of that particular university student, who should not suffer for “one moment of idiocy”. But can anyone else win their trolls round with a conversation or a job reference? Hardly.
A 2013 report revealed that 72.5% of people who are abused online are female. Female or not, that is a big number. Is abuse on Twitter becoming a mainstream form of communication? And what about the huge number of people who suffer the same abuse as public figures do, the abuse we never hear about?
What’s Twitter’s solution for that?
Twitter does have a system that lets users report abuse and even get offending accounts taken down (last year the company introduced a “Report abuse” button after Caroline Criado-Perez, the woman who led the campaign to put Jane Austen’s image on the 10-pound note, faced a flood of abuse online), but what seems to be stopping Twitter from actually protecting its users?
Twitter’s policy is somewhat unclear and indeterminate on the very definition of abusive behaviour online. Since it is a free network, let’s not hold the liberal use of “may not” in its policy against it. Still, although we do know that physical threats and posting from multiple accounts are considered abusive, the policy does not actually define abusive behaviour beyond such examples.
Twitter does, however, offer an in-depth explanation of when to report abuse, or rather, of how to recognize abusive behaviour. But you must admit there is something wrong with advice like this:
“Abusive users often lose interest once they realize that you will not respond.”
“If the user in question is a friend, try addressing the issue offline. If you have had a misunderstanding, it may be possible to clear the matter up face to face or with the help of a trusted individual.”
The process of reporting abuse is also very slow and very often ineffective, and Twitter discourages third-party reports:
“We are unable to respond to requests from uninvolved parties regarding those issues, to mitigate the likelihood of false or unauthorized reports. If you are not an authorized representative but you are in contact with the individual, encourage the individual to file a report through our forms.” And why is that?
A month ago, CNBC interviewed Twitter’s CEO Dick Costolo and called on users to submit questions. The questions mostly concerned safety, among them “Why is reporting spam easy, but reporting death and rape threats hard?” and “Why is rape not a violation of your ToS?”. CNBC reported that more than 28 percent of the submitted questions related to harassment and abuse. It all comes down to this: block, unfollow and leave Twitter to call the police.
So what can we do? It seems “the power of the people (read: users)” is a potential solution. Block Together is an app “intended to help cope with harassment and abusers on Twitter”, allowing users to share their lists of blocked accounts with friends and to auto-block new accounts that @-reply them.
Flaminga is another app that helps users create mute lists they can share with others, even allowing a user to mute recently created Twitter accounts (although Twitter also has a mute button), while the Block Bot identifies Twitter’s “anti-feminist obsessives”. These are great user-driven initiatives that could pave the way to a different approach to dealing with online harassment, and perhaps even to community moderation, without Twitter’s cooperation. Even so, the problem is still not addressed properly; these tools remain at the level of Twitter’s own solutions.
Twitter is not alone in this: Facebook and Google are frequently under fire for not responding properly to online abuse and harassment. But that is precisely the point: why are they incapable of responding properly, when these networks are supposedly built with users as their main focus? Maybe they should hire more people to address these social problems, as Danielle Citron, a law professor at the University of Maryland, proposed in connection with Facebook’s recent lawsuit.
Some, however, say that Twitter is the only platform with such intensely disruptive behaviour, given the very nature of the network. Mike Elgan, for instance, emphasizes the nature of conversations on Twitter (a structural problem, as he calls it) and the lack of tools to prevent and stop trolling, abuse and harassment.
At this point, all we can really do is write about it and remind people that online abuse and harassment are, unfortunately, common and omnipresent, and that writing and talking about them can bring heightened attention, if not to Twitter, then to its users.