The internet is a wonderful human creation that has made useful information more accessible to people than ever before. It also represents the worst of humanity: it has become a breeding ground for toxic conversation and hate speech, as people hide behind its supposed anonymity and the belief that it gives them a platform to bash other people.
And while this type of behaviour is prevalent in pretty much every corner of the internet, it’s perhaps at its worst on Twitter, where people have a tendency to use their 280-character limit to insult others rather than voice solutions for building a better world.
The behaviour of people on the internet has become a massive issue, and as much as we can point to it as an exercise in freedom of speech, the truth is that many people don’t actually understand what that means. Freedom of speech means that you have a right to your viewpoint, but so does everyone else. Belittling others because of their viewpoint or life choices is not part of that right. A perfect example of this came out of South Africa over the weekend.
It’s also something that Twitter is looking to address. The company posted yesterday that it is in the process of developing a new policy on dehumanizing language on Twitter, saying that “Language that makes someone less than human can have repercussions off the service, including normalizing serious violence”.
The post then goes on to define dehumanization and the groups it covers as follows:
Dehumanization: Language that treats others as less than human. Dehumanization can occur when others are denied of human qualities (animalistic dehumanization) or when others are denied of human nature (mechanistic dehumanization). Examples can include comparing groups to animals and viruses (animalistic), or reducing groups to their genitalia (mechanistic).
Identifiable group: Any group of people that can be distinguished by their shared characteristics such as their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, serious disease, occupation, political beliefs, location, or social practices.
Now, I’m not surprised Twitter has decided to address these issues, as Facebook has also started to discuss similar concerns while facing criticism for its own moderation rules. It will be interesting to see how Twitter polices this, though: while it’s easy to have bots flag potentially harmful words, it’s just as easy to take things out of context and make it difficult for certain people to express themselves. Considering how much of the conversation on a platform like Twitter covers pretty harmful topics, the policy is also likely to keep moderators unbelievably busy trying to sort through all the posts, and by the time offensive tweets are identified, chances are the damage has already been done.
I think it’s a step in the right direction for a company that has long been far too lax with its moderation policies, but it could also have an impact on the user base and see people potentially moving away from the service (is that really such a bad thing, though?).
However, it’s not a done deal just yet: the company is allowing for public comment to address user concerns, and may clarify some of its policies further based on the feedback. The comment period is open until October 9th, so if you have anything you want to say, be sure to get it in before then. Though, given the vociferousness of the average Twitter user, I’m sure voicing your opinion is something you’re fairly comfortable with.
Last Updated: September 26, 2018