As much as we can look at the internet as one of the founding achievements of human innovation, it also shines a light on much of our depravity and has given many with criminal and evil intentions an unintended spotlight. One of the current scourges of the world is child sexual abuse, which has sadly found an audience among the sick, depraved people who frequent the internet. Where is Thanos when you need him to wipe half of us out?
It’s a problem tech companies are constantly fighting in an effort to rid the internet of this horrible content. Most tech solutions work by checking images and videos against a catalogue of previously identified abusive material (e.g. PhotoDNA, developed by Microsoft and deployed by companies like Facebook and Twitter). This kind of hash-matching software is an effective way to stop people sharing known, previously identified CSAM. The problem, however, is that it can only catch material that has already been identified as illegal. Anything not yet in the catalogue still requires the help of human moderators.
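To make the catalogue-lookup idea concrete, here is a minimal sketch of how that kind of matching works. PhotoDNA itself is proprietary and uses a robust perceptual hash that survives resizing and re-encoding; a plain cryptographic hash and a made-up catalogue entry stand in here purely to show the flow, so every name and value below is hypothetical.

```python
import hashlib

# Hypothetical catalogue of fingerprints of previously identified material.
# In practice this would be a vetted database maintained by organisations
# like NCMEC or the IWF; the entry below is an invented placeholder.
KNOWN_ILLEGAL_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(image_bytes: bytes) -> str:
    """Return a fingerprint for an image.

    Real systems use perceptual hashes so that near-duplicates still
    match; SHA-256 is used here only to keep the sketch self-contained.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_material(image_bytes: bytes) -> bool:
    """True if the image's fingerprint appears in the catalogue."""
    return fingerprint(image_bytes) in KNOWN_ILLEGAL_HASHES
```

Anything this lookup misses falls through to human review, which is exactly the gap the new classifier is meant to narrow.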
Now as you can imagine, this is a very reactive process: by the time moderators find the material, it has likely been downloaded and viewed by far too many depraved individuals. It is also particularly horrific for the moderators who have to be exposed to this obscene material. Thankfully, Google is working on a new AI tool designed to assist the moderation process by scanning content across the internet and bringing it to moderators’ attention much faster. While it doesn’t remove the trauma of human moderation, it does at least mean that moderators can get through far more relevant material (Google claims over 700% more) without spending time sifting through generic porn and other legal content.
Fred Langford, deputy CEO of the Internet Watch Foundation (IWF), spoke to The Verge about the new tools from Google and how they will help in the fight against this type of content: they will “help teams like our own deploy our limited resources much more effectively… A few years ago I would have said that sort of classifier was five, six years away. But now I think we’re only one or two years away from creating something that is fully automated in some cases.”
It’s still not an ideal situation for the people involved, but the idea behind the tool is that as moderators answer “yes” or “no” while viewing content, it will learn to distinguish these things for itself, and hopefully in future the process can be fully automated so that the content is banned as early as possible.
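The human-in-the-loop idea described above can be sketched roughly as follows: the classifier scores unreviewed content, moderators see the highest-risk items first, and their yes/no verdicts become training labels for the next pass. Google has not published how its tool works internally, so all names and the scoring here are assumptions for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    content_id: str
    score: float  # classifier's estimated probability of abusive content

@dataclass
class TriageQueue:
    # Accumulated (content_id, verdict) pairs from moderators,
    # to be fed back into classifier retraining.
    labelled: list = field(default_factory=list)

    def prioritise(self, items):
        # Surface the likeliest matches first, so moderator time is
        # spent on relevant material rather than sifting benign content.
        return sorted(items, key=lambda i: i.score, reverse=True)

    def record_verdict(self, item: Item, is_abusive: bool):
        # Each yes/no answer becomes a new training example.
        self.labelled.append((item.content_id, is_abusive))

queue = TriageQueue()
ordered = queue.prioritise([Item("a", 0.2), Item("b", 0.9)])
queue.record_verdict(ordered[0], True)  # moderator reviews the top item
```

The design point is simply that every moderator decision does double duty: it resolves one item now and improves the automated ranking later.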
While I think this is a great use of technology by Google, which has made the tool available for free, hearing stuff like this still makes me wonder whether the world might be a better place if we just switched the internet off entirely. I mean, I like the benefits, but no more children should have to suffer because of it. We might fear our future AI robotic overlords, but if they can wipe out this part of humanity, I for one welcome them.
Last Updated: September 4, 2018