Google creates, then disbands its AI Ethics board in just one week

I think people generally agree that before we go too far down the rabbit hole that is Artificial Intelligence, we need some form of broader ethical oversight. After all, the ramifications of handing absolute power to our robot overlords are scary, and no one wants to wake up from the Matrix only to realise that it is too late.

Which is why Google recently decided to form an AI Ethics Board to provide direction on what should and should not be done in the machine learning space. Only that didn't last long, as Vox has reported that after just one week the board has already been disbanded.

According to the report, the disbandment is largely due to outcry over the board’s inclusion of Heritage Foundation president Kay Coles James, a noted conservative figure who has openly espoused anti-LGBTQ rhetoric and, through the Heritage Foundation, fought efforts to extend rights to transgender individuals and to combat climate change. People apparently felt that having someone on the board whose prejudice is well known could lead to bias in the algorithms used in machine learning and artificial intelligence.

The goal of the advisory board, called the Advanced Technology External Advisory Council (ATEAC), was ostensibly to inform Google’s AI work and to ensure it followed the company’s AI Principles, set out last year by CEO Sundar Pichai after revelations that the company was participating in a Pentagon drone project that made use of its machine learning research. The board included a number of prominent academics in fields ranging from AI and philosophy to psychology and robotics. The problem, though, is that it also included people with policy backgrounds, like James and members of former US presidential administrations. Those political affiliations essentially stripped the board of its neutrality, something Google eventually realised worked against its intended purpose.

There is definitely still a need for some sort of ethics board around AI, and it will be interesting to see where Google goes from here. Perhaps it should team up with several other tech companies and industry experts to form a more independent committee to tackle this globally, or at least across the US. That’s probably not feasible at the moment, but with growing concern over data privacy and the data that can be made available to machines for learning, there may be enough political will to make it happen in the not-too-distant future.

Last Updated: April 7, 2019
