We live in interesting times where technology is rapidly advancing, as is our desire to create artificial intelligence and give life to the future robot overlords that sci-fi movies have warned us about for all these years. It’s not just sci-fi writers who are worried, though. Google and Alphabet CEO Sundar Pichai also has concerns about the future use of machine learning and artificial intelligence, which is why he feels it should be regulated, as he revealed in an editorial for The Financial Times.
[T]here is no question in my mind that artificial intelligence needs to be regulated. It is too important not to. The only question is how to approach it. Companies such as ours cannot just build promising new technology and let market forces decide how it will be used. It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone.
I couldn’t agree more that clear guidelines and regulations need to be created around how the technology is developed, to ensure that AI does not head in directions it was never intended or desired to go. The challenge, though, is not just in having one or two governments regulate it, but in having the world come together to find a common way of both regulating the technology and enforcing those rules. All it takes is one company or country pushing AI beyond agreed limits for things to go horribly wrong.
I’m not talking about AI suddenly taking over humanity in some extreme scenario, but rather about warfare, where we increasingly expect these machines to make split-second decisions on our behalf that can lead to dire consequences. Not to mention the obvious data privacy concerns, as personal data is often exploited to allow these machines to learn. AI can bring a lot of positive development to our world, but left unregulated, it could lead to invasions of privacy or worse, large-scale warfare.
Last Updated: January 21, 2020