That AI will change everything is certain. Universal basic income may well become a necessity, because for most jobs humans will simply not be able to compete.
It is tempting, however, to think such technology will only conquer rudimentary fields of work that are easily automated. Disciplines like accounting naturally come to mind.
Surprisingly, even the most complex professions, like law, medicine, and research, are fair game.
For instance, there have already been many breakthroughs in medical diagnosis using Artificial Intelligence.
So much so that I think it’s fair to say the next great doctors will be AI algorithms, some of which you will simply download on your phone.
A new generation of technology is thus born, built on the idea of computers that can think like humans, but at the speed of light. Very scary indeed!
What is even more terrifying is the thought of machines taking over. This, in fact, has been the focus of many recent debates on Artificial Intelligence, about the so-called singularity.
I believe there is substantial reason to worry, although the fact remains that we are far from the singularity. For at least the next 20 to 30 years, machines will not catch up with human intelligence.
What about Moore’s Law? Isn’t AI, like any other current technology, subject to exponential improvement?
I hate to say it, but Moore’s Law no longer holds. It is evident that we are approaching a brick wall: the rate of technological progress is slowing down, and silicon computing is nearing its physical limits.
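To see why the exponential claim matters, here is a minimal sketch of what sustained doubling would imply. The function name, the starting figure, and the two-year doubling period are illustrative assumptions, not data from any specific chip:

```python
def projected_transistors(initial: int, years: int, doubling_period: int = 2) -> int:
    """Project a transistor count, assuming one doubling every doubling_period years."""
    return initial * 2 ** (years // doubling_period)

# A hypothetical chip with 1 billion transistors, projected 20 years out:
# ten doublings means a 1024x increase.
print(projected_transistors(1_000_000_000, 20))
```

That is the kind of growth Moore's Law promised, and it is exactly this curve that flattens once transistors stop shrinking.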
At some point, quantum effects described by the Heisenberg uncertainty principle will take over, and it will then be impossible to further improve current computing systems by shrinking transistors.
There is, hence, an urgent need to solve the next great computing problem. Whether the answer will be Quantum Computing is unclear; given its interference and noise issues, that seems very unlikely.
Whatever the case, the fact remains that Artificial Intelligence must be regulated at once, not just because of the singularity, but in light of the more imminent danger it presents: the era of fake.
“Fake news” is a phrase I have been hearing a lot lately. It might seem like no big deal, until AI is used to forge speeches and videos. This, in my opinion, is the most pressing danger of the current technology.
Just recently, I watched an interesting video of Barack Obama, shown in an MIT class on Deep Learning. Check it out: https://www.youtube.com/watch?v=l82PxsKHxYc
Interestingly, I could not tell at first that the video was fake. The algorithm not only produced a realistic rendering of Obama’s appearance and mannerisms, but also imitated the former president’s voice to a surprising degree of accuracy.
Thus, I came to the realization that with today’s AI, it is getting more and more difficult to distinguish truth from falsehood.
Imagine a world where Deep Learning algorithms are used to frame people for crimes they did not commit, to ruin the reputation of political opponents, or to crush business competitors.
Or imagine your worst nightmare: waking up one day to find indecent videos of yourself circulating on social networks, when you are certain it was not you, yet no one believes you.
Worse still, this technology equips fraudsters with state-of-the-art tools to scam even the least gullible.
Some might argue that in the end it does not matter; after all, we have long been able to create fake pictures with Photoshop, to very little damage.
I think this is different. It is on an entirely new level.
You can learn to tell photoshopped pictures from real ones. But with AI-faked video and speech, the line between truth and falsehood becomes blurry.
Thus we are left to ponder:
What should be done about this?
What is the best way to regulate the dangers of AI technology in the era of fake?