What about it? In his book Life 3.0, Max Tegmark writes about AI in a very compelling way. It is a fantastic book about AI and what it will mean for our lives to come: how it will grow into our daily existence and how it will change our society in the coming years.
AI will be integrated into our ongoing technological evolution. It will either grow like a tree or blow up in our faces: from a primitive sort of AI into a very complex, integrated AI that helps and supports humans in their daily lives and prevents horrible mistakes like the ones we made in the past.
I think that if we look at how our brains are built, that is how we will structure AI: AI will be built like our brains. We can't build huge, complex AI systems from scratch without making the mistake of losing ourselves in uncontrollable complexity whose outcomes we can't predict. If we have the patience to build AI layer by layer, the way our brains have been built, we can keep track of the complexity of the AI to come.
Max Tegmark writes in Life 3.0 that he is worried about AI safety, and he has made a great effort to make thousands of people in the scientific world aware of it. This is of course a very good and important initiative, but I think it will stumble on our shortcomings as human beings: we can't live in peace, we lose ourselves in the struggle for power and money, and we have been killing our own species for thousands of years in wars, ethnic cleansing, and so on. Our problem is not AI safety; our problem is who we are as humans, the shortcomings in our own brains. We are incomplete. That is the main problem for AI safety in the years to come. We humans have proved, for as long as we have walked the face of this planet, that we can't do better.
We will build AIs or AGIs with the same faults we have as humans. We can't make gods, because we are just animals with some neocortex, which came from a strange path in our evolutionary past.
If we want AI systems to protect us from horrible mistakes like destroying ourselves through nuclear annihilation, we have to build AIs/AGIs fast, and now. As the political developments of this last year show, there will always be some strange human mutant who will no doubt want to use nuclear power to destroy another country, and himself along with it. And if AGIs or something like them exist, these human mutants will use their power to build in a red button to overrule those AGIs.
Who will then have the power, and who will have the control, to check and override that red button?
Maybe AIs and AGIs will not be the problem; we humans are the problem, and we will misuse AIs and AGIs just as we misuse weapons, power, and money: suppressing our fellow humans, exploiting them, and killing them.
We will never learn: not from history, and not from the mistakes that will certainly come in the near or far future.