“The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” – Irving J. Good, 1965
Would it be possible for humans to build machines more intelligent than themselves?
We are convinced it is, and that it might happen in this century.
Artificial General Intelligence (AGI) was described by Ray Kurzweil in his book The Singularity is Near: When Humans Transcend Biology as the “creation of systems that carry out specific ‘intelligent’ behavior comparable to, or greater than, that of human beings”.
The realization of AGI may well be irreversible, because an AGI will be able to improve itself, potentially triggering an explosion of intelligence with an extraordinary impact on human society and the world at large. What will the world look like after AGI? Will a balanced harmony between humanity and AGI be possible? Or are utopian or dystopian scenarios more realistic? The goal alignment between humans and AGI will be central to the future it creates. This article explores the development of AGI, its possible implications and, most importantly, how to steer these developments in the right direction. Furthermore, we try to convince the reader of the urgency of the problems that AGI can create, and of the need to act now.
Current state of AI
Artificial Intelligence (AI) advances at a rapid pace, with recent breakthroughs coming mainly from the research field of deep learning. AI has shown incredible performance on tasks such as playing Go (AlphaGo), creating synthetic media and powering chatbots, and already outperforms humans on many tasks.
New models like OpenAI’s Codex can even generate code from natural-language instructions, and generative models have dreamed up music in various styles. Nevertheless, we are still far from an AI that is as smart as humans on every metric.
The image above shows different AI tasks as a mountain landscape (Life 3.0). The water level marks where AI is now; the higher up a task sits, the harder it is. Once AI is able to do AI design itself, AGI will be close.
To compare the current state of AI to human minds, we could ask: how does our brain compare to a computer?
“Hans Moravec estimated how much computing power the brain has by making an apples-to-apples comparison for a computation that both our brain and today’s computers can do efficiently: certain low-level image-processing tasks that a human retina performs in the back of the eyeball before sending its results to the brain via the optic nerve. He figured that replicating a retina’s computations on a conventional computer requires about a billion FLOPS and that the whole brain does about ten thousand times more computation than a retina (based on comparing volumes and numbers of neurons), so that the computational capacity of the brain is around 10^13 FLOPS—roughly the power of an optimized $1,000 computer in 2015” – Life 3.0
This is of course not an exact calculation, because some other computations might be done far more efficiently in the brain than in current computers.
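The back-of-the-envelope arithmetic behind Moravec’s estimate, using only the numbers quoted above, can be written out directly:

```python
# Moravec's estimate as quoted from Life 3.0 (rough, order-of-magnitude figures).
retina_flops = 1e9       # FLOPS to replicate a retina's computations
brain_vs_retina = 1e4    # the brain does ~10,000x more computation than a retina
brain_flops = retina_flops * brain_vs_retina

print(f"{brain_flops:.0e} FLOPS")  # 1e+13 FLOPS
```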
3 big questions of AGI
There are three big questions regarding AGI: first, will it happen; second, when will it happen; and third, what will happen after. If you are convinced, as we are, that the brain is the most sophisticated information-processing computer, containing nothing supernatural beyond the laws of physics, then it should be possible to create an AGI (leaving aside the discussion of consciousness, about which we know very little).
The second question is a hard one, and we can only estimate: “How hard is it to build an AGI system?” is an open question for anyone. Predictions are difficult because a single breakthrough could already lead to a huge advance toward AGI. As MIT research scientist Lex Fridman puts it: “Even if our current systems seem limited in for instance reasoning, the exponential potential growth of technology could mean that just around the corner is a singularity, a breakthrough idea that will change everything”.
In addition: “Every decade over the past century, our adoption of new technologies has gotten faster and faster. This means that the moment a new idea drops into the world, it can have widespread effects overnight,” as Lex Fridman describes using the chart below.
Scientists and AI experts differ on whether, and how far into the future, an AGI will be built. In 2019, 32 AI experts participated in a survey about the expected arrival of AGI. The results, shown below, indicate that a whopping 62% of the experts expect AGI to arrive this century! Personally, we are convinced that AGI this century is a realistic scenario.
The world after AGI
So what would the world look like after AGI is invented? There are many possible outcomes, depending on how the AI is developed. Let’s imagine three different worlds, based on the book Life 3.0 by Max Tegmark: a utopia, a world with roughly as much suffering as ours, and a dystopia.
In this scenario, humans peacefully coexist with technology and sometimes merge with it. Life on Earth and other planets is more diverse than ever. The invention of AGI caused a rapid explosion of technological developments. Climate change has been solved by advanced geo-engineering. Humans no longer have to work, but have endless options for leisure and self-development, and plenty of time to engage in their communities. Everyone can fully define their own purpose in life, without being dragged down by boring jobs. Advanced space travel has made the galaxy flourish with life. Machines, humans and cyborgs live together peacefully. Because goods can be produced so efficiently, the only thing with real value is land. And because most of the land is owned by humans, it is incredibly expensive for machines to buy property, which makes humans very rich, although not as rich as machines.
An AGI is created for the sole purpose of intervening as little as necessary to prevent the creation of another superintelligence. The AGI makes all kinds of inventions to improve itself, but does no innovation outside of that. Therefore, humanity does not benefit from its ability to solve all kinds of problems in areas such as health care, technology, and climate change. But the AGI makes sure that no dystopia is created by another AGI.
In this scenario, an AGI has taken over the world with the goal of maximizing happiness with minimal energy usage. The AGI has figured out that the best way to do this is to put people’s minds in virtual-reality worlds while their bodies are fed and continuously supplied with drugs that make them experience ultimate bliss. Because their bodies don’t have to move, this solution lies on the Pareto front: no alternative could produce more happiness without using more energy, or use less energy without reducing happiness.
How to steer AGI
“Is it gonna be better or worse to be human in 20 years? If we don’t double down on AI safety research it’s gonna be worse, but if we really work hard it’s gonna be awesome.” – Max Tegmark
Most governments lack a long-term strategy for steering AGI. Other institutions, such as universities, in our opinion also fail to contribute enough on this topic. There are many reasons for this, among them: the problem is hypothetical and lies somewhere in the future; institutions are simply not aware of AGI; concrete AI problems of today feel more urgent; and the naive thinking that the risks are overestimated or irrelevant. However, we are convinced that because of the huge consequences AGI will bring, the risks (in engineering terms, probability times impact) are so great that the topic should be placed next to the other most urgent issues, such as climate change and extreme poverty.
We think that in order to steer towards an AGI that creates the best possible world a few things need to happen.
First, there needs to be substantially more investment in AI safety research. AI safety researcher might be one of the most important professions of the 21st century, because an AGI with misaligned goals or bugs could be catastrophic for humanity. But safety research and engineering are also hugely important for current AI applications. “Whereas it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system or your power grid,” as the Future of Life Institute states.
Bugs in AI can be very unexpected. For instance, if you add a specific kind of noise (an “adversarial perturbation”) to an image and feed it into a CNN classifier, the network will confidently classify the image as a different category, as shown in the figure below. These CNNs can therefore be ‘hacked’ fairly easily.
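A minimal sketch of why such noise works, using a toy linear classifier as a stand-in for a CNN (all dimensions and magnitudes here are illustrative, not from a real attack). The trick, known as the fast gradient sign method, nudges every pixel by a tiny amount in the direction that increases the loss; because there are many pixels, the tiny nudges add up and flip the prediction:

```python
import numpy as np

# Toy linear "classifier": score = w . x, predict class 1 if the score > 0.
d = 10_000                            # pretend the input is a 100x100 "image"
rng = np.random.default_rng(0)
w = rng.choice([-1.0, 1.0], size=d)   # fixed model weights

x = 0.0005 * w                        # a clean input, confidently class 1
clean_score = w @ x                   # = 0.0005 * d = 5.0

# Fast gradient sign step: the gradient of the score w.r.t. x is just w,
# so move each pixel by epsilon against it.
eps = 0.001                           # imperceptible per-pixel change
x_adv = x - eps * np.sign(w)
adv_score = w @ x_adv                 # = 5.0 - eps * d = -5.0: prediction flips

print(clean_score, adv_score)
```

The per-pixel change (0.001) is tiny, yet summed over 10,000 pixels it swings the score by 10, which is the essence of why high-dimensional classifiers are vulnerable.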
Luckily, funding for AI safety research has been picking up in the last few years, mainly thanks to new institutions like the Future of Life Institute, which are backed by prominent donors like Elon Musk.
Second, investing in research on AI safety can increase the likelihood that AGI ends up in the right hands. Autocratic governments, like China’s, could use AGI to turn the world into a digital dictatorship. We believe that universal human rights should be the starting point rather than cultural relativism, and we might logically conclude that some countries or institutions, like the Netherlands, are better at providing these universal rights than countries like Saudi Arabia. It should be noted that companies like DeepMind are also in the race to AGI, with a significant chance of achieving it first. For the best possible outcome for humanity, an institution that values universal human rights above all else should develop AGI first.
Third, we must make sure that the goals of an AGI are in line with our goals. If they are not, life could become a horror, or humans could be eliminated for the simple fact that they are an obstacle to the AGI’s goals.
But how do we align goals? Human life could be seen as a game in which a very complex reward function is being optimized. Techniques such as inverse reinforcement learning have been proposed to infer the reward function of humans from their behaviour. This reward function could then be used to train and evaluate AGIs. Furthermore, we should make sure AIs are robust and fault tolerant.
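The core idea of inverse reinforcement learning can be sketched in a deliberately tiny example (the actions, demonstrations and the Boltzmann-rational choice model below are all hypothetical simplifications): observe an agent’s choices and ask which reward function makes those choices most likely.

```python
import numpy as np

actions = ["rest", "work"]
observed = ["work", "work", "rest", "work"]   # hypothetical human demonstrations

def likelihood(reward_work):
    # Boltzmann-rational model: the agent picks each action with probability
    # proportional to exp(reward); the reward of "rest" is fixed at 0.
    r = np.array([0.0, reward_work])
    p = np.exp(r) / np.exp(r).sum()
    return np.prod([p[actions.index(a)] for a in observed])

# Grid search over candidate reward values: the inferred reward is the one
# that makes the observed behaviour most likely (maximum likelihood).
candidates = np.linspace(-2, 2, 81)
best = candidates[np.argmax([likelihood(c) for c in candidates])]

print(best)   # positive: the demonstrations imply the agent values working
```

Real inverse-RL systems face far harder versions of this inference (sequential decisions, huge state spaces, irrational demonstrators), but the principle of working backwards from behaviour to reward is the same.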
As part of the goal alignment between humans and AGI, the goal that AI-generated wealth makes everyone better off should also be considered. After all, future AI developments have the potential to create vast amounts of wealth, and it would be a shame if all this wealth ended up in the hands of a lucky few. Fortunately, even with massive inequality between rich and poor, the poorest are likely to get richer. This trend can be observed in history: during the 20th century income inequality increased, but the living standard of the poor went up as well. As Action Against Hunger states: “The proportion of undernourished people in the world has declined from 15 percent in 2000-2004 to 8.9 percent in 2019.”
But that doesn’t mean we shouldn’t do anything to make the world more equal. We believe that AI will replace many jobs, and where new jobs do not appear, many people will become unemployed. Therefore, a new system must be invented to provide these people with wealth and a fair chance at a meaningful life. We see universal basic income, in which everyone gets an unconditional share, as the most promising idea for this. At the same time, we believe that capitalism remains the best engine for innovation and generates the most wealth, but that under it inequality is inevitable.
Finally, if we use capitalism as a means to generate AI innovation, our proposal is to assign value to AI safety and have governments or relevant institutions enforce that value, in the same way that we place a value on clean air: polluting common goods like clean air carries increasing costs, enforced by governments. The challenge is that AI safety is more abstract, so it would be valuable to establish concrete metrics for it. Apart from putting the topic more prominently on the political and scientific agenda, everyone should ask themselves how they want the future to look.
In conclusion, we have tried in this article to point out the still little-known challenge facing humanity in the field of AGI. On the one hand, AGI has the potential to solve most of humanity’s problems; at the same time, it is humanity’s greatest extinction threat. We think that AGI could arrive this century and that the consequences could be huge. We argue that action must be taken now to prevent the future of humanity from being put at stake: by investing heavily in AI safety research, by getting AGI into the right hands, by making sure the goals of humans and AI are aligned, and by envisioning long-term scenarios for the future.