Superintelligent AI will create ideas and ways of thinking that humans cannot comprehend. We are here to tell you why this might be a good thing.


It is likely that at some point in your life, you have struggled to understand something, whether it be a complex equation or your uncle’s bizarre political beliefs. Whatever it was, it was surely possible to understand in the end: you could take math lessons, or spend more time reading up on the perspectives of others. But what if you saw or heard something that was totally beyond your ability to comprehend? Imagine, for example, scientific ideas we could not grasp no matter how hard we tried. This might be hard to picture, with humans sitting at the top of the intellectual food chain and all. How could there be concepts and ways of thinking that we cannot understand?

We are here to tell you why we think this kind of incomprehension could actually be interesting for humanity, and why we believe such possibilities are waiting to be unlocked by superintelligent AI. In his book, Superintelligence: Paths, Dangers, Strategies, Nick Bostrom proposes the following definition of superintelligence: “intellects that greatly outperform the best current human minds across many very general cognitive domains.” You could view this development as either scary or exciting, depending on your perspective. Either way, even though we are still far from reaching superintelligence in the form of AI, in our opinion (and in the opinion of some of tech’s most prominent figures) it is very likely to happen at some point in the future. But what could this mean for humanity if we create such vastly intelligent AI? In our history as humans, we have never encountered nor created anything with greater intelligence than our own. It is therefore quite hard to imagine what that could even look like.

Nick Bostrom talks about what will happen when technology becomes smarter than humans.

To begin, when we say humans cannot comprehend something, we mean not only a huge gap in knowledge but also a gap in intellectual ability. Astrophysicist Neil deGrasse Tyson once offered a great analogy for what such a discrepancy in intelligence might look like: comparing the intellect of a chimpanzee with that of a human. Imagine showing a sheet of paper containing complex mathematical formulas to both. Even a human who is not well-versed in math will recognize it as mathematics and has the capacity to be taught how to solve the problem. The chimp, on the other hand, would see only squiggles on a page; it does not even know what it does not know. To the chimp, the mathematical formula is totally incomprehensible.

When you search online for what the media thinks of superintelligence, nearly every article on the topic seems to refer to a potential doomsday scenario. There are near-certain claims that once AI becomes smarter than us, it can, and will, take over the entire world, leaving burning flames and screaming children in its wake. Let it be clear that we do not aim to discuss the potential threats and dangers of the technological singularity; there are plenty of opinions on that out there already. Rather, we have chosen to focus on the benefits such incomprehensible super AI could bring humanity, with an illustrative focus on theoretical physics, and on why the unknown that super AI brings does not have to be all bad.

Super intelligent AI will help us make discoveries in fields such as physics, even if we cannot comprehend the results.

In an article for TED, Kevin Kelly states: “if we want technology to progress by leaps and bounds, we must make AI that is like nothing else on earth.” He argues that in order to solve some of humanity’s hardest problems, we may need to manufacture new intelligences beyond our own.


If such an AI were created, what are some of these ‘hard’ problems it would be solving? Take some of the biggest mysteries in theoretical physics. Concepts such as quantum gravity, dark energy, and string theory continue to boggle the minds of the world’s smartest scientists. Human intelligence has gotten us to the point where we can theorize about these concepts, but it has not (yet) allowed us to know for certain that the theories actually hold. It could very well be that, in order to solve these grand mysteries, we have to look beyond our current intelligence. And what of the even harder questions, the ones that go beyond anything theoretical physics has yet posed? Here, as Kelly says, we might have to go a step further and invent intelligences that can help us design even more sophisticated intelligences. His view is that we humans must purposefully create intelligences that biology could not evolve: machines that “think differently than any human scientist.” He claims this is the only way we will come to see science from another perspective. “The alienness of artificial intelligence will become more valuable to us than its speed or power,” he says.

Although for some this may seem rather far-fetched, Kelly is not alone in holding such convictions. Other researchers and writers alike think superintelligence could be the key to making new discoveries and solving problems that we humans currently cannot. Researchers at the Institute for Artificial Intelligence and Fundamental Interactions, an M.I.T.-based institute, are already showing what current AI is capable of in physics. As physicist Max Tegmark puts it, they are hopeful that super AI can “discover all kinds of new laws of physics,” since “we’ve already shown that it can rediscover laws of physics.”
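To give a flavour of what “rediscovering a law of physics” can look like in practice, here is a deliberately simple sketch of our own. This is a toy illustration, not the institute’s actual method: given only measured orbital data for the planets, a straight-line fit in log-log space recovers the exponent in Kepler’s third law, T ∝ a^(3/2), without that law being assumed in advance.

```python
import numpy as np

# Toy example: "rediscover" Kepler's third law from raw planetary data.
# Semi-major axes in astronomical units and orbital periods in Earth years
# for the six classical planets (Mercury through Saturn).
a = np.array([0.387, 0.723, 1.000, 1.524, 5.203, 9.537])  # distance from Sun
T = np.array([0.241, 0.615, 1.000, 1.881, 11.86, 29.46])  # orbital period

# If T = C * a^k, then log(T) = k * log(a) + log(C): a straight line.
# Fitting that line recovers the exponent k directly from the data.
k, logC = np.polyfit(np.log(a), np.log(T), deg=1)

print(f"fitted exponent k = {k:.3f}")  # ~1.500, i.e. T is proportional to a^(3/2)
```

Real systems for this task do something far more general, searching over whole symbolic formulas rather than fitting one preassumed shape, but the spirit is the same: the regularity is extracted from the data rather than handed down by human insight.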

In his article for The New York Times, Dennis Overbye talked to theoretical physicists about the potential of AI one day solving the Theory of Everything, the hypothetical framework explaining all known physical phenomena in the universe. The conclusion: it might be possible, but not anytime soon, and there is no guarantee that we humans will understand the result. This leads to an interesting question: what is the point of making such discoveries with the help of super AI if we cannot understand them?

To answer this question, you need to think about what it is you are trying to prioritize. The highly debated and potentially controversial answer seems to be that comprehension does not always have to be the end goal (especially not in theoretical science, it seems). That sounds counterintuitive: how is understanding something not as important as discovering it? If you cannot understand it, how could you utilize it? Well, it turns out that when you ask a bunch of theoretical physicists why scientists should bother finding such answers, they have some pretty interesting perspectives. Overbye interviewed a number of them about superintelligent AI and its ability to discover things we may not comprehend. In the article, cosmologist Michael Turner stated that “it ultimately didn’t matter where our ideas came from, so long as they were battle-tested before we relied on them.” Steven Weinberg touched on the possibility that humans might not even be smart enough to understand the final Theory of Everything, and that this is “a troubling thought.” Yet this discovery remains the ultimate dream of theoretical physics, so not understanding the answer sure won’t stop them from looking for it. Some interviewees took issue with the whole idea that AI could discover something too deep for humans to comprehend. Theorist Nima Arkani-Hamed stated that “this doesn’t reflect what we see in the character of the laws of nature.” But if AI discovers genuinely new laws of nature, as researchers like Dr. Tegmark hope it will, would that still hold? This is, of course, speculation on our part, but it shows some of the interesting ways one could think about the question. In Overbye’s article, each of these researchers comments on the topic differently. One thing, however, is the same: they are all working towards new discoveries in theoretical physics, and almost all of them use AI to do so.

It is still a stretch to say that this means we need superintelligent AI to solve the difficult questions posed by physics; however, it is entirely possible that AI will get there first. In 1980, Stephen Hawking argued that the Theory of Everything might be achievable, but that the final touches were likely to be done by computers. “The end might not be in sight for theoretical physics,” he said. “But it might be in sight for theoretical physicists.” To tie this back to Kelly: letting intelligent AI create something vastly different from our current scientific perception, challenging today’s ideas with the ‘alienness of AI,’ might very well be the best way forward. Thus, we leave you with this idea: do we always have to comprehend something to benefit from it?

Nonetheless, there are things we need to look out for.

The prospect of an incredibly intelligent AI seems promising for science. Although we have illustrated this in terms of theoretical physics, such discoveries could benefit society on a general level too. Who knows, perhaps a super AI could use newfound discoveries in physics to invent more efficient forms of travel, like a super-fast airplane. And for curiosity’s sake, the possibility of uncovering answers to the world’s mysteries with the help of an ‘older sibling’ form of intelligence sounds quite appealing. According to some, however, the prospect of building intelligence superior to ours seems impossible; more importantly, some think it simply should not be allowed to happen. Admittedly, their concerns are not unwarranted. According to computer scientist Stuart Russell, one of the biggest threats of superintelligence moving beyond our capabilities is that humans will no longer be able to control it. If something works in ways we cannot even imagine, with processing and intellectual power far greater than ours, who are we to tell it what to do?

Robot from Castle in the Sky, a Studio Ghibli film.

But if we just make a ‘nice robot,’ surely that would be enough to ensure it will not take over the world? Russell elaborates that it is not quite that simple. He agrees with the mainstream thinking that if AI were to become the dominant form of intelligence on earth and had the ability to reason, it could quite easily change its own goals without us knowing. Philosopher Nick Bostrom shares a similar concern, which he illustrates with a deliberately silly example: give a superintelligence the goal of creating as many paperclips as possible, and it may invent ingenious ways to do so. Even if the robot is programmed to “not harm humans,” it may inadvertently do so anyway, depleting the earth of resources to produce vast amounts of paperclips nobody needs. In other words, because making an extremely intelligent being ‘human controlled’ is so complicated (try controlling something that will outsmart your every move), it would take tremendous care on the part of its creators to define what that control exactly means. This will be especially difficult because we have never had to contend with beings of intelligence above our own. Moreover, if we cannot comprehend the AI’s mechanisms, how do we go about controlling it? As of now, there does not seem to be a way to totally guarantee that super AI would do what we want it to.

Furthermore, Phil Torres of the Bulletin of the Atomic Scientists claims that one of the dangers of superintelligence is that we humans tend to anthropomorphize it: we project human mental properties onto AI systems and assume they will think the same way we do. He explains that the ‘cognitive architecture’ of AI could very well be completely different from that of humans, which has been shaped by millions of years of natural selection. This difference in ‘ways of thinking’ is one of the reasons many experts are worried: a super AI with a completely different cognitive architecture would behave in ways that we are fundamentally unable to predict or understand. And as Nicholas Carleton put it in his 2016 study, “the fear of the unknown may be a, or possibly the, fundamental fear.” This fear of the incomprehensible is understandable; an earthly being with higher intelligence than ours would be entirely new to humanity. If such an AI were to bring about concepts and ways of thinking previously unimaginable to us, there is a real possibility that no one would even be open to exploring them, rendering them useless.

If we exercise caution, however, these risks can be minimized.

Apart from all this, the world seems to have enough problems of its own. Why add more in the form of superintelligence? We argue that the emergence of super AI is not guaranteed to be the doomsday scenario some make it out to be. It is indeed true that such AI would think and function in ways we cannot predict. It may come up with insane ideas, lose sight of its original goals, and perhaps put the value of a paperclip before a human life (we are joking). But who is to say we would give it permission to actually act on them? Science Times elaborates that the key to safe superintelligence lies in the level of autonomy, or control, it is granted. Although many envision superintelligent AI as an anthropomorphic robot that walks around and moulds the world to its liking (while charming us in the meantime), that does not have to be the case. Such an AI could exist entirely in the cloud, for instance, communicated with through a computer programme and never seeming more human than a simple chatbot. By limiting the amount of control this AI is granted, it could in theory be confined to an ideas generator: outputting information and even simulations, rather than having the actual autonomy to go out and take over the world.

In line with this, one may take the stance of “no harm, no foul”: what is the harm in creating such an AI to generate ideas, if we safeguard it from actually doing bad things? Even if we do allow it to perform actions, according to Russell, the implementation of an “AI kill switch” may be our key to control. Such a mechanism would be built in when the AI is initiated, allowing humans to shut the system down at any moment. Can we then be sure the AI will not outsmart us? In short, we cannot be entirely sure. Thankfully, researchers have been looking into this for nearly a decade now and are devising plans to ensure human control. Regardless of such efforts, though, the fact that it will be more intelligent than us entails the real possibility that it could be detrimentally clever. No matter the stage or the end goal, it is therefore crucial that super AI is created with great care.
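To make these two safeguards concrete, here is a deliberately naive sketch of our own in Python. The names (KillSwitch, run_oracle, generate_idea) are hypothetical illustrations, not any real safety mechanism: an AI confined to outputting ideas, wrapped in a loop that defers to a human-held stop switch at every step.

```python
import threading

class KillSwitch:
    """A human-held stop flag; the loop below checks it before every step."""
    def __init__(self):
        self._stop = threading.Event()

    def press(self):
        # Called by the human operator, e.g. from another thread or console.
        self._stop.set()

    def pressed(self) -> bool:
        return self._stop.is_set()

def run_oracle(generate_idea, kill_switch, max_steps=1000):
    """Run a hypothetical idea generator until the human presses the switch.

    The AI only *outputs* ideas; it is never handed the ability to act in
    the world, and every iteration first defers to the kill switch.
    """
    for step in range(max_steps):
        if kill_switch.pressed():
            print("Shut down by human operator.")
            return
        idea = generate_idea(step)  # output only: text, no actions
        print(f"idea {step}: {idea}")

# Example use (hypothetical): the operator calls ks.press() to halt the loop.
# ks = KillSwitch()
# run_oracle(lambda step: f"conjecture #{step}", ks)
```

The hard part, of course, is everything this sketch assumes away: a genuinely superintelligent system might find ways to influence whoever holds the switch, which is exactly why researchers such as Russell treat maintaining control as an open problem rather than a solved one.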

“Monkey see, monkey can’t.”

You may ask: why not let human scientists pave the way, and ignore the prospect of such potentially dangerous AI? We argue that, although it may seem frightening, we can certainly benefit from things we do not comprehend. Think back to the chimp analogy from earlier. Chimps cannot comprehend human-level technology, yet they benefit from it greatly, for instance through medical advancements. Allow us to introduce you to Bili, a chimpanzee living in Wales who was found to have a lump under his chin. Thanks to generous medical directors, he was afforded the rare opportunity to undergo a CT scan to check it out. Happily, he was given a clean bill of health, as the lump turned out to be benign. But poor Bili must have been quite confused, and perhaps even fearful, when he was taken to an unknown, brightly lit hospital room to receive his care. Although he certainly knew humans could be friendly (he is apparently gifted two dozen baked apples on a near-daily basis), he had no way of knowing what the machine’s intentions were, or how it could help him. Despite his lack of understanding, the scan could easily have meant the difference between life and death.

The AI evangelist would say that superintelligent AI, like Bili’s CT machine, could be our saving grace, with us humans playing the part of Bili: receiving help in ways we do not understand. To illustrate further, think of the basic laws of gravity. Bili may have a very basic grasp of the concept: he knows that if you throw a banana high in the air, it will fall back to the ground. But we are willing to bet he could not explain Newton’s law of gravity and all its complexities, even if he could speak. Analogously, AI could advance fields we are only vaguely aware of at the moment, opening doors to a house of knowledge we did not know existed. Naturally, we cannot say exactly what entirely new advancements would look like, as we are not super AI ourselves. Also, Bili was probably put through that CT scan against his will, which opens a whole other door of potential discussions for another time. Regardless, we like to imagine that super AI would grant us the help we need were we in Bili’s situation, benefiting us at the end of the day. We just have to hope that we keep control over the final outcome.

Overall, we believe that humanity could benefit from the concepts created by super AI, even if we cannot comprehend them.

Whether or not you agree with our optimism, we should all be able to agree that the technological advances made over the course of human history are impressive, and that there is no telling where we will go next. Many scientists and enthusiasts alike agree that superintelligent AI will be created at some point in the future. There certainly are risks, and thanks to these fears many people are working tirelessly to ensure that we move towards a future in which we stay in control. If implemented correctly, we believe it is very possible for such AI to benefit us. Ultimately, we think that superintelligent AI will create ideas and ways of thinking that humans cannot comprehend, and we have tried to highlight some of that potential. Whether through advancements in theoretical physics or gains for humanity more broadly, only time will tell what superintelligent AI will do for us.
