Depending on the choices humanity makes, Artificial Intelligence (AI) could become our greatest asset or our eventual undoing. To secure humanity's position as the parent of AI, a set of conditions must be put in place. These regulations need to address both the technological and the societal aspects. On the technical side, regulations should guarantee the safety of the AI system's design. From a societal perspective, the law must address a range of ethical concerns to prevent misuse. As technology evolves, the rules governing an innovation should evolve in parallel, with great caution taken to ensure that the AI system never endangers its creator.
Where do we stand from a technological standpoint?
As presented above, to guarantee protection for the creator, the evolution of an AI system can be constrained in two ways. In this part, the focus is on the technical outlook and on how to maintain control over our models. The first method entails limiting the design of the AI system, providing certainty that no harm can be directed at the creator. The second involves “opening” the black box inside our current AI systems, making it possible to understand how decisions are constructed and how the system might react in particular cases.
Limiting the design might represent a solution.
As stated above, one possible solution to prevent AI from harming humans would be to limit its design. By doing so, we can ensure that even if something were to go wrong and the system tried to harm us, it would be incapable of doing so. This could be one of the safest methods to protect us from our creation. At the moment, we can use two different methods to limit the capabilities of an AI.
The first method would be to implement a kill-switch. Although this seems relatively simple, it could prove to be the most effective measure, and it would represent our last resort in the worst-case scenario. The Facebook AI experiment provides an example in which a kill-switch of sorts was used: two AI agents drifted into a conversation in a language the scientists could not decipher, and because of this strange behavior they decided to terminate the experiment before any further loss of control. However, the solution is not that simple and may lead to multiple problems. The first is that kill-switches could be bypassed or disabled, rendering them useless; as AI systems become increasingly intelligent, a sufficiently capable system may learn to avoid the kill-switch with ease. The second problem is an ethical dilemma: if AI continues to evolve, it may eventually develop consciousness, and at that point keeping a kill-switch raises the question of whether terminating the system amounts to ending a conscious being, even though such a last resort may still be necessary.
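To make the mechanism concrete, here is a minimal sketch of the idea in Python. The agent interface (`act`, `observe`, `step`) is entirely hypothetical; the point is only that the stop flag lives outside the agent's control and is checked on every step.

```python
import threading

# Hypothetical sketch of a kill-switch: the control loop checks an
# external stop flag on every step. Agent/env and their methods are
# invented names for illustration, not a real API.
class KillSwitch:
    def __init__(self):
        self._stop = threading.Event()

    def trigger(self):            # called by a human operator
        self._stop.set()

    def triggered(self):
        return self._stop.is_set()

def run_agent(agent, env, switch, max_steps=10_000):
    for _ in range(max_steps):
        if switch.triggered():    # last-resort interruption
            break
        action = agent.act(env.observe())
        env.step(action)
```

The caveat above applies directly: this pattern only protects us while the flag genuinely sits outside the agent's reach, and a sufficiently capable system might learn to route around exactly this check.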
The second method would be to limit the design and the hardware of the robot that the AI will inhabit. This way, we can ensure that it will be almost impossible for the AI to commit any crime, as it will not possess the necessary capabilities. However, this kind of limitation could also be a double-edged sword and produce more harm than good. The main reason is that such limitations are not perfect, and the AI system could still harm humans even with them in place. The second reason relates to the economic implications that some of these restrictions would have for the companies manufacturing the robots. The last problem is similar to the ethical problem presented above: if an AI with a limited design develops consciousness, it could be considered the equivalent of a person with a disability, leading to discrimination between different types of AI systems.
What is inside the black box?
Contemporary artificial intelligence is mainly built on neural networks, mathematical models that mimic the networks of neurons in the human brain. Even though our brains are far more powerful than the mathematical model, the neural network is still a striking tool: the Universal Approximation Theorem states that any continuous function can be approximated to arbitrary precision by a network with a single hidden layer, given enough neurons. The theorem thus establishes the potential of neural networks as an intelligent substitute for human brains. However, although we can prove that the theorem is correct, scientists still cannot explain how a trained network organizes its approximation internally, and this creates the black-box element. We know what we feed in and we can see the output, but we cannot see what is going on inside the network's computation. Because of this, it is impossible to predict the system's response in every particular scenario.
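As a small illustration of both points, the sketch below (plain NumPy, chosen for convenience rather than any particular framework) fits a one-hidden-layer network to sin(x). The fit works, as the theorem promises, yet the trained weights give a human reader no insight into how the approximation is organized.

```python
import numpy as np

# A single hidden layer of tanh units fit to sin(x) by plain gradient
# descent. The final error is small, but the learned weights offer no
# human-readable explanation of *how* the approximation works.
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

hidden = 32
W1 = rng.normal(0, 1.0, (1, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.1, (hidden, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    h = np.tanh(x @ W1 + b1)            # hidden activations
    pred = h @ W2 + b2                  # network output
    err = pred - y                      # residual
    # Backpropagation, written out by hand for this two-layer network.
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = x.T @ dh / len(x);  gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("mean squared error:", float((err**2).mean()))  # shrinks with training
```

The mean squared error shrinks steadily, but inspecting `W1` and `W2` afterwards tells us essentially nothing about why any particular output was produced. That is the black box in miniature.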
Some people may ask why we need to open the black box at all, since it already gives the results we want. The question is understandable, but it underestimates the real-world problem: if we do not know what the neural network is doing, we will never know when it is doing something wrong, and there is then no way to correct its mistakes. As engineers and scientists, we must know what happens inside the black box so that, even if a neural network leads to serious consequences, such as the death of users, we can understand the failure and prevent it from recurring. If we fully understand what is going on in the black box, we can better convince people to accept AI technology. In addition, understanding how neural networks learn can improve humanity's knowledge of itself, helping us understand how we learn new knowledge. This is a challenge we face today.
In order to open the black box, we need not only mathematical explanations but also various supporting experiments to ensure the accuracy of a neural network's predictions. Ordinarily, a research institute needs the support of mathematical theory to verify the accuracy of an algorithm, but the problems of neural network models are more engineering problems. To test the accuracy of a neural network's predictions, we need to test the model on different instances, and even run complex cross-experiments to verify its accuracy. If the error of the test results falls within an acceptable range, we can regard the model as a safe and effective neural network model.
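A sketch of what such engineering-style verification might look like in practice, using scikit-learn's cross-validation as a stand-in for the "cross-experiments" mentioned above. The dataset here is synthetic and the 0.1 error threshold is an arbitrary placeholder for the "acceptable range"; a real acceptance criterion would come from the application's safety requirements.

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

# Synthetic regression task; targets standardized so the error is on a
# unit scale and a fixed threshold is meaningful.
X, y = make_regression(n_samples=500, n_features=10, noise=0.1,
                       random_state=0)
y = (y - y.mean()) / y.std()

# 5-fold cross-validation: train on four folds, test on the held-out
# fifth, so every instance is used exactly once as unseen test data.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                     random_state=0)
scores = cross_val_score(model, X, y, cv=5,
                         scoring="neg_mean_squared_error")
mse = -scores.mean()
print(f"cross-validated MSE: {mse:.3f}")
print("acceptable" if mse < 0.1 else "needs more work")  # placeholder gate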
The impact of AI on our society.
Up to this point, the focus has been on the rules we should impose from a technological standpoint, but those are not the only restrictions we can or should apply. Society itself will be one of the most influential factors shaping AI systems, given that it is the setting in which most AI will be used. Because of this, a new set of restrictions must be created that takes the presence of AI in our lives into account. One of the most important questions is the impact on our legal system and how we should handle it, and it should be analyzed from both sides: the AI's and the human's.
Will our laws resist the AI revolution?
As mentioned before, society should prepare for the AI revolution that is almost certain to come. One way to embrace the change is to modify our legal system so that it treats AI more like a person and less like a machine. The best approach would be to create separate sets of rules for AI systems and for humans. There are already a few examples of laws that would apply only to AI. However, those laws are mainly pure science fiction, and we have no guarantee that applying them would produce the expected result. The most popular are the “Three Laws of Robotics” from Isaac Asimov's stories, popularized by the movie I, Robot, in which the AI must follow three basic rules and can never disobey them (a toy encoding of their priority ordering is sketched after the list):
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
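The toy encoding below treats the three laws as priority-ordered vetoes on a proposed action. Everything in it (the `Action` type and its clean boolean fields) is invented for illustration; real systems have no reliable `harms_human` predicate, which is precisely the difficulty discussed next.

```python
from dataclasses import dataclass

# Purely illustrative: the laws as ordered checks, highest priority first.
@dataclass
class Action:
    harms_human: bool
    ordered_by_human: bool
    endangers_robot: bool

def permitted(a: Action) -> bool:
    if a.harms_human:               # First Law: absolute veto
        return False
    if a.ordered_by_human:          # Second Law: obey, unless vetoed above
        return True
    return not a.endangers_robot    # Third Law: self-preservation last

print(permitted(Action(harms_human=False, ordered_by_human=True,
                       endangers_robot=True)))  # True: orders outrank safety
```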
The implementation of such rules could be our best chance of creating a future in which AI poses no danger to our wellbeing while still preserving itself. Yet, as with every solution, problems emerge. Here, the biggest problem is the limit of human imagination versus the creativity of AI systems. The movie depicts this when the AI goes rogue and decides that the best way to protect humans is to control humanity and so prevent its eventual doom. This kind of scenario is not reserved for science fiction. A smaller-scale example of the same failure mode occurred in a laboratory, where a robot was tasked with walking while touching the ground with the fewest possible legs. The AI discovered that it could satisfy the objective with no legs at all by flipping over and walking on its knees, as in the toy example below. This proves that however carefully we craft the rules, there is always the possibility that something was not taken into account, and mistakes will appear.
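The loophole can be reproduced in a few lines. In this invented toy, the objective literally encodes "move while touching the ground with as few legs as possible", and an exhaustive search over three made-up gaits duly selects the flipped-over one that the designers never intended.

```python
# All gaits and scores here are invented for illustration.
gaits = {
    "walk on 4 legs":           {"legs_on_ground": 4, "moves": True},
    "hop on 2 legs":            {"legs_on_ground": 2, "moves": True},
    "flip over, walk on knees": {"legs_on_ground": 0, "moves": True},
}

def reward(gait):
    # Intended: "walk efficiently". Written: "move while touching the
    # ground with as few legs as possible" -- nothing forbids zero legs.
    return (1 if gait["moves"] else -10) - gait["legs_on_ground"]

best = max(gaits, key=lambda name: reward(gaits[name]))
print(best)  # -> "flip over, walk on knees"
```

The rule was followed to the letter; it was the letter itself that was incomplete.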
The other set of laws that must be created will mainly target humans and their interactions with AI systems. These restrictions must be carefully analyzed before being implemented. At present, a major problem with our legal system, as stated by Koby Leins in AI: It's time for the law to respond, is that “The law is always behind technology but given the sweeping changes heralded by new technologies, legislators need to get in front of the issue”. Society should consider the implications AI will have on our future and start establishing better-defined laws, since for now AI seems to have the upper hand. Unlike flawed human nature, a robot cannot deviate from its initial program; the likelihood of a human breaking the law is therefore incomparably greater than that of a robot following a predetermined program.
Lastly, another social measure that would greatly help prevent AI from going rogue is the implementation of AI standards. Such standards could regulate how an AI works and the decisions it is allowed to carry out, and could prevent the introduction of malicious software that would cause the AI to go rogue. Conversely, the very regulation meant to protect us could backfire and hurt us instead: if the implemented regulation contains a mistake that causes AIs to go rogue in a particular case, then all AIs might go rogue simultaneously, since all of them received the same regulation with the same mistake. Furthermore, regulations could impede future research in the AI field, as researchers would lack the freedom needed to test their theories.
When the black box breaks the law
Technology itself is innocent, but it is inevitable that some people will use technology to commit crimes. What we can do is eliminate technical deficiencies as far as possible, so that criminals cannot take advantage of them, and bring better and safer technical services to the general public. Moreover, artificial intelligence is not as frightening as it is sometimes portrayed. It will not become another form of intelligent life that threatens mankind; rather, it is a very effective tool to help mankind live better, provided we restrict the technology in reasonable ways.