INTRO
Recent developments in AI have seen subfields such as generative AI and deep learning explode in popularity, innovation, and investment. These advancements are happening at such a rapid pace that it is now difficult to imagine a field in which some form of AI cannot or will not be implemented. Generative models for images, video, and text lower the barrier to entry for people worldwide who are, for example, trying to start a creative career, while deep learning systems allow us to infer complex relations from mountains of data, for instance to train the cars of the future to drive fully autonomously. These systems have quickly become a vital part of our daily lives and, more importantly, of our economies. On the other hand, implementations that, intentionally or not, harm certain groups have begun to appear, and their number has risen in step with the overall trend of large-scale AI deployment. In this article, we argue that these downsides are a necessary evil we must live with in a time of mass globalization, in which these implementations of AI technology can drastically change the lives of people and the course of history. We believe that AI should not be regulated for the time being.
BACKGROUND ON REGULATIONS BEING PUT IN PLACE
It is fair to say that the last couple of years have seen their fair share of negative reports on AI. One example that comes to mind is COMPAS, a judicial tool used in the US to predict the risk of recidivism when sentencing offenders. This system systematically discriminated against people of African American background because ethnicity was among the features used for training. Even after this feature was removed, the system managed to “relearn” its bias by discriminating based on an offender’s address. Meanwhile, military applications of AI have raised questions about handing the ability to take lives to an autonomous system with (potentially) no human in the loop. These events, among many others touching on copyright infringement, job loss, and broader social issues, have led the US and the EU to consider legislation for AI systems that could severely hamper innovation and advancement. One example of such regulation is the new AI Act, the first of its kind in the world, which will apply at the EU level.
REGULATIONS HAMPER INNOVATION
We have identified various problems with regulating AI, starting with the hampering of innovation: a “virtual arms race” was conceptually started at the moment of AI’s inception. Its near-limitless use cases mean that governments and other systems of power will always look for ways to implement AI in order to maintain and extend power. In this sense, AI can be seen as the natural evolution of technology used to defend oneself (or attack others), conceptually no different from the firearm, the warplane, or the atomic bomb. By restricting our own innovation, we allow others to surpass us, which means we automatically lose the race. The opposite leads, in the worst case, to a stalemate and, in the best case, to a powerful safeguard. Implementing the European AI Act could mean that Europe is surpassed by both the United States and Asia (particularly China), which would affect the geopolitical landscape negatively: two superpowers on opposite sides, a configuration we already saw during the Cold War, with Europe having no influence at all.
Those who disagree with our claim, the people who believe AI should be regulated, would insist that leaving the AI weapons industry unregulated will inevitably lead to significant security risks. We will not argue against this claim, but rather consider it from another point of view. If we in Europe decide not to invest in the AI weapons industry, we will appear weak in the eyes of our opponents, which would put us in greater danger. Developing AI weapons does not necessarily mean we will use them; more likely, they will act as a deterrent.
LOSS OF BUSINESS
Moving away from the doom and gloom, AI has revitalized the tech sector and transformed historically significant places such as Silicon Valley in a metaphorical “second coming” of technology. Regulations can and will severely hamper these companies’ ability to innovate, thereby hurting their bottom line. The harsh reality is that the bottom line drives these companies, and they will not hesitate to move to locations with fewer or no restrictions. Ironically, this can lead to more job loss and to a loss of knowledge and technology.
An oft-heard argument in support of regulating AI is that AI will take over jobs, which would lead to widespread public unrest. This argument is, however, short-sighted. In the short term jobs will likely disappear, but in the long term numerous jobs will be created because of the use of AI. The creation of new jobs is not the only benefit, since, let’s face it, some jobs are better suited to AI. An everyday example is the smart car, in which the chance that the driver makes a mistake is reduced considerably. More specifically related to employment, AI-driven robot surgeons can outperform human surgeons in many tasks, ensuring higher success rates in surgery and thus fewer deaths due to surgical error. Robot surgeons are just one of many examples where AI outperforms humans and offers a better, safer option. We need to accept that AI systems can outperform us at far lower error rates, and put aside our distrust and biases accordingly.
The biggest pitfall of regulating AI is that in almost all cases it concerns ex-post enforcement: regulation reacts to what has already happened. For AI this is a serious problem, because once an innovative technology exists and benefits certain companies, it becomes difficult to prohibit. Moreover, AI is constantly evolving and taking on new shapes, and would require regulation that can reshape itself accordingly. No such adaptive regulation exists yet, and the ex-post regulations we have now would therefore only hamper the development of AI.
CONCLUSION
Fear of the unknown is one of the biggest obstacles to progress. We are living in an age of unprecedented opportunity thanks to new advancements in AI. Instead of embracing these opportunities, we are rapidly moving towards more regulation, the new AI Act in the European Union being one example. We believe that AI should not be regulated for the time being. AI is progressing at an exponential pace, with constant innovations and advancements. Regulating too soon will have an immensely negative impact on the development of AI and on the market. We should instead wait until the growth of AI starts to stagnate; only then will we know what we are dealing with. It is easier to regulate the known than the unknown.
In this article we have tried to show how regulating AI at this point would do more harm than good. The AI Act in the European Union is the first regulation dealing specifically with AI. It could mean that Europe is surpassed by other countries, which in turn could lead to a radically different geopolitical landscape. We have also argued that AI will not lead to more job loss, an argument often used by proponents of AI regulation, but rather to more job opportunities. Having AI take over certain jobs could also reduce human fatalities. In the pursuit of a future defined by unleashed intelligence, let us not be constrained by the shadows of uncertainty, but rather step boldly into the boundless potential that AI holds for the betterment of humanity.