Any system that incorporates any form of artificial intelligence should have to pass several layers of legal and technical regulation

Artificial intelligence is becoming ever more pervasive and advanced. Many impressive new AI systems have been released recently, such as ChatGPT, DALL-E, and Prime Voice AI. All of these systems have one thing in common: they try to mimic human intelligence and, in turn, produce work that is as good as human creations, or even better. As it stands today, AI systems are being released to the public without their potential consequences being considered first. The focus seems to be on technological advancement, while ethical issues take a back seat. To combat this, new regulations should be put in place.

Negative consequences of AI deployment without regulation

So what can go wrong if we let artificial intelligence systems run unchecked? We can look at problems that are already occurring. Take, for instance, Prime Voice AI, a system that can copy any human voice and recreate it almost perfectly. It can easily be misused, and already has been, to make famous people and world leaders read out text they would never say, for example by making US president Biden announce an invasion of Russia. The Russian government is probably aware that such a clip is fake, but many people do not yet realize that voices can be faked so realistically. There are also groups who simply don't know any better, such as children: their parents' voices can be faked, and that can have dire consequences.

According to Daron Acemoğlu, a Professor of Applied Economics at the MIT Sloan School of Management, the rapid development of artificial intelligence (AI) over the last decade has produced a range of unprecedented social consequences. Despite promises of transforming economies and enhancing human capabilities, AI's current dominant paradigm of statistical pattern recognition over big data has led to negative impacts in several areas.

One of the main dangers lies in product markets and advertising, where leading firms' control over data and its use has resulted in price discrimination and the manipulation of consumer behavior. For example, companies can harvest information about customers to engage in price discrimination, damaging consumer welfare. They can also use their superior knowledge to manipulate consumer behavior, for instance by estimating "prime vulnerability moments" and advertising products that tend to be purchased impulsively during those moments.

Across many industries, the deployment of automation technologies, including AI, has contributed to rising inequality and the displacement of low- and middle-skill workers. The acceleration of AI since 2016 has had effects similar to those of other automation technologies and is likely to exacerbate inequality trends in advanced economies.

Existing and proposed regulations

Europe

Existing regulations usually concern data and data privacy rather than AI itself. However, there is an ongoing discussion about regulating AI directly. The European Union has proposed legislation on artificial intelligence, the so-called Artificial Intelligence Act. In this act, AI systems are categorized by risk level: unacceptable risk (banned outright, such as the social scoring used in China), high risk (such as systems that rank job candidates), and low or minimal risk (the rest, left unregulated).

There seem to be only upsides to regulating AI, but what about the downsides? One trade-off of regulation in the technology sector is stifled innovation. This does not have to be the case, however, as there are ways to mitigate it. The EU addresses this problem by introducing so-called regulatory sandboxes: testing environments created for a limited period in which novel technologies can be trialled according to a plan agreed with the competent authorities. These sandboxes will hopefully strike a balance between innovation and safety. As previously seen with the EU's GDPR, which was also adopted outside the EU, the same could happen with the proposed AI legislation. In countries like the USA, however, AI regulation faces fierce opposition.

The idea that artificial intelligence should be governed is not universally accepted. The main argument against regulation is that AI is fundamentally different from anything we have seen so far, so regulating it will face many technological challenges. It is not easy to control something you cannot see: in artificial intelligence everything happens behind the scenes, in software and data. AI is becoming so complex that even scientists no longer fully understand how it works. While this is true, tools (possibly AI-based themselves) will emerge to help make sense of this complexity. Another claim against AI regulation is that strict laws will deter AI providers, who might decide not to enter the EU market at all. If that is the case, however, then perhaps the EU does not want those providers in the first place.

United States

The United States has faced criticism for its approach to regulating AI. Despite efforts at the state, federal, and international levels, the U.S. has yet to develop a comprehensive, cohesive regulatory framework. There have been attempts in the past, such as the Algorithmic Accountability Act introduced in 2019, which aimed to hold companies accountable for their automated decision-making systems, but the bill did not pass. The California Consumer Privacy Act of 2018 granted consumers the right to request information about the personal data businesses collect about them and how it is used.

In 2023, new regulations are expected to come into effect, requiring companies to be more transparent about how they use or develop AI tools. This will include developing policies, creating governance and accountability structures, preparing to communicate about AI systems, and conducting risk assessments. However, some argue that these efforts are not enough and that a more comprehensive and coordinated approach to regulating AI is needed to ensure it is used responsibly and ethically.

Conclusion

The rapid development of artificial intelligence has produced a range of unprecedented social consequences, with the focus often on technological advancement while ethical issues are neglected.

While AI holds the potential to transform many industries and improve our lives, its unethical use can also lead to negative consequences such as the manipulation of consumer behavior, exacerbated inequality, the displacement of workers, and arguably even the destruction of human society. This is why it is so important to put laws and regulations for AI in place now, before it is too late.
