Strong AI requires a strong vision: Why AGI needs to be open source

With the release of the new AI chatbot ‘ChatGPT’ by OpenAI, it became clear that something had changed. Not necessarily in the core technology behind the chatbot, but rather in the societal impact such an AI tool can have. The tool gives a broad public low-effort access to an advanced text generation system. The impact on society became very clear in education, where, due to the capabilities of the chatbot, teachers can no longer reliably distinguish between what is written by their students and what is not.

After the release of the chatbot, Microsoft decided to invest billions of dollars into OpenAI. With such an investment, it became clear that the battle for state-of-the-art AI systems was going to be a gold-rush-like brawl for power.

This may point to larger problems emerging: undemocratically chosen organizations with a large societal impact obtaining power over cutting-edge technology. We have seen this go wrong before, in the Cambridge Analytica scandal.

With ‘just’ a text model already having such a vast impact on society, one can only imagine the impact of more advanced types of AI.

Artificial General Intelligence (AGI)

AGI, or strong AI, refers to machine intelligence that is indistinguishable from human intelligence or, by other definitions, equal to or exceeding human capabilities. It is generally accepted that this also implies an AI system able to perform multiple tasks and to teach itself. This makes AGI a concept that is hard to measure, as some of the core ideas defining intelligence, and the philosophy behind them, are not well defined. One widely accepted idea is that strong AI would need to pass a Turing test to be called strong AI. Interestingly, it is already up for debate whether ChatGPT could pass a real Turing test. A common remark is that passing the Turing test is not the same as actual intelligence.

AGI could be a disruptive technology, possibly even posing an existential threat to humanity.

Considering how fast developments in AI are moving, it might be time to rethink how we shape the development of AGI. Do we really feel it is appropriate to put the power over these technologies in the hands of big tech? Or should we reconsider? It seems time to regulate AI and its usage, and to determine the direction we want these innovations of the future to take.

An issue when talking about the development of AGI is that it is hard to single out development towards more general intelligence as a separate category of AI development, since AGI itself does not exist yet. Where do we draw the line? Does ChatGPT help the development of AGI or not? And how would we go about creating AGI at all? Some argue that collaboration and the connection between different AI systems is the way forward. Deciding which kinds of AI applications count as contributing to the development of strong AI may then be up to policymakers.

One step forward in regulating AI and its usage is the so-called AI Act (AIA) proposed by the EU. The act proposes rules for trustworthy and safe AI for different categories of risk that the technology poses. However, the act has its limitations. For example, if not properly implemented, it could harm the open-source development of certain AI technologies, thus possibly empowering big tech.

The direction the EU is setting is a good starting point, but these issues do not stop at borders. It is necessary to think about broad international and intercontinental standards for safe and trustworthy development towards a possible future of AGI.

Therefore, we argue that all development of and towards AGI should be open source. 

Market Power

Studies show that the large technology corporations of this day and age hold an increasing share of total market capitalization. These growing resources form a foundation for large technology companies to invest in AI. Over the past ten years, the development of cutting-edge AI systems has shifted from academia to industry, as seen in the graph below.

[Figure: Affiliation of researchers building notable AI systems over time. Source: https://ourworldindata.org/grapher/affiliation-researchers-building-artificial-intelligence-systems-all?country=~Affiliation]

This becomes a problem when these companies misuse the significant societal impact and power attained through their platforms. As these platforms become almost too big to regulate, it is important to address how society can turn this position into a healthier market where competition can thrive. Since AGI development will almost certainly matter to big tech platforms in the future, democratizing development through open sourcing could open up opportunities and naturally decrease the market power of these companies.

Mandating these technologies to be open source also lowers the threshold for both businesses and individuals to expand on their innovative ideas. Compared to keeping the tools proprietary, the availability of AGI can provide substantial economic savings for small start-ups, savings that can be invested in exploring new ethical and technological areas of artificial intelligence. The prime example of an open source project in which public collaboration enhanced the product is the development of the Linux kernel.

Adversaries of open source regulation argue that it may discourage investment in AGI software development. However, they tend to overlook the fact that the pool of contributors to an open source project is vast, limited only by the number of developers across the globe. The attention and transparency that mandating these AGI technologies to be open source brings can, in turn, become a source of cross-industry innovation.

Of course, one could argue that this also has the potential to stifle innovation: if your investment in developing advanced technology creates opportunities for smaller newcomers to piggyback on it, you will think twice about whether it is a profitable idea. However, this could be circumvented by restricting the reuse of these technologies through situation-specific licensing. A cooling-down of innovation in this realm could also be a good thing, as it gives society time to slowly adapt to the idea of living together with this type of AI.

Policy and Society

The concepts of ‘trustworthiness’ and ‘transparency’ are of great importance in the development of the AIA. If all AGI were open source and completely accessible to policymakers, we could speak of a situation of information symmetry: compliance issues could be addressed easily and auditing would be less intensive. This in itself can lead to new policies being developed more effectively than when information is kept behind the curtains of large corporations. Such access to information will likely never be granted by the companies themselves, as sharing valuable information is not in line with their interest in generating profit and attaining power.

In summary, open sourcing can provide a self-reinforcing mechanism for the future of AGI policy.

Although it may feel far-fetched to radically mandate that all AGI be open source, the benefits for society seem to outweigh the disadvantages: less market power for big tech, more transparency for policymakers, and a possibly thriving, innovative market for creating AGI. It is up to policymakers how to implement this idea, but surely some things are too important to just let the market decide. Now is the time to consider whether AGI development falls in this category.
