Many of us – especially within the AI community – have seen claims from famous figures such as Elon Musk or Stephen Hawking that AI is a major threat to humanity. Hearing this for the first time, one naturally becomes nervous and suspicious of recent AI developments, seeing how powerful autopilots, chatbots, weather forecasters, and other problem-solvers have become. But after diving into the real state of affairs, one realizes that the hype around AI is enormous and that most such claims are little more than speculation. As it stands, it is impossible to know whether AI will ever dominate the world, but it is clear that current AI is weak, narrow, and safe by itself, since it cannot function without human involvement. In this article, we will therefore discuss a different perspective from what has been presented throughout the course, arguing that AI is not as bad as the media portray it. We will touch on the concept of Artificial General Intelligence, or AGI, discuss in more detail why AGI may not be reached for a long time, and present arguments suggesting that, even if it is reached, AGI will not be dangerous to humanity.
We will start with AGI development – whether it is reachable and what its potential consequences are. Next, we will address the common concern that AI will replace jobs, before moving on to the use of our private data by AI systems and discussing how bad that really is. We will then conclude by summarizing our opinion concisely.
Part I: AGI
In the field of AI, the concept defined earlier as Artificial General Intelligence, or AGI, is also referred to in layman's terms as the "technological singularity". Several questions arise here: Is AGI really beneficial for humanity? What are the advantages of reaching AGI? What are the disadvantages? One point of view is that AGI would open up possibilities that are currently out of reach, as we will discuss a little later. Another common viewpoint, however, is that the advent of AGI would bring about a dystopian world in which AIs rise to independence after overcoming human limitations, as depicted in movies such as "Her" or "Ex Machina". Here, we will examine whether AGI would be beneficial to humanity, and also precisely what currently blocks its development.
AGI is technically defined as the point at which an artificial intelligence agent becomes able to do anything a human could do. This includes having a conversation, reading books, and even writing them. The Economist, however, also notes that to speak about AGI at the dawn of the 21st century is tantamount to talking about landing on the moon in Ancient Greece. Sheer lunacy. A child's dream. Although that may be so currently, companies such as DeepMind, OpenAI, FAIR, Google Brain, and others have taken steps to expand the capabilities of Machine Learning (ML) systems. In recent years, AI systems have beaten world professionals in chess, go, and poker, and in complex, partially observable game environments such as DotA2 and StarCraft II. The development team at Salesforce created a simulated world to design optimal tax policies using AI, and the same kind of AI is now being explored for healthcare applications. This is a fantastic glimpse of a future in which ML systems can help humanity in many complex fields by producing super-human results.
AI Economist, using AI to design an equal and productive tax system
That being said, although these AIs are all very skilled, they currently lack generalization and have trouble transferring what they have learned to other environments. Looking at the progress of ML systems in recent years, and even more so at the progress of Reinforcement Learning (RL) systems (used in all the aforementioned examples), it is safe to say that we are likely more than 15 years away from having relevant, generalizable AI systems.
Part II: AI and the job market
Of course, one cannot talk about the future of the world and of AI without mentioning less intelligent systems. Automation has already had an impact on society: statistics from MIT show that each robot introduced into a local labor market removes, on average, about six jobs. That is a valid, well-founded concern. The Future of Jobs Report 2020 estimates that by 2025, 85 million jobs may be displaced while 97 million new roles may open due to automation. Clearly, there is no precise picture of the future job market.
It is clear that monotonous, dirty, dangerous, and demeaning jobs will be replaced first. The obvious reason is that for these positions there is a clear business case: it becomes more efficient to invest in automation than to pay high compensation for an unattractive position. We believe that by progressively removing manual labor through "hard" automation, and hopefully removing more complex tasks through AI systems, humanity will progressively be freed to do more interesting things.
It should also be mentioned that disruptions such as strikes among factory workers and in other manual-labor jobs would be rendered moot by the advent of AI, if we assume that all manual labor is to be replaced by hard automation.
In the best-case scenario, concepts such as Universal Basic Income could be studied more thoroughly, and perhaps even applied afterwards. In the worst-case scenario, this will lead to a new industrial revolution, similar to the advent of the internet, which created jobs in online and technical support. Regardless, claiming that AI steals our jobs and proposing to freeze development in order to artificially preserve employment is nothing more than endorsing the degradation of society.
Through this part, we have seen how AGI might be reached and what it can potentially bring us. We have also discussed how AI systems can influence the job market, and what the two opposite ends of the spectrum look like regarding jobs and AI. Throughout its history, humanity has seen no shortage of technological revolutions, and although the transitions were never smooth sailing, change eventually happened for the betterment of society. As AI experts, we strongly believe that AGI, and AI progress in general, is just one more milestone for humanity, and not the doom-bringer that some media make it out to be.
Overall, even while eliminating some jobs, AI makes new, more creative professions more in demand and more accessible. More importantly, labour becomes safer and more efficient.
Part III: AI and private data
One may remember the Cambridge Analytica scandal, in which the harvested private data of millions of Facebook users was used to influence voters. It is an undoubted fact that big companies like Google and Facebook collect tons of our private data. Beyond marketing, individual profiles may be built from the collected data based on people's political views, preferences, characteristics, and activities. These profiles, in turn, are used to target people with news, political advertising, and other content meant to influence or alter their perspectives on specific candidates, political parties, or beliefs. For all these purposes, AI is used as a tool for profile analysis and targeting. However, AI does not collect and exploit the data on its own, as it has neither the ability nor a reason to do so. All decisions regarding data harvesting, circumvention of personal-data laws, and placement of aggressive political advertising lie on the shoulders of the companies that benefit tremendously from these actions.
Data leakage is frequently caused by the users themselves. Research reveals that more than a quarter of respondents who use smart devices do not read the terms and conditions before installing application software, and about 12.5% are not even aware such terms exist. This, too, may be counted as data leakage, because such terms and conditions often hide the most sensitive points behind language poorly understood by ordinary users. Again, data obtained by trickery may be used to train AI algorithms to exploit people. But in such cases, AI is just a tool, while all the decisions and violations of privacy rights are made by people.
Finally, sharing personal information on the web is still our own choice (setting aside cases of public surveillance). Nobody obliges us to publish our photos, locations, connections, wishes, and other information on the internet. Therefore, the next time you are about to say that AI is evil and uses your information against you, consider whether you provided that information yourself.
Data privacy is a very hot topic nowadays. Not only should big tech companies be audited and regulated, but users themselves should also be smart with their data. From an AI expert's position, we claim that AI does not use this data – especially not against us – unless people intentionally decide to do so. The strategy for using this data rests with the companies that collect and buy it.
Having covered AGI development, the impact of AI on the job market, and the role of AI in data-privacy scandals, we believe we have shown that AI itself does not act with evil intent. In most cases, its impact is exaggerated for commercial purposes, while from a scientific perspective nothing harmful or world-changing has been invented. We have to accept AI as a tool, without biasing our expectations. AI is not as simple as it was 50 years ago, but it is still far from being as clever as a human, let alone fully independent. AI may be used for good purposes such as medical diagnosis, entertainment, and science, as well as for advertising, political targeting, and finding people's weak points. Society has to understand that this is just another – albeit powerful – technology making our lives better. With that in mind, the general public, and especially the media, should stop hyping it. There is currently a non-negligible possibility that our over-expectation will lead to a new AI winter and deprive us of all of AI's achievements and advantages.