The prevailing impression of artificial intelligence in the popular zeitgeist is one of wary resignation, if not outright fear or hatred. The media has painted a neutral-to-negative picture of AI’s impact on humans, whether for pure entertainment or as cautionary tales, with minimal representation of positive outcomes. This is understandable as a choice made in service of dramatic narratives, as the popularity and sales figures of movies, TV shows, and books with AI at their center attest. But it also feeds the public’s apprehension toward AI’s inevitable integration into society. This article will try to paint a fair picture of what already exists, what is to come, and what the presence of AI will mean for the evolution of humanity. AI is not merely another technological tool at our disposal, but a coevolving entity that needs teaching and guidance as much as we do. Which is to say, AI is the biggest ally our species has ever had.
A fair outlook toward AI
Many films, shows, and books use AI as an antagonist or a fiercely destructive force. For the sake of storytelling, that makes sense: AI is still a largely unexplored frontier, and one with the potential for devastating power. However, presenting AI as malicious, or as a tool more dangerous than helpful, can steer people away from real-life applications of the technology. Most depictions of AI also get a lot of technical details wrong, which leads the public toward misconceptions. For starters, most stories humanize AI as much as possible, giving it the appearance of human-level consciousness and self-awareness, and sometimes subjective feelings. In reality, AI probably wouldn’t gain self-awareness and maliciously start hunting down humans; the greater risk would come from its cold, calculated pursuit of its assigned task as efficiently as possible. It would only “want” to do what we programmed it to do, so its worst actions would merely be unpleasant side effects of trying too hard to achieve that goal.
“Artificial intelligence as technology isn’t morally good or bad – it just is.”
This hesitancy is useful in one sense: it motivates people to think more critically about the role AI could play in our lives. For example, OpenAI, an organization co-founded by Elon Musk, is working to develop AI responsibly and mitigate the ethical (or existential) risks that could accompany its emergence in our world. On a panel hosted by columnists of the website ReadWrite, Mishra said,
“Some believe machines will replace humans, and others believe machines will merely supplement humans. After thoughtful debate, we’ve concluded that for our services business it’s not a binary choice, but rather a conscious decision on where we want to be on the Man-Machine continuum.”– Mr. Mishra M, columnist at readwrite.com
This statement goes a long way toward clearing up the confusion about the position AI can take within society. There is a spectrum of possibilities in every facet, and the decision of where to place AI on the Man-Machine continuum is still ours to make. But where do we put it? That will be a gradual process of personal, societal, political, and scientific import. People from all walks of life will have to voice their suggestions and concerns about where they themselves will take charge, and where AI can be allowed to.
“The public … will be able to do more than identify problems. They can contribute solutions to problems and deliberate with other citizens to craft and refine those solutions.”– Beth Noveck, Director of New York University’s Governance Lab
But first, we need to appreciate what AI can bring to the table. It promises to improve what already exists, open innovative new arenas, and create ventures we have yet to imagine. Let’s look at some facets of society where AI could have the biggest impact.
AI in Healthcare and BioTech
AI is getting increasingly sophisticated at doing what humans do, but more efficiently, more quickly, and at a lower cost. The potential of both AI and robotics in healthcare is vast. Just as in our everyday lives, AI and robotics are increasingly part of the healthcare ecosystem.
AI is already being used to detect diseases, such as cancer, more accurately and at earlier stages. According to the American Cancer Society, a high proportion of mammograms yield false results, with as many as 1 in 2 healthy women being told at some point that they have cancer. The use of AI is enabling review and translation of mammograms 30 times faster with 99% accuracy, reducing the need for unnecessary biopsies. The proliferation of consumer wearables and other medical devices, combined with AI, is also being applied to monitor early-stage heart disease, enabling doctors and other caregivers to detect potentially life-threatening episodes at earlier, more treatable stages.
Decision Making and Diagnosis
IBM’s Watson for Health is helping healthcare organizations apply cognitive technology to unlock vast amounts of health data and power diagnosis. Watson can review and store far more medical information – every medical journal, symptom, and case study of treatment and response around the world – exponentially faster than any human.
Google’s DeepMind Health is working in partnership with clinicians, researchers and patients to solve real-world healthcare problems. The technology combines machine learning and systems neuroscience to build powerful general-purpose learning algorithms into neural networks that mimic the human brain.
Using pattern recognition to identify patients at risk of developing a condition – or seeing it deteriorate due to lifestyle, environmental, genomic, or other factors – is another area where AI is beginning to take hold in healthcare.
Research and Treatments
According to the California Biomedical Research Association, it takes an average of 12 years for a drug to travel from the research lab to the patient. Only five in 5,000 of the drugs that begin preclinical testing ever make it to human trials, and just one of those five is ever approved for human use. Furthermore, studies show it costs a company an average of US $359 million to take a new drug from the research lab to the patient.
Drug research and discovery is one of the more recent applications of AI in healthcare. By directing the latest advances in AI toward streamlining the drug discovery and drug repurposing processes, there is potential to significantly cut both the time to market for new drugs and their costs.
Beyond scanning health records to help providers identify chronically ill individuals at risk of an adverse episode, AI can help clinicians take a more comprehensive approach to disease management, better coordinate care plans, and help patients better manage and comply with their long-term treatment programs.

Robots have been used in medicine for more than 30 years. They range from simple laboratory robots to highly complex surgical robots that can either aid a human surgeon or execute operations by themselves. In addition to surgery, they are used in hospitals and labs for repetitive tasks, in rehabilitation and physical therapy, and in support of those with long-term conditions.
AI and Politics
There is a clear interconnection between artificial intelligence systems and the quality of our democracies. Good governance can make the best of technology for humans and the living environment; bad governance will exacerbate inequality and discrimination and challenge fundamental democratic values. If such potential issues are recognized, as they are now, AI can be taught to be egalitarian where people can be fallible, unconsciously biased, or even intentionally prejudiced against fellow humans. This can lay the foundation for an evolved civilization in which AI helps us look past human cognitive limits in governance. In a similar fashion, the socio-economic landscape can be leveled, providing the populace with equality of opportunity and thus working toward the true ideals of a democratic system.
“Limiting malicious actors will require newly designed technology, social structures and government policies.”– Ben Shneiderman, Founder of the Human-Computer Interaction lab at the University of Maryland
A few more promising applications of AI in politics:

- AI can detect corruption in the system almost instantly.
- With AI-enabled systems, every candidate can be analyzed on past work experience, record, behavior, and leadership skills, giving voters a clearer picture before they cast a ballot (a similar approach to AI-driven systems in digital marketing).
- AI can improve a system’s productivity while analyzing its loopholes.
- If implemented by independent organizations, AI could flag fake news and false agendas within minutes, which would be highly beneficial in the current climate, as one analysis has shown.
- AI has the potential to reduce the cost of any political campaign.
AI and Education
An automatic grading system for public school teachers is a small example with wide-ranging benefits: using artificial intelligence, such a system can learn from previous answers and get better as it goes. The well-known education app BYJU, for example, is leveraging AI to provide a better education interface.
Many universities and MOOC platforms are already using this technology to grade thousands of students, with only a fraction of the work needing human oversight. AI could free teachers to plan new ways of teaching for struggling students, create extra courses, read more, or simply spend more time on extracurricular activities.
Artificial intelligence is also being employed to personalize learning for each student. Through hyper-personalization enabled by machine learning, AI is used to build a customized learning profile for each individual student and to tailor their training materials, taking into account the student’s preferred mode of learning, ability, and experience.
All-Inclusive Education
Artificial intelligence tools and devices are helping make global classrooms accessible to all, irrespective of language or disability. These programs are all-inclusive. For instance, Presentation Translator is a free PowerPoint plug-in that generates real-time subtitles of what the teacher is saying. This also helps students absent due to illness, as well as students who need a different pace or level of learning, or who wish to study a subject unavailable at their own school. Barriers are being torn down like never before.
Adjusting learning to the specific needs of individual students has been a priority for educators for years, yet AI will enable a level of differentiation that is impractical for teachers handling 30 students in every class. Companies such as Content Technologies and Carnegie Learning are developing intelligent instruction design and digital platforms that use AI to offer learning, testing, and feedback to students from pre-K to college level, presenting them with challenges they are ready for, detecting knowledge gaps, and redirecting them to new topics when appropriate.
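The "detect a gap, redirect the student" loop these platforms rely on can be sketched in a few lines. This is a purely hypothetical illustration: the topics, scores, and the 0.8 mastery threshold are invented for the example and are not drawn from Content Technologies, Carnegie Learning, or any real product.

```python
# Minimal sketch of an adaptive tutor's topic selection.
# All data and the threshold below are hypothetical.

MASTERY_THRESHOLD = 0.8  # assumed cutoff for "mastered"

def next_topic(scores, curriculum):
    """Return the first topic the student has not yet mastered."""
    for topic in curriculum:
        if scores.get(topic, 0.0) < MASTERY_THRESHOLD:
            return topic   # knowledge gap detected: revisit this topic
    return None            # every topic mastered: ready to move on

curriculum = ["fractions", "decimals", "percentages"]
scores = {"fractions": 0.9, "decimals": 0.6}  # "percentages" not attempted yet

print(next_topic(scores, curriculum))  # decimals — the detected gap
```

A real system would estimate mastery from response patterns rather than a single score, but the core idea is the same: progression is gated per student, not per classroom.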
And there are many more arenas where AI can show incredible success within the education sector: smart content, voice-assisted teaching, AI-driven curriculum planning, and more. This is just the beginning of a plethora of avenues where AI and education could be fused together.
“Some people call this artificial intelligence, but the reality is this technology will enhance us. So, instead of artificial intelligence, I think we’ll augment our intelligence.”– Ginni Rometty, CEO & President, IBM
But if AI accomplishes so many valuable things, why do so many speak out against it?
What could possibly be wrong?
Like most human inventions, AI is a double-edged sword, and it has been blamed for some of today’s significant societal problems. Where do these problems come from? Innovations in statistical machine learning rely on large data sets, operate with some level of autonomy, and provide probabilistic rather than deterministic results. The data used to support AI is drawn from a large number of sources, including from people who may not even be aware that their data is being collected for a purpose (a practice regarded as data abuse). Combined with the lack of a well-defined policy to ensure the ethical use of AI systems and data, this instills fear of an uncertain future in many. How bad could it get? Let us probe the primary concerns regarding the future of AI.
Concern 1: Unprecedented socio-political and economic change
Wealth and power distribution: AI will change the way we organize society and the economy. In 2020, four tech companies crossed the $1 trillion market capitalization mark, dominating the S&P 500’s total market value at an unprecedented level. This growing concentration of wealth within private companies came with ownership of substantial computational processing capability and the accumulation of big data. It raises concerns about power and wealth residing in very few hands and widening the already existing gap between rich and poor. Furthermore, because of the power that comes with big data, there is apprehension about tech companies becoming more powerful than governments. Toby Walsh, a professor at UNSW Australia, describes the concern over wealth disparity as:
“a real fundamental problem our society [is] facing today, which is the increasing inequality and the fact that prosperity is not being shared around.”
Not to mention the fear emanating from personal data being used in “persuasive computing.” Using sophisticated manipulation technologies, agents can create needs and motivations and condition reflexes that influence behavior, ultimately provoking a desired action. Such usage jeopardizes pillars of democracy such as autonomous decision-making and individual privacy.
Future of employment: In the beginning, the automation of simple repetitive tasks involving low-level decision-making was widely considered the future of AI. However, thanks to adequate computing power and the collection of massive data sets, AI has swiftly advanced in sophistication. The ability to filter and analyze large volumes of data and learn over time has revolutionized various disciplines. As a result, the exponential growth in efficiency, productivity, and other economic advantages could cause massive job losses, mainly in low-income categories. This momentum of job loss could outpace the rate at which new jobs emerge once AI systems are widely deployed. Such displacement would heighten the existing economic gap, aggravating the concentration of power and wealth in a few hands and potentially leading to social upheaval. “If a significant percentage of the populace loses employment, that’s going to create severe problems; we need to be thinking about ways to cope with these issues, very seriously and soon,” said Dan Weld, a professor at the University of Washington.
Competence in decision-making and responsibility for those decisions: Different parts of our daily activities are captured digitally, data is collected, and the information is processed to make decisions. The variety and volume of the data, and the occasionally contradictory outputs of algorithmically driven analysis, are not sufficiently accounted for. Because of data biases, algorithms may produce inaccurate findings, since they are not built to deal with the dynamism and variety of their inputs. In addition, the scarcity or availability of data affects the outcomes of decisions. Decision-making systems are frequently devoid of the values and ethics needed to weigh the data they are given. Data quality, along with availability, significantly limits what designers and researchers of AI systems can do when attempting to assess or address the societal outcomes of algorithmic analysis. If a data set is incomplete or biased, it will produce a skewed result. Because of the quality and coverage of available data, negative social repercussions such as prejudice, unjust liability, or reinforcement of undesirable social situations can become inescapable. The data itself, not the analysis, algorithms, or system architecture, is often the deciding element.
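The point that biased data, not the algorithm, drives the outcome can be made concrete with a toy sketch. Everything here is hypothetical: the groups, the historical records, and the naive rate-based “model” are invented purely for illustration.

```python
# Toy illustration (hypothetical data): a model trained on biased
# historical decisions simply reproduces the bias it was shown.

historical = [
    # (group, approved) — group "A" was historically approved far more often
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def train_rate_model(records):
    """'Learn' each group's approval rate from past decisions."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [approved for g, approved in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def predict(rates, group, threshold=0.5):
    """Approve whenever the learned historical rate clears the threshold."""
    return rates[group] >= threshold

rates = train_rate_model(historical)   # {"A": 0.75, "B": 0.25}
print(predict(rates, "A"))  # True  — inherits the past favoritism
print(predict(rates, "B"))  # False — inherits the past disadvantage
```

Nothing in the algorithm is “prejudiced”; it faithfully summarizes its inputs. The skew lives entirely in the training records, which is why coverage and quality of data dominate the social outcome.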
Furthermore, AI’s ethical and legal implications are still being worked out; concerns about the degree to which the agents who build and utilize AI are responsible for, and aware of, unintended consequences, including the moral importance of their actions, remain unaddressed. Who should be held accountable to those who may demand and deserve an explanation for what is done to them in the event of casualties caused by the decision-making of AI systems?
Destruction: Cybercrime, autonomous weaponry, and weaponized data are all on the rise. Some believe that the rapid expansion of autonomous military applications, and the use of weaponized information, lies, and propaganda to dangerously destabilize human groups, will further erode existing sociopolitical systems and result in significant destruction. Others are concerned about how easily cybercriminals can gain access to financial systems.
Concern 2: Will the aims of Artificial General Intelligence (AGI) be the same as those of humanity?
It is unclear whether such a technology would be sentient. But conscious or not, an AGI could have goals incompatible with our own. There are suspicions that if AGI surpasses human intelligence, it could eliminate humanity, intentionally or unintentionally, due to divergences in aims and pursuits.
“What are the chances that such an entity would remain content to take direction from us? And how could we confidently predict the thoughts and actions of an autonomous agent that sees more deeply into the past, present, and future than we do?”– Sam Harris
How should we address these challenges and concerns?
Rather than criticizing AI systems and regarding them as bad actors, the best way forward is to address the deeply embedded societal concerns they reflect, so that the majority will benefit from these technologies. These benefits are not assured unless sufficient conditions are in place. The current issues are the outcome of poorly regulated AI that disregards socioeconomic and demographic implications. We must keep up with technological breakthroughs and debate the best approaches to developing AI constructively while limiting its negative aspects. Investing in interdisciplinary diversification of the field could help us better detect and address biases. Legal scholars, policymakers, government personnel, designers, and researchers directly involved in AI development and implementation should collaborate to understand its structure, implementations, and outcomes and advance toward more just and socially good uses of AI. AI systems that are adequately controlled and have the necessary safeguards can be a constructive force for equity.
People worldwide must find ways to arrive at common understandings – to join forces to develop widely recognized solutions to challenging issues and preserve control over complex, intelligent systems with an aim to serve humanity’s best interests. Law should be used to ensure social and ethical responsibilities by promoting transparency, accountability, and individual privacy.
There is a need to reorganize economic and political structures to improve human capacity and capability, strengthen human-AI collaboration, and halt trends that would endanger human relevance in the face of artificial intelligence, helping humanity keep pace with AI and keep unemployment under control. As AI becomes more widely used, its ramifications for employment and inequality should be addressed through a social safety net. Instead of stressing over the mass unemployment automation could bring, we can focus on ensuring that employees, especially low-wage workers, have the skills they need to compete in an automated society.
A universal basic income (UBI) has been proposed as one possible remedy for the job losses caused by automation. A UBI would give everyone a set amount of money each month regardless of their circumstances. Advocates believe it would not only help end poverty but would also benefit people whose jobs have been automated, allowing them to learn the skills required in a new career or industry without having to worry about food or rent. Everyone would have a safety net, leaving them free to make creative choices and live happier, healthier lives doing what they value most. Activists in European Union countries have already taken the lead in promoting this concept.
Lastly, possible divergences in value alignment once artificial general intelligence is achieved can be addressed by aligning AI with human goals, values, and behaviors from the earliest design stages of highly autonomous AI systems. Intentional value alignment must seek to prevent AI systems from inadvertently acting in ways unfavorable to human values. The alignment problem also serves as a reminder of how much more we need to learn before constructing AGI.
AI scientists and researchers are exploring a variety of approaches to overcome these obstacles and develop AI systems that assist humanity rather than harm it. Until then, we will have to tread carefully and be cautious about how much credence we give these technologies. AI systems must be able to respond appropriately to underlying human values and interests, and the well-being of future generations and the environment should be taken into account.
Regulation is the way forward
Regulation of artificial intelligence, its governance, and the associated legal arrangements are much needed. Legislation should account for the approaches that underpin the workings of AI, considering how to determine when a system is sufficiently fair to be deployed and whether fully automated decision-making should be allowed at all. While there are numerous current efforts to standardize, provide direction, and coordinate research and implementation, efforts toward more socially positive AI outcomes, and toward responsibility and accountability for implementation outcomes, are still wanting. Moral philosophers, sociologists, and other humanities scholars must address these issues, which demand a multidisciplinary approach. A primary priority must be enhanced cross-border human collaboration in the service of upholding humanity’s best interests in our affiliations with AI systems.