Who will stand behind AI when we can no longer be sure of the intent behind the algorithms? Transparency, by definition, is the quality of being easy to perceive or detect. In the context of Artificial Intelligence it is usually discussed as explainable AI: humans being able to see how a model was built and tested, and therefore to infer the reasoning behind its decisions. Transparency also allows monitoring and updates to be made known; consequently, its absence has become one of the main obstacles to progressing AI. Solving this issue would be extremely beneficial, since it runs through most current AI issues, such as explainability, accountability, bias, and regulation.
Bias and Discrimination
To dive into this issue of modern-day technology, a general understanding of the term ‘Machine Learning’ is needed. Machine Learning (ML) is a field of Artificial Intelligence that focuses on the development of algorithms and statistical models that recognize patterns in data sets. Say a certain company aims to build a face recognition tool that can classify a face as male or female. To accomplish this, the company uses a large training data set of pictures of humans, each paired with an indication of the sex of the person in the picture. By feeding the training data set to the algorithm, the algorithm updates its ‘judgment’ by recognizing patterns in the data that correlate with the sex indication. A human projection of such a pattern could be that men have bigger noses than women. It is only a projection, because the actual patterns are gained mathematically from the data and are therefore often not easily expressed in words, or even understood by humans at all.
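The learning loop described above can be sketched in a few lines. Everything below is invented for illustration (the features, labels, and learning rate are arbitrary, and a real face-recognition model would learn from millions of pixels, not two hand-picked numbers), but the principle is the same: the model's ‘judgment’ is nothing more than a set of numeric weights nudged to fit the labeled examples.

```python
# Minimal sketch of supervised learning: a perceptron updating its
# "judgment" (weights) from labeled examples. Data is invented.

# Each example: (feature_1, feature_2) paired with a label (1 or 0).
training_data = [
    ((2.0, 1.0), 1),
    ((1.5, 2.0), 1),
    ((0.5, 3.0), 0),
    ((1.0, 2.5), 0),
]

weights = [0.0, 0.0]
bias = 0.0

def predict(x):
    # The model's entire "reasoning": a weighted sum and a threshold.
    score = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if score > 0 else 0

# Repeatedly nudge the weights toward examples the model gets wrong.
for _ in range(20):
    for x, label in training_data:
        error = label - predict(x)
        weights[0] += 0.1 * error * x[0]
        weights[1] += 0.1 * error * x[1]
        bias += 0.1 * error

print(weights)  # just numbers -- the learned "pattern" carries no
                # built-in human explanation
```

Note that the final weights are just numbers: nothing in them says *why* the model separates the two classes, which is exactly the gap the rest of this piece is concerned with.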
This leads us to the actual reason data bias and discrimination are such an issue in modern-day AI. In 2018, it became globally known that an AI tool Amazon used to review job applicants’ resumes had been discriminating against applicants. As a training data set, Amazon used previous applicants’ resumes paired with a ‘pass’ or ‘fail’ review given by human reviewers in the company’s HR department. This strategy resulted in the AI tool discriminating against women applying for technical jobs, because the human reviewers it learned from had tended to make the same mistake. The algorithm picked this up, updated its ‘judgment’ based on the biased data, and applied it for two years straight until someone noticed.
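A toy sketch of the mechanism (with invented resumes and labels, not Amazon's actual data or method): a model that merely counts which words co-occur with historical ‘pass’ and ‘fail’ reviews will absorb whatever bias those reviews contain.

```python
# Invented illustration of biased labels leaking into a model.
# Because past reviewers rejected resumes containing "women's",
# a simple word-scoring model learns a negative weight for that word.
from collections import Counter

past_reviews = [
    ("chess club captain software engineer", "pass"),
    ("software engineer open source", "pass"),
    ("women's chess club captain software engineer", "fail"),
    ("women's coding society software engineer", "fail"),
]

pass_counts, fail_counts = Counter(), Counter()
for text, label in past_reviews:
    (pass_counts if label == "pass" else fail_counts).update(text.split())

def word_weight(word):
    # Positive if the word appears more in "pass" resumes,
    # negative if it appears more in "fail" resumes.
    return pass_counts[word] - fail_counts[word]

print(word_weight("engineer"))  # neutral: appears on both sides
print(word_weight("women's"))   # negative: the historical bias, learned
```

Nobody programmed the penalty for "women's"; it falls straight out of the biased labels, which is the same failure mode reported in the Amazon case.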
Bias and Transparency
To provide insight into the bias and discrimination problem with respect to transparency, a reflection on the previously mentioned Amazon example proves useful. The reason this large-scale discrimination could happen is indirectly linked to what experts call ‘the black box of AI’, referring to the unknown factors (or patterns) on which a certain tool bases its decision-making. If this tool had been (or could have been) asked to give a reason for not granting a ‘pass’ review to a certain woman’s resume, it would have had no meaningful answer, and it would likely never have made it past the test phase. This implies that the current issue can be traced back to the central challenge of transparency, and could largely be solved by tackling transparency.
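One way researchers try to pry open the black box is to probe it from the outside: nudge each input slightly and watch how the output moves. The sketch below is a minimal, hypothetical version of that idea (the `black_box` function and its hidden weights are invented stand-ins for patterns we cannot inspect directly); real explainability tools elaborate on the same principle.

```python
# Crude transparency probe: perturb one input at a time and measure
# how much the opaque model's score changes.

def black_box(features):
    # Pretend we cannot see inside: some fixed, opaque scoring rule.
    hidden_weights = [0.8, -1.5, 0.1]
    return sum(w * f for w, f in zip(hidden_weights, features))

def sensitivity(features, delta=1.0):
    base = black_box(features)
    # Output change when each feature is nudged by delta, one at a time.
    return [black_box(features[:i] + [f + delta] + features[i + 1:]) - base
            for i, f in enumerate(features)]

print(sensitivity([1.0, 1.0, 1.0]))  # reveals which inputs drive the score
```

Even without opening the box, the probe shows that the second input dominates the decision, and negatively so; this is the kind of external evidence a transparency requirement could demand before a tool is deployed.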
Lack of Accountability
The accountability of Artificial Intelligence (AI) has become a pressing matter as its use and sophistication escalate. Despite AI’s capacity to revolutionize industries and enhance people’s lives, its proliferation also sparks ethical and legal debates about the responsibility for its actions and decisions. One of the paramount dilemmas is the ambiguity surrounding who is accountable for AI’s decisions. AI algorithms can be intricate and inscrutable, making it challenging to determine which parties are accountable for their development and implementation. Additionally, numerous AI systems are autonomous, making decisions without human intervention, which raises the questions of who should hold responsibility for these decisions and how to guarantee they are impartial and fair. Moreover, the consequences of AI accountability failures can be extensive and long-lasting. For instance, a criminal justice system’s AI that discriminates against specific demographics could result in wrongful arrests or convictions, with detrimental effects for years to come. Hence, ensuring accountability in AI is imperative.
The Problem and Transparency
Although transparency is relevant for understanding the way in which AI makes decisions, it also raises questions about roles in accountability. If an AI system produces biased or discriminatory results, we struggle to allocate responsibility for the problem: the fault may lie in the training data on which the decisions are based, which can reflect existing societal biases, or in algorithmic design choices made by developers. As AI becomes more pervasive and integrated into vital decision-making processes, it is imperative that we put in place a clear accountability process to ensure that these systems are used in an ethical, fair, and equitable manner.
Job Displacements and Economic Impacts
The use of AI and automation is causing worry about the future of work and the possibility of many jobs being replaced. AI has the ability to transform many industries, making them more productive and efficient, but it also has the potential to eliminate jobs and disrupt entire industries. Importantly, the jobs being substituted by AI-driven tools may not be confined to low-skill or low-wage work. As AI technologies progress, even high-skilled professions such as doctors, lawyers, and accountants are becoming more automated, potentially exposing the employment of these highly skilled workers to uncertainty.
Job Displacements and Transparency
The utilization of transparency in AI could be a viable partial solution to curb the growing concern about job loss caused by automation. With increased visibility into the workings and outcomes of AI systems, it becomes easier to discern the effects AI has on employment, including identifying which jobs are being replaced and which workers are impacted. Such information can then be harnessed to implement policies and programs that support those who have lost their jobs to AI-driven tools.
Privacy and Security Concerns
As the integration of AI into our lives increases, so does the amount of data stored about us. This raises security and privacy concerns, as our sensitive data becomes more exposed to risk. AI systems can collect and store information about our health, finances, and personal preferences, as happened through Facebook in the Cambridge Analytica scandal, and this information could be used for malicious purposes if it falls into the wrong hands. The privacy and security concerns are compounded by the fact that AI systems have the potential to reinforce preexisting biases and discriminatory practices. To reduce these threats, organizations must develop strict privacy and security rules that guarantee the protection of sensitive data and the moral use of AI systems.
Privacy and Security with Transparency
It is difficult to guarantee that the security and privacy of our data are being effectively maintained when there is a lack of transparency. For the sake of privacy and security, AI algorithms must acquire, store, and use data in a transparent manner. We can better identify possible threats and prevent others from abusing these technologies by making the inner workings of AI algorithms more transparent and understandable. Additionally, algorithms used in AI systems are not always transparent, which makes it hard to even become aware of and prevent privacy violations.
What do we do with this information?
The most effective ML architectures at the moment, artificial neural networks, are also the most opaque, making it challenging to understand how they reach their judgments and raising the question of whether this mission towards transparency is even feasible. Efforts have been undertaken to make these models easier to understand and explain, and it is anticipated that regulation and research initiatives in this area will continue to develop. But using AI responsibly and ethically will remain difficult until the transparency problem is resolved. Once it is, we will be able to use AI more effectively and understand the judgments made by these systems, which will promote wider adoption and useful applications of AI technology.