The Issue
Justitia is the blindfolded goddess who guards the integrity of our human judgment. In myth, her Greek counterpart Themis was the daughter of the Sky and the Earth, and the mother of Justice, Peace, and Lawful Government. Courtrooms channel her symbolism to remind us that we should be judged by our actions, not our appearance. With Artificial Intelligence (AI) being introduced to our society as an infallible teller of truth, it is tempting to think these systems carry Justitia’s blindfold when passing judgment and making decisions. However, this is not the case. As we will see, these models have the power to undo the work of generations of women who have fought for societal fairness and equality since the first wave of feminism in the late nineteenth century. Factors such as the strong underrepresentation of women in the tech industry and in AI training data, as well as existing implicit societal biases absorbed into these systems, reinforce and can exacerbate the problems women deal with today. As it turns out, AI peeks through the blindfold: rather than encoding an objective view of our world, it encodes our very own way of seeing. To level the playing field and create the balance we value in society, it is therefore important to counterbalance these inequalities with positive discrimination towards women.
A Typical Woman
Female representation in our media often reflects a one-dimensional view of women, commonly showcasing them through the trope of the Manic Pixie Dream Girl: a quirky, whimsical girl with an odd sense of fashion and uncertainty about her identity and place in the world. Most of all, her primary role is to guide the male protagonist on his journey to self-development and adulthood, and then disappear from his life forever. Moreover, only half of the movies we see pass the Bechdel Test, which requires at least one scene in which two named female characters discuss something, anything, that isn’t related to a man (95.31% pass the reverse: a scene featuring named men who talk to each other about something besides a woman). Because this view of women in the passenger’s seat has been normalized, many probably haven’t noticed the stereotypical representation of women in AI, which is becoming yet another way of reinforcing unrealistic gender stereotypes. From the anthropomorphization of assistant systems with female voices and agreeable behaviors in response to harassment (such as Siri and Google Assistant), to the majority of deepfake content being pornographic images of nonconsenting women, to generative AI such as text-to-image models showing significant gender biases, these systems end up reflecting and reinforcing the historically favored and alarming view of women as submissive and inferior.
Female Shortage
This skewed representation of women extends beyond media to the professional arena, particularly in societally influential fields such as science, technology, engineering, and mathematics (STEM), and in leadership roles in both academic and corporate settings. Despite equal capabilities and leadership potential, career growth differs for men and women, driving female talent out of the fields in which they operate. With a lack of independent women in STEM fields and leading positions (e.g., politics and company leadership) to serve as role models, fewer young women are motivated to pursue careers that shape the policies and algorithms influencing societal views and structures at scale. Despite women being the target of most harmful deepfake content and being subjected to opportunity inequality in a workforce increasingly governed by algorithm-based selection processes, women hold less than 26% of the positions that dictate how AI models are developed, which metrics they use, and what they are used for. Ultimately, this takes a large share of control over women’s lives out of their own hands.
The Pitfalls of Biased Data
Skewed female representation leads to skewed training data and algorithms, and the repercussions on system outcomes are profound. Sexism in society has only recently begun to find a healthier equilibrium, allowing for a better sense of safety and equality for women. Yet when AI systems are trained on current data, they inadvertently encode and perpetuate societal biases. Technology does not magically provide us with the nuanced truth that we humans could not find on our own. Rather, it learns existing and historical societal patterns, and the resulting systems become embedded with our human biases. AI-generated decisions about what a lawyer might look like, who is considered qualified for a job, or how much someone has to pay for insurance therefore reflect these deep-seated biases. Aside from direct decision-making, Large Language Models (LLMs) help with tasks such as emotional text analysis, creative writing, semantic parsing, and conversational assistance. Consequently, AI adopting biased views can influence societal perceptions and the content we share. An example of how skewed training data and the underrepresentation of female algorithm developers can become life-threatening is medical research, where women are greatly overlooked and male participants generally serve as proxies for female patients. Models trained on this kind of data end up misdiagnosing, underdiagnosing, and prescribing the wrong medications to female patients, which results in women losing trust in the medical system.
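This encoding of bias is measurable. As a minimal sketch, one common technique projects word vectors onto a "gender direction" to reveal which occupations a model has come to associate with men or women. The four-dimensional vectors below are hypothetical, hand-made stand-ins for embeddings that a real system would learn from a large text corpus:

```python
import numpy as np

# Hypothetical toy "embeddings" for illustration only; real systems learn
# high-dimensional vectors from large text corpora (e.g. word2vec, GloVe).
vecs = {
    "he":     np.array([ 1.0, 0.1, 0.0, 0.2]),
    "she":    np.array([-1.0, 0.1, 0.0, 0.2]),
    "nurse":  np.array([-0.7, 0.5, 0.3, 0.1]),
    "doctor": np.array([ 0.6, 0.5, 0.3, 0.1]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A simple gender direction: the difference between "he" and "she".
gender_direction = vecs["he"] - vecs["she"]

for word in ("nurse", "doctor"):
    score = cosine(vecs[word], gender_direction)
    # positive score leans toward "he", negative toward "she"
    print(f"{word}: {score:+.2f}")
```

With these toy values, "nurse" scores negative (leans "she") and "doctor" scores positive (leans "he"): the association was never programmed in, only inherited from the patterns in the data, which is exactly how biased corpora produce biased systems.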
Why Positive Discrimination is the Key
Women are caught in a bias cycle: gender bias in society produces biased datasets for algorithmic training, which in turn drive decisions that reinforce those very societal biases. Positive discrimination is an active way to restore equal opportunities and empower women by directing attention and resources toward them.
The obvious way for positive discrimination to affect this bias cycle is to make a difference in the workforce, by either favoring women in the hiring process or encouraging more women to work in STEM fields and leading positions. Increasing female participation promotes workplace diversity, increases representation in leadership roles, and addresses gender disparities in politics. The more women in the industry, the more control they have over their own representation. This matters because gender stereotypes are most likely to be challenged when people actively create content on digital platforms, the primary way information is shared in today’s society. Furthermore, including more women in training data for typically male roles directly improves the inclusivity of AI decision-making.
Beyond the hiring process, something as simple as giving women the opportunity to learn new skills and develop their abilities further tackles gender disparities through educational empowerment. Virtual reality (VR), for example, can be used to replace the biased human feedback women receive in medical settings. This is just one example of how positive discrimination can provide opportunities to women without simply hiring them into the field.
But You May Ask…
Some people believe that positive discrimination is not the way to go, arguing that it violates the principle that individuals should be treated equally and implying that it is ineffective because it merely creates unequal opportunities, now directed at majority groups. While sound in theory, this argument fails to recognize that such discrimination is needed even to begin rectifying the effects of years of historical sexism.
Now, some critics toss around the term “meritocracy” like it’s the holy grail of fairness: candidates should be selected on ability and skill alone, and anything else is unfair to those more qualified and deserving. And yes, it is worth noting that people should not be hired based solely on their gender or race. But what people often fail to notice is that there may be more to a person than what they deem “valuable,” and that what counts as “valuable” is based on biased views to begin with. Thus, meritocracy arguably reproduces a biased way of thinking, with gender stereotypes shaping the judgment of which traits people believe are valuable. For example, when asked to self-evaluate, women tend to underrate their leadership skills while men tend to overrate theirs, relative to the evaluations of their co-workers. Skills like empathy, often undervalued and stereotypically associated with women, could potentially enhance company structures.
The Answer: Positive Discrimination
Positive discrimination in AI for women isn’t just about leveling the playing field; it’s an important step in breaking the bias cycle. The goal shouldn’t be gender uniformity, but ensuring that AI outputs avoid perpetuating bias. From skewed media representations to underrepresentation in AI development, the cumulative impact is large. Addressing these issues through training data and the gender ratio of engineers is crucial to avoid perpetuating oppressive dynamics, and to ensure AI output reflects scientific fact rather than social bias. Critics cite equality concerns or the need for meritocracy, but positive discrimination is vital to rectify historical sexism. It’s not about excluding majority groups but about transforming social norms. In a perfect world, we would have a reset button on gender bias; in today’s society, we need the active practice of positive discrimination towards women to make a real and positive difference.