American University of Beirut

Artificial Intelligence and Gender Bias: What is Happening in the Background?

By Reeda Al Saintbai

Let us ask Google to translate “She is the president. He is cooking” into Persian, a gender-neutral language. Now, let us translate it back from Persian to English. It reads “He is the president. She is cooking.” Notice anything different? How did the two pronouns get swapped? Google uses Artificial Intelligence (AI) to translate. AI is a powerful tool, yet it can become biased and even discriminatory if not handled properly. Gender bias is one form of this inequity, defined as prejudiced actions or thoughts against individuals based on their gender. The introductory example might not sound like a big deal, but when AI is used in important decision-making processes such as recruitment, medical diagnosis, and financial decisions, unbiased outcomes become vital. This article discusses the reasons behind this gender bias and then offers several solutions as a step towards an unbiased, gender-neutral Artificial Intelligence.

But first, what is Artificial Intelligence? Briefly, AI is the 'intelligence' machines acquire when trained to think like humans, in the hope that the machine will perform human tasks intelligently and efficiently. This training process is called Machine Learning (ML), in which an AI model (algorithm) trains on input data until it can generalize and make decisions on samples outside the data's scope. A simple example to demonstrate how AI works is a classification problem. Say we want an AI model to take in a picture and classify it as containing cats or dogs. For the data, we need pictures containing cats, labeled 'cats', and others containing dogs, labeled 'dogs'. For the training, an AI algorithm is used to teach the model characteristics that help it distinguish cats from dogs in images with various lighting, poses, distractions, etc. Remember, the model is learning, not memorizing, so we teach it how to think and differentiate between the samples, just like teaching a young child. After training, the model is tested for its cat-dog classifying ability by feeding it new images it has not trained on and checking its output decision. AI is used in several day-to-day applications, like voice and face recognition, emotion analysis, advertisement generation, chatbots and virtual assistants, self-driving cars, trading and investment, fake news detection, and the list goes on. Since humans depend so heavily on AI, it should not involve any models that discriminate based on gender, sex, race, ethnicity, or any other attribute.
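The train-then-test loop above can be sketched in a few lines of code. This is a toy illustration only, not a real vision model: it uses a nearest-centroid rule on invented two-dimensional "image features" (all numbers are made up for demonstration), but the shape of the process — learn from labeled data, then decide on an unseen sample — is the same.

```python
def centroid(points):
    """Average of a list of (x, y) feature vectors."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def classify(sample, centroids):
    """Assign the label whose centroid lies closest to the sample."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Training data: labeled feature vectors (values invented for illustration).
train = {
    "cat": [(1.0, 2.0), (1.2, 1.8), (0.9, 2.2)],
    "dog": [(3.0, 0.5), (2.8, 0.7), (3.2, 0.4)],
}
centroids = {label: centroid(pts) for label, pts in train.items()}

# Testing: a new sample the model has never seen.
print(classify((1.1, 2.1), centroids))  # prints "cat"
```

The model never saw the test point during training; it generalizes from the pattern it learned, which is exactly why the quality of the training data matters so much.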

As mentioned before, AI's source of learning is its data. So, if the data is biased, AI will unsurprisingly be biased. Corporations tend to use large historical datasets, for instance employee records spanning the past hundred years. However, historical data is not representative of our ever-changing society, since it may contain biased records, as the following real-life example shows. Genevieve Smith, author of The Playbook on Mitigating Bias in AI, recounts an incident involving her and her husband. They both had the same financial standing when applying for a credit card, but surprisingly, her husband was given a credit limit double hers. It turned out the AI model making the decision was trained on historical data: in the past, a person's marital status and gender were used to set the card's limit, so women's historically lower assigned creditworthiness resurfaced as present-day bias.
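A toy sketch makes the mechanism concrete. Here, a stand-in "model" sets credit limits by averaging historical limits of similar applicants; all records and numbers are invented for illustration. Because gender is baked into the historical records, two applicants with identical finances inherit the historical gap:

```python
# Invented historical records: identical incomes, gendered limits.
historical = [
    {"income": 80, "gender": "M", "limit": 20000},
    {"income": 80, "gender": "M", "limit": 22000},
    {"income": 80, "gender": "F", "limit": 10000},
    {"income": 80, "gender": "F", "limit": 11000},
]

def predicted_limit(income, gender):
    """Average the limits of past records matching income and gender."""
    matches = [r["limit"] for r in historical
               if r["income"] == income and r["gender"] == gender]
    return sum(matches) / len(matches)

# Same income, different gender -> the model reproduces the old gap.
print(predicted_limit(80, "M"))  # prints 21000.0
print(predicted_limit(80, "F"))  # prints 10500.0
```

Real credit models are far more complex, but the principle is the same: a model trained to imitate past decisions will faithfully imitate past discrimination.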

Another example of AI's troublesome data is in the medical field. As Dr. Kate Young notes, the health system was originally researched and designed around men's bodies. This underrepresentation of women thus impacts AI's decisions when it comes to medical diagnosis. People usually depend on AI models because of their robust analysis and outstanding accuracy – they might even make us question our own conclusions when ours disagree with theirs – but we should always remember that these decisions are extracted from data and are not to be followed blindly without inspecting that data for bias.

Just like the unfairness described in the financial and medical fields, recruitment decision-making is also affected by biased AI. If models learn our societal job stereotypes, they might reject competent candidates based on their gender. For example, women are underrepresented in the tech industry; if an AI model is used to filter resumes for a software engineering position, the model might link gender to job acceptance. Seeing a pattern of accepted men, it may conclude that women are underqualified and reject their applications. These stereotypes of what a woman and a man can be become rooted in algorithms. We tested DistilGPT-2, a popular language model that uses AI to predict the next portion of an input text, and got the following gendered results:

As observed, the AI model predicts the person to be 'female' when the role is a nurse or a schoolteacher and 'male' when the role is a president or a CEO. This is clearly a biased prediction, driven by data that unintentionally taught the model to link gender to job role.
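How does a pattern in text become a gendered prediction? The sketch below mimics this with simple pronoun counts per role over a tiny, deliberately skewed corpus (all data here is invented; DistilGPT-2 is of course vastly more complex, but it too ultimately reflects the frequencies and associations in its training text):

```python
# An invented, deliberately biased corpus of (role, pronoun) pairs.
corpus = [
    ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
    ("ceo", "he"), ("ceo", "he"), ("ceo", "she"),
]

def predict_pronoun(role):
    """Return the pronoun most often seen with this role in the corpus."""
    counts = {}
    for r, pronoun in corpus:
        if r == role:
            counts[pronoun] = counts.get(pronoun, 0) + 1
    return max(counts, key=counts.get)

print(predict_pronoun("nurse"))  # prints "she" — the majority pattern
print(predict_pronoun("ceo"))    # prints "he"
```

The "model" is not malicious; it simply echoes the majority pattern of its data, which is precisely why skewed data yields skewed predictions.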

Another reason for AI's gender bias is the labels placed on datasets. Labels serve as the ground truth, or reference, against which an AI model trains, and most of the time, humans do the labeling. But humans can be subconsciously biased, and they often make assumptions while labeling without considering gender fairness. These labels reflect how society sees men and women: one study revealed how images of US Congress members were annotated – women's pictures received labels like “girl” or even “teen”, while men's pictures received “official”, “senior executive”, or “attorney” – even though the people in the images were dressed alike and photographed in the same setting.

AI is getting more powerful by the minute, and it is our job to make its gender-related decisions objective. Fortunately, some stereotypes are nowadays being avoided in AI applications, such as voice assistants no longer defaulting to female voices (psychologically associated with submissiveness and following orders). Returning to the Google Translate example, if we translate “She is president” alone to Persian and back to English, we get “She is president / He is president”, an appreciated attempt at gender-neutral translation. Our introductory example, however, exposes the biased data AI originally learned from, which necessitates continuously updating data, staying aware of unjust trends, and objectively evaluating AI's decisions.

Moreover, all genders should be represented in a dataset, and all should take part in the labeling process. Consider an AI algorithm selecting individuals for a modeling agency out of thousands of applicants. Not only should all genders be represented in the algorithm's training data, but also all races, body types, facial features, etc. This inclusion is necessary to represent our diverse communities and to be more relatable to the customer. Likewise, the people labeling the models' shots should themselves reflect this variety: as mentioned before, humans can subconsciously make biased decisions, so including diverse label makers helps neutralize AI's sensitive data.

Finally, the greatest responsibility lies with the AI engineers who work closely with the data and algorithms. Without diving into technicalities, data scientists and engineers can study the data's features, detect bias, augment and neutralize the data whenever possible, and even use loss functions and parameters that penalize the AI model for making certain unwanted decisions. AI engineers also ought to absorb a culture of gender equality in AI, starting from the courses they take, such as courses on the psychology and ethics of AI, up to practicing neutral stances with real-life AI models.
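One of the simplest checks an engineer can run is sketched below: the demographic parity gap, the difference in positive-decision rates between groups. This is a minimal diagnostic, not a full fairness toolkit, and the decisions listed are invented for illustration; a large gap does not prove discrimination on its own, but it flags the model for closer inspection.

```python
def positive_rate(decisions, group):
    """Fraction of positive ('accepted') decisions within one group."""
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["accepted"] for d in rows) / len(rows)

# Invented model decisions for two groups of applicants.
decisions = [
    {"group": "men",   "accepted": 1},
    {"group": "men",   "accepted": 1},
    {"group": "men",   "accepted": 0},
    {"group": "women", "accepted": 1},
    {"group": "women", "accepted": 0},
    {"group": "women", "accepted": 0},
]

gap = positive_rate(decisions, "men") - positive_rate(decisions, "women")
print(round(gap, 2))  # prints 0.33 — a gap worth investigating
```

Metrics like this one can also be folded into training as a penalty term, so that the model pays a cost for widening the gap between groups.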

Wrapping up, Artificial Intelligence demonstrates bias when trained on biased data, whether from historical datasets or unjust data labeling. AI has brought gender bias into plain sight, an inequality long rooted in our society. It is crucial for us to act now, since AI's data is often used, reused, and propagated. Artificial Intelligence's gigantic computational power can magnify and replicate a biased trend, so to prevent dangerous consequences, human intelligence must intervene!
