Algorithmic Biases in AI Face Recognition

In this blog, I discuss the idea of algorithmic bias through the example of AI face-recognition systems. Many digital tools today use AI to scan or generate human faces: smartphone unlocking, airport security gates, and beauty filters on social media all rely on it. These technologies often look objective, but researchers like Safiya Noble argue that algorithms are shaped by the data and values behind them, which can create hidden forms of discrimination. When I think about my own experience with AI-powered filters and facial tools, I can clearly see how bias appears in everyday digital interactions.

One of the most common examples of algorithmic bias is the way AI struggles to recognise darker skin tones. Buolamwini and Gebru (2018) audited commercial facial-analysis systems and found that their gender classifiers were far less accurate for darker-skinned faces, especially women: error rates reached 34.7% for darker-skinned women while staying below 1% for lighter-skinned men. This happens because many training datasets contain far more images of lighter-skinned faces, so the system “learns” those features better. Even though AI seems neutral, it actually reflects the social inequalities embedded in its data.
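
To make this concrete, here is a minimal sketch of a disaggregated accuracy audit in Python, in the spirit of the Gender Shades methodology. The predictions and subgroup labels below are entirely hypothetical, invented for illustration; they are not data from the study.

```python
# Minimal sketch of a disaggregated accuracy audit.
# All predictions below are made up, for illustration only.
from collections import defaultdict

# (true_label, predicted_label, subgroup) triples -- hypothetical results
predictions = [
    ("female", "female", "darker_female"),
    ("female", "male",   "darker_female"),
    ("male",   "male",   "darker_male"),
    ("female", "female", "lighter_female"),
    ("male",   "male",   "lighter_male"),
    # ... a real audit would use hundreds of balanced samples per subgroup
]

correct = defaultdict(int)
total = defaultdict(int)
for true_label, predicted, group in predictions:
    total[group] += 1
    correct[group] += (true_label == predicted)

# A single overall accuracy score can hide large per-group gaps.
overall = sum(correct.values()) / sum(total.values())
print(f"overall accuracy: {overall:.2%}")
for group in sorted(total):
    print(f"{group}: {correct[group] / total[group]:.2%}")
```

The design point is simply that accuracy must be reported per subgroup; averaging over everyone is exactly what lets the disparity stay invisible.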

[Figure: example of an AI beauty filter that automatically lightens skin and smooths facial features.]

I have also noticed algorithmic bias in AI beauty filters on platforms like TikTok and Instagram. Many filters automatically make the skin lighter, the face slimmer and the eyes bigger. The filter does not ask whether I want to look this way; it simply assumes these features are “better”. This makes me feel that the algorithm is promoting a narrow beauty standard. Even if the filter seems playful, it encodes cultural values about gender, beauty and desirability. This experience helps me understand how Noble’s idea of “algorithmic oppression” appears in everyday media use.
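
As a toy illustration of how such a filter can encode a value judgement, the sketch below hard-codes “lighter and smoother” as the default look. It assumes the Pillow imaging library; the file names and parameter values are hypothetical, not taken from any real app.

```python
# Toy "beauty filter": the point is that the defaults themselves are
# editorial choices the user never made. Values here are hypothetical.
from PIL import Image, ImageEnhance, ImageFilter

def naive_beautify(path: str) -> Image.Image:
    img = Image.open(path)
    # Default 1: brighten the whole image -- effectively lightening skin.
    img = ImageEnhance.Brightness(img).enhance(1.2)
    # Default 2: blur fine texture -- "smoothing" facial features.
    return img.filter(ImageFilter.GaussianBlur(radius=1.5))

# The user taps one button; the "better" look has been decided for them.
naive_beautify("selfie.jpg").save("selfie_filtered.jpg")
```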

Another area where I noticed bias is in automatic photo tagging. When I upload group photos, the AI sometimes misidentifies people with similar skin tones or facial features, while it recognises lighter faces quickly and accurately. This makes me think about who is considered “normal” by the system. It also shows how algorithmic design can affect people’s identity and representation. If the AI keeps misidentifying certain users, it means the system works better for some faces than for others.
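
One plausible mechanism behind this, sketched below with made-up numbers, is that tagging systems compare face embeddings against a single global similarity threshold. If the model’s embedding space is more crowded for under-represented groups, distinct people from those groups exceed the threshold more often and get confused with each other. This is an illustrative hypothesis, not the documented design of any particular platform.

```python
# Sketch of why one global matching threshold can fail unevenly.
# Embeddings and group geometry are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend the model learned a well-spread region for group A but a
# "crowded" region for group B (under-represented in training data).
group_a = [rng.normal(0, 1.0, size=128) for _ in range(50)]
group_b = [rng.normal(0, 0.2, size=128) + 1.0 for _ in range(50)]

THRESHOLD = 0.8  # one threshold applied to everyone

def false_match_rate(embeddings):
    """Fraction of distinct-person pairs the system would merge."""
    pairs = [(i, j) for i in range(len(embeddings))
             for j in range(i + 1, len(embeddings))]
    hits = sum(cosine(embeddings[i], embeddings[j]) > THRESHOLD
               for i, j in pairs)
    return hits / len(pairs)

print(f"group A false match rate: {false_match_rate(group_a):.2%}")
print(f"group B false match rate: {false_match_rate(group_b):.2%}")
```

With this toy geometry, group A pairs almost never cross the threshold while group B pairs almost always do, even though the same rule is applied to everyone.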

Overall, algorithmic bias reminds us that digital technologies are not neutral. They reflect the choices of designers, the limitations of data and the social context in which they are created. My experiences with AI filters and face-recognition technology show that these systems can reinforce narrow standards of beauty and accuracy. Understanding this helps me think more critically about the digital tools I use every day and encourages me to question the fairness of the technologies that shape how we see ourselves and others.

References

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81, 77–91.

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.

2 thoughts on “Algorithmic Biases in AI Face Recognition”

  1. The blog insightfully connects algorithmic bias to everyday digital experiences, illustrating how facial recognition and beauty filters perpetuate systemic inequalities. However, while the discussion references key thinkers like Safiya Noble and Ruha Benjamin, it largely remains descriptive rather than interrogative. The argument could benefit from deeper engagement with the structural forces behind biased datasets—such as corporate priorities and global tech monopolies—rather than focusing primarily on user-level observations. Additionally, framing bias as a technical flaw risks obscuring its ideological roots in racialized and gendered norms. A more critical lens would question whether “fixing” algorithms addresses the underlying social hierarchies they reproduce.

  2. I agree with your point of view. The AI recognition system seems to see the world through the eyes of a white man by default: it is somewhat face-blind to people with darker skin tones, people of other ethnicities, and women. It is dangerous that these data biases are not widely known, because in many countries where most people are not light-skinned, this technology is already used in high-stakes settings such as school enrollment and bank withdrawals. People’s blind trust in biased algorithms is genuinely alarming.
