In this blog, I discuss the idea of algorithmic bias through the example of AI face-recognition systems. Many digital tools today use AI to scan or generate human faces, from smartphone unlocking and airport security gates to beauty filters on social media. These technologies often appear objective, but researchers like Safiya Noble argue that algorithms are shaped by the data and values behind them, which can create hidden forms of discrimination. When I think about my own experience with AI-powered filters and facial tools, I can clearly see how bias appears in everyday digital interactions.
One of the most common examples of algorithmic bias is the way AI struggles to recognise darker skin tones. Several studies show that face-recognition systems have a much higher accuracy rate for light-skinned faces than for darker-skinned ones. This happens because many training datasets contain far more images of light-skinned faces, so the system “learns” those features well while seeing too few examples of everyone else. Even though AI seems neutral, it actually reflects the social inequalities embedded in its data. Ruha Benjamin describes this as the “New Jim Code”, where technology repeats old forms of social bias in new and invisible ways, showing that technical systems carry real social consequences.
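To make that mechanism concrete, here is a minimal, purely synthetic sketch in Python (using NumPy and scikit-learn) of how an imbalanced training set can produce different accuracy for different groups. It is not a real face-recognition pipeline: the group sizes, the invented “features” and the make_group helper are illustrative assumptions only.

```python
# A minimal sketch (not a real face-recognition system) of how an
# imbalanced training set can lead to different accuracy per group.
# All data is synthetic; sizes and feature values are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic 'face features' and labels for one group (hypothetical)."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Assumed imbalance: far more training examples for group A than for group B.
X_a, y_a = make_group(n=5000, shift=0.0)
X_b, y_b = make_group(n=300, shift=1.5)

X = np.vstack([X_a, X_b])
y = np.concatenate([y_a, y_b])
group = np.array(["A"] * len(y_a) + ["B"] * len(y_b))

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0, stratify=group)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Accuracy per group: the under-represented group tends to fare worse,
# because the single model is fitted mostly to the majority group.
for g in ["A", "B"]:
    mask = g_te == g
    print(f"Group {g}: accuracy = {model.score(X_te[mask], y_te[mask]):.2f}")
```

On this toy data the model usually scores far higher on the over-represented group, which mirrors the accuracy gap the studies above describe.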

I have also noticed algorithmic bias in AI beauty filters on platforms like TikTok and Instagram. Many filters automatically make the skin lighter, the face slimmer and the eyes bigger. They do not ask whether I want to look this way; they simply assume these features are “better”. This makes me feel that the algorithm is promoting a narrow beauty standard. Even if a filter seems playful, it encodes cultural values about gender, beauty and desirability. This experience helps me understand how Noble’s idea of “algorithmic oppression” appears in everyday media use.
Another area where I have noticed bias is automatic photo tagging. When I upload group photos, the AI sometimes misidentifies people with similar skin tones or facial features, while it recognises lighter faces quickly and accurately. This makes me think about who is considered “normal” by the system, and it shows how algorithmic design can affect people’s identity and representation. If the AI keeps misidentifying certain users, the system simply works better for some faces than for others.
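One simple way to turn that impression into something measurable is to compare misidentification rates by group. The sketch below uses a handful of purely hypothetical tagging records; the identities and group labels are invented for illustration, not real platform data.

```python
# A minimal sketch of auditing a photo-tagging feature for per-group
# error rates. The records are hypothetical examples only.
from collections import defaultdict

# Each record: (true identity, identity suggested by the tagger, group label)
tag_results = [
    ("person_1", "person_1", "lighter-skinned"),
    ("person_2", "person_2", "lighter-skinned"),
    ("person_3", "person_3", "lighter-skinned"),
    ("person_4", "person_9", "darker-skinned"),  # misidentified
    ("person_5", "person_5", "darker-skinned"),
    ("person_6", "person_7", "darker-skinned"),  # misidentified
]

totals = defaultdict(int)
errors = defaultdict(int)
for true_id, predicted_id, group in tag_results:
    totals[group] += 1
    if predicted_id != true_id:
        errors[group] += 1

# A large gap between these rates is one concrete sign that the system
# works better for some faces than for others.
for group, total in totals.items():
    print(f"{group}: misidentification rate = {errors[group] / total:.0%}")
```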
Overall, algorithmic bias reminds us that digital technologies are not neutral. They reflect the choices of designers, the limitations of data and the social context in which they are created. My experiences with AI filters and face-recognition technology show that these systems can reinforce narrow beauty standards and uneven accuracy across different groups of users. Understanding this helps me think more critically about the digital tools I use every day and encourages me to question the fairness of the technologies that shape how we see ourselves and others.
Reference list
Benjamin, R. (2019) Race after technology: abolitionist tools for the new Jim Code. Cambridge: Polity Press. Available at: https://ebookcentral.proquest.com/lib/westminster/detail.action?docID=5820427.
Noble, S.U. (2018) Algorithms of oppression: how search engines reinforce racism. New York: New York University Press. Available at: https://doi.org/10.18574/9781479833641.

The blog insightfully connects algorithmic bias to everyday digital experiences, illustrating how facial recognition and beauty filters perpetuate systemic inequalities. However, while the discussion references key thinkers like Safiya Noble and Ruha Benjamin, it largely remains descriptive rather than interrogative. The argument could benefit from deeper engagement with the structural forces behind biased datasets—such as corporate priorities and global tech monopolies—rather than focusing primarily on user-level observations. Additionally, framing bias as a technical flaw risks obscuring its ideological roots in racialized and gendered norms. A more critical lens would question whether “fixing” algorithms addresses the underlying social hierarchies they reproduce.