Algorithmic Bias: When “Intelligence” Also Wears “Colored Glasses”

Have you ever wondered whether, when you submit a resume online and hear nothing back, it might not be because you aren’t good enough, but because the algorithm screening resumes quietly “discriminates” against your age or your name? Or why, when different people search for the same keywords, they see vastly different results? What lies behind both is the subject of this article: algorithmic bias. Like an invisible mirror, it reflects the prejudices of human society, quietly magnifies and entrenches them in the digital world, and becomes an unseen force that affects each of us.

Algorithms themselves are neutral strings of code, so why do they produce bias? There are two core reasons. First, the “inherent deficiencies” of training data. The samples an algorithm learns from are usually records of human social behavior, and those records carry the traces of historical inequality. If a recruitment algorithm is trained on resumes from industries historically dominated by men, it will quietly “downgrade” female resumes. Google once faced criticism because searches for names more common among Black people were more likely to display arrest-record ads; in essence, the racial bias hidden in the data had been learned by the algorithm. Second, “acquired bias” in algorithm design. The implicit preferences of developers and narrow optimization objectives (for example, pursuing accuracy alone while ignoring fairness) can also push an algorithm toward bias. In one medical algorithm, for instance, white patients were several times more likely than Black patients to be allocated ICU beds, because the training data underrepresented minority groups and distorted the model’s judgment of their health risks.
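To make the first point concrete, here is a deliberately simplified sketch in Python, using entirely synthetic data. The feature names, numbers, and the “proxy” variable are illustrative assumptions, not drawn from any real hiring system; the point is only that a model trained on historically skewed outcomes can penalize one group even when the protected attribute itself is never given to it.

```python
# A minimal, hypothetical sketch of how historical bias leaks into a model.
# All data is synthetic; "group" stands in for a protected attribute
# (e.g., gender) and is deliberately correlated with past hiring decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Equal true qualification across groups...
skill = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)          # 0 = historically favored, 1 = disfavored

# ...but historical hiring decisions favored group 0 regardless of skill.
hired = (skill + 1.0 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5

# A "proxy" feature correlated with group (think: name, zip code, alma mater).
proxy = group + rng.normal(0, 0.3, n)

X = np.column_stack([skill, proxy])    # note: group itself is NOT a feature
model = LogisticRegression().fit(X, hired)

# Two applicants with identical skill but different group proxies:
applicants = np.array([[1.0, 0.0],     # looks like group 0
                       [1.0, 1.0]])    # looks like group 1
print(model.predict_proba(applicants)[:, 1])
# The second applicant gets a visibly lower score despite identical skill.
```

The model never sees the protected attribute directly, yet it learns to use the correlated proxy, which is exactly why simply deleting the “sensitive column” is not enough to remove bias.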

The harm of algorithmic bias has long permeated everyday life, often remaining “invisible” while having far-reaching effects. In recruitment, Workday’s screening algorithm has drawn a nationwide class-action lawsuit alleging discrimination against job seekers over 40 and against minorities; the plaintiffs submitted over 100 applications and were rejected every time, allegedly because their age and educational background did not match the “historically successful” profiles. In advertising and social media, Facebook once allowed advertisers to exclude users of specific races through its targeting algorithms, and Google’s keyword planner suggested porn-related keywords for “Black girls” and “Latina girls” but not for “white girls”; results that appear to be generated automatically in fact reinforce racial and gender stereotypes. In policing and criminal justice, predictive policing algorithms, trained on historical data that over-monitored minority communities, keep directing police resources back to those same areas, creating a vicious cycle of “the more monitored, the more convicted; the more convicted, the more monitored”. And facial recognition systems misidentify people with darker skin tones at far higher rates than those with lighter skin tones, in some cases leading to the wrongful arrest of innocent people.

What is even more worrying is that algorithmic bias can create feedback loops that perpetuate social injustice. Recommendation algorithms keep pushing content similar to what you have already engaged with, so if bias causes you to miss one job opportunity, the system learns from that outcome and becomes even less likely to show you similar opportunities later. Low-income groups, who leave fewer digital footprints, may be misjudged by algorithms as poor credit risks and denied loans, making it even harder for them to escape poverty. In effect, algorithms affix “fixed labels” to different groups, the strong get stronger and the weak get weaker, and the risk of a “digital caste system” emerges.
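The feedback-loop mechanism can be illustrated with a tiny, stylized simulation. Everything here is assumed: two imaginary neighborhoods with identical true crime rates, patrols allocated in proportion to past records, and incidents only recorded where patrols actually go.

```python
# A toy simulation of "the more monitored, the more recorded" feedback.
# All numbers are invented; the two areas are identical by construction.
import numpy as np

true_crime_rate = np.array([0.10, 0.10])   # identical underlying rates
recorded = np.array([30.0, 10.0])          # historical records skewed toward area 0

for year in range(10):
    # Patrols are allocated in proportion to recorded incidents...
    patrol_share = recorded / recorded.sum()
    # ...and crime is only recorded where patrols are present.
    new_records = patrol_share * true_crime_rate * 1000
    recorded += new_records

print(recorded)
# The 3:1 disparity in records persists year after year,
# even though the two areas are identical in reality.
```

The initial skew in the data never corrects itself, because the system only collects new evidence where the old evidence sent it.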

We are not helpless in the face of algorithmic bias; tackling it requires joint effort from technology, policy, and every individual. Technically, developers can build “fairness constraints” into algorithm design, for example using differential-privacy techniques to protect sensitive attributes such as race and gender, and filling data gaps for disadvantaged groups with synthetic data to reduce sample imbalance. On the policy side, the EU’s AI Act requires “high-risk” AI systems, such as those used in recruitment and judicial processes, to undergo bias assessment and to disclose their decision-making logic, while China has introduced an algorithm filing system that applies classified supervision to algorithms used in areas like social media recommendation and financial risk control. On an individual level, we should maintain “algorithmic vigilance”: when search results look abnormal, recommendations become strangely uniform, or a screening decision seems unreasonable, we should recognize that algorithmic bias may be at work, give feedback, and defend our rights.
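What does a “bias assessment” look like in practice? One common starting point is simply to compare outcomes across groups. The sketch below is a minimal, assumed example of such an audit: the metric (a demographic-parity gap), the data, and the warning threshold are all illustrative choices, not requirements of any particular law or library.

```python
# A minimal bias-audit check: compare selection rates across two groups.
import numpy as np

def demographic_parity_gap(selected: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in selection rates between group 0 and group 1."""
    rate_0 = selected[group == 0].mean()
    rate_1 = selected[group == 1].mean()
    return abs(rate_0 - rate_1)

# Example decisions produced by some screening model (hypothetical data).
selected = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 0])
group    = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap = demographic_parity_gap(selected, group)
print(f"selection-rate gap: {gap:.2f}")
if gap > 0.2:   # the threshold here is purely illustrative
    print("warning: possible disparate impact, review before deployment")
```

A single number like this does not prove discrimination, but routinely computing it, and asking why it is large, is exactly the kind of habit that fairness constraints and regulatory assessments are meant to encourage.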

Algorithms are tools created by humans, and their “bias” is essentially the digital projection of human bias. What we pursue is not an “absolutely fair” algorithm, but an intelligent system that can “identify and correct bias”. After all, true technological progress should not be about allowing machines to replicate human flaws, but about making technology a force for promoting social fairness – when algorithms no longer wear “colored glasses” and every group is treated fairly, such “intelligence” is the intelligence we truly need.

In the future, as AI technology becomes more widespread, algorithms will permeate more fields. But please remember: the direction of technology is ultimately determined by humans. Let’s work together to promote “algorithmic goodness”, so that every line of code carries fairness and justice, and every person in the intelligent era can be treated with kindness.