Every time we scroll through our phones, we are participating in what Couldry and Mejias call “data colonialism.” It’s a striking term, but it makes sense when you think about it. Just as colonial powers once extracted physical resources like land and labour, today’s tech companies extract something equally valuable – our behaviour. Every click, like, and search gets turned into data that companies can analyse and profit from. We’ve become the resource.
What makes this even more complicated is algorithmic bias. We tend to think of algorithms as objective, just neutral bits of code doing their job. But here’s the thing: they’re built on data that already reflects our society’s inequalities. So without anyone intending it to happen, these systems end up reproducing the same discriminatory patterns we see offline. Algorithms decide what we see, how we’re categorised, and ultimately how we’re represented online.
I’ve noticed this in my own life, particularly on TikTok’s “For You Page.” The content that gets pushed to me tends to reflect pretty narrow standards of beauty and lifestyle. Other types of representation? They’re there, but I have to actively hunt them down. This aligns with Noble’s argument that algorithms reinforce dominant cultural values, amplifying certain identities while pushing others aside. The recommendation system creates a hierarchy of visibility, deciding which creators get the spotlight and which ones don’t. Through the data colonialism lens, it’s a modern way of sorting and ranking people based on the “digital breadcrumbs” we leave behind.
Apple Music does something similar. It’s supposed to be personalised, but I keep getting recommended mainstream music, even when I’m exploring other genres. The algorithm’s been trained on global listening data that’s dominated by major labels and popular markets, so smaller artists barely get a look-in. It’s algorithmic bias narrowing my cultural horizons and keeping existing hierarchies firmly in place.

Then there’s Instagram’s auto-tagging feature. I’ve noticed it’s much better at recognising lighter skin tones than darker ones. Turns out, many facial-recognition systems are trained on datasets with mostly white faces, which leads to higher error rates for everyone else. It’s a stark reminder that these supposedly cutting-edge technologies work better for people who already have social privilege. The same power structures and historical inequalities that exist offline are built right into our digital tools.
These aren’t just isolated glitches. They show how data colonialism works. It’s not only about extracting our data but about how algorithms then interpret and classify us. What looks like a neutral digital space actually gives certain users more visibility, better accuracy, and greater representation than others. It really highlights why we need to pay attention to how digital infrastructure creates new forms of inequality, even when we can’t see the mechanisms at work.
Looking at data colonialism and algorithmic bias has changed how I think about my everyday media use. These platforms aren’t just providing entertainment or making life more convenient. They’re actively shaping how identities get constructed, how knowledge spreads, and whose experiences get recognised. We all need to become more aware of these patterns and engage with digital life more critically and consciously.
References
Couldry, N. and Mejias, U. (2019) The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism. Stanford: Stanford University Press.
Noble, S. U. (2018) Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.

Thoughtful and compelling read. You translate “data colonialism” and algorithmic bias into lived experience—FYP, Apple Music, and Instagram—without losing the theoretical thread (Couldry/Mejias, Noble). I especially appreciate how you show visibility as an allocation of power, not just a UX quirk. Two gentle suggestions: consider adding a brief note on user agency (e.g., intentional “reverse curation,” privacy settings, diverse follows) and a pointer to accountability mechanisms (dataset audits, impact assessments, appeals). A sentence on how these systems differ across regions could also enrich the argument. Overall, clear, grounded, and persuasive—would love to read a follow-up with sources and practical examples.
Interesting way of putting it into a blog. This is actually real: everyone has a different algorithm based on how much time they spend watching or listening. Platforms collect data and push similar artists or content so that everyone stays “entertained.” This digital world is designed so that we’re constantly on our phones and never put them away. Our data is being collected on a daily basis, and like you mentioned, Apple Music or Instagram show things we’re interested in because the data shows how much time we’ve spent on them.