Comparison is the thief of joy, and labels are the thief of curiosity.
The moment we become familiar with a thing, we label it. That pink bird? That's a flamingo. This red fruit? Well, that's an apple. The whole process is justified by the promise that the information will be easier to recall the next time we experience the thing. It's convenient.
Convenience comes at a cost, always. You always give up something for it - fast food, fast fashion, and now, fast labels. When we have a new experience, we try to correlate it with a similar past experience, and done - that correlation forms the basis for the new label.
When you label something, you stop being curious - it signals the brain to stop thinking about it and find the next thing to label.
Like AI, we look at people and instantly bucket them into labels. If someone is above a certain body weight, that person is labeled fat. If the melanin content is high, the person is labeled dark. It's exactly how AI does it: it measures how far a data point deviates from the average and labels it accordingly. See, humans built AI, not the other way around - at least not as I write this.
The question is how this “average” is determined.
The answer is data. We collect this data over the years, adding to it from our experiences. However, this data is impure because of one element: bias. My data will differ from yours because my bias is different from yours; my experiences are different from yours. A man who is 5 feet 6 inches tall will be considered tall in Indonesia (average male height: 5 feet 4 inches) and short in the Netherlands (average male height: 6 feet). The person is the same; only what you're comparing him to - the mean - is different.
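To make that concrete, here is a minimal sketch in Python of what "comparing to the average" can look like. Everything in it is hypothetical - the function name, the one-standard-deviation threshold, the reference heights - but it shows how the same person earns a different label the moment the mean changes:

```python
from statistics import mean, stdev

def label_height(height_cm, reference_heights, threshold=1.0):
    """Label a height by how far it deviates from a reference
    population's average (a toy stand-in for what a classifier does)."""
    mu = mean(reference_heights)
    sigma = stdev(reference_heights)
    z = (height_cm - mu) / sigma  # deviation from "the mean"
    if z > threshold:
        return "tall"
    if z < -threshold:
        return "short"
    return "average"

# Illustrative reference data, roughly matching the averages above.
indonesia = [160, 162, 163, 164, 165, 166]    # mean ~163 cm (~5'4")
netherlands = [178, 180, 182, 183, 184, 186]  # mean ~182 cm (~6'0")

person = 168  # ~5 feet 6 inches
print(label_height(person, indonesia))    # -> "tall"
print(label_height(person, netherlands))  # -> "short"
```

Same input, different data set, opposite label. The bias lives in the reference, not the person.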
Now what went wrong with Google’s Gemini?
Gemini, Google's image generation model, recently received a lot of flak. When prompted to generate a portrait of "America's founding fathers," the model responded with images that included no white Americans. When asked for a picture of a pope, it generated an Indian woman in papal attire and a Black man.
In a blog post, Google admitted its mistake, acknowledging that some of the generated images were inaccurate and offensive. It had tuned the model to avoid underrepresenting certain ethnicities - to remove any bias - but failed to account for the cases where the model isn't supposed to do so, like historically accurate depictions.
Google has billions of dollars in resources but still couldn't get the bias part right, because bias itself is made of a billion parts. When you create a label, that label is personal to you - to your experiences. Without the context of society, it's wrong. You can fine-tune AI models - a fancy term for customizing a model for your use case - and that's where Google went wrong. They tried to generalize a personalized thing.
Google's training data - the data used to calculate the mean - was labeled "woke." As a result, all the images generated using the model were labeled woke as well.
Limitation or liberation?
For example, an AI labeled the object in the image above as a bicycle. If I have to describe the object to someone, it's easier to say "bicycle" than "something with two wheels joined by various pieces of metal, used for transportation." This is liberation. The label liberates me from using a hundred words by using just one.
Now, say I label someone as short-tempered. Is this liberation or limitation? That's subjective. When I label someone, I go back to my data set of experiences, add my bias, and shape your opinion of them by default. I am limiting your opinion.
Gotta Catch Them All
We go through life wanting to collect labels like Pokémon cards, finding solace in them, making it convenient for others to remember us and associate with us. They define us. But in effect, labels limit us.
Society tends to hold you accountable for your label. If you’re an engineer, be an engineer. If you’re doing a job, continue; don’t quit. Don’t vary from the mean. Be vanilla. Your label is your identity, and your identity is your label.
It raises the question - what would a world be without labels?
What would a world be without bias, without conditioning - where we take everything as is, not comparing it with our past experiences, not labeling it for future reference, where the present is all that matters? Where you are judged for who you are - not for what your forefathers did, not for the labeled degree you have, but for what you contribute? Would that world be considered fair? Or would it be considered ultra-woke?
Is that liberating or limiting? Ask yourself, without any bias.