Check your privilege, ChatGPT… or check mine?
Is AI racist? Homophobic? Sexist? Discriminatory? Insensitive? These aren’t just provocative questions — they’re real concerns we face in our daily work with AI. At Hatch, we work closely with AI development, and bias is something we watch for constantly. Whether it’s building features to support body positivity on a major visual platform, or navigating cultural nuances in an AI-powered English learning tool, we’ve seen how identity, privilege, and perception are deeply intertwined with machine behavior.
As AI becomes increasingly embedded in our lives, it both reinforces and challenges the dynamics of privilege and identity. One major concern is privilege escalation, where AI systems unintentionally deepen gaps in access, representation, or influence. The same applies to lived experiences. While developing an AI tool for veterans’ mental health training, we found that users needed a more human-like, emotionally attuned interaction. Some veterans, dealing with trauma, felt alienated by the tool’s overly cheerful tone. Empathy, not positivity, was key to trust.
At Hatch, we believe that AI can do better. AI has the potential to be more inclusive, more empathetic, and more planet-conscious. But only if we build it that way. We’re not just advocating for responsible use—we’re actively working to leverage AI for good.
Several AIs, several discriminations
Artificial Intelligence (AI) comes in many forms, each with distinct functions and implications. In this article, we will focus on five main types of AI use and their potential biases: generative AIs, selective AIs, controller AIs, comparative AIs, and AI-based opinion and news generators. Disclaimer: this isn’t a formal classification. It’s a practical framework we use to explore how AI systems, depending on what they’re built to do, can reproduce, reinforce, or reshape inequalities. These categories are based on real-world use cases, not technical definitions. Our goal is to keep the discussion focused, concrete, and human-centered.
Generative echoes: when AI learns our worst habits
Generative AIs create content by learning from human data—biases included. They often underrepresent gender and racial minority perspectives, or reinforce outdated assumptions about competence and history. For example, a study of three popular AI image generators found that when prompted with “surgeon,” over 98% of the results depicted white men. In another case, a reporter at MIT Technology Review discovered that Lensa, an AI avatar app, hypersexualized her image. Broader research confirmed that the tool often lightens skin and exaggerates femininity—more than aesthetic errors, these are algorithmic reflections of deep-rooted bias.
Designing inequality at scale: when AI chooses who gets in
Selective AIs assist in decision-making processes like hiring or credit distribution—but their reliance on historical data often leads to discrimination. In 2018, Amazon shut down a recruitment AI that penalized resumes with the word “women’s.” In 2019, Apple’s credit algorithm gave significantly lower limits to women than men with identical financial profiles. Facial recognition systems show similar bias. Research by Joy Buolamwini found error rates of over 34% for darker-skinned women, compared to less than 1% for lighter-skinned men. Predictive policing tools trained on biased crime data have also led to the over-policing of minority neighborhoods, reinforcing systemic inequality.
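Gaps like these usually only become visible when performance is broken down by subgroup rather than reported as a single average. The following is a minimal sketch of such a disaggregated audit; the subgroup labels and data are hypothetical, not taken from any of the studies above.

```python
# Minimal sketch of a disaggregated error audit (hypothetical labels and data).
# Reporting error rates per subgroup, rather than one aggregate score, is how
# disparities like the facial-recognition gaps above tend to be surfaced.
import pandas as pd

# Hypothetical evaluation results: one row per prediction.
results = pd.DataFrame({
    "subgroup":  ["darker_female", "darker_female", "lighter_male", "lighter_male"],
    "label":     [1, 0, 1, 0],   # ground truth
    "predicted": [0, 0, 1, 0],   # model output
})

results["error"] = (results["label"] != results["predicted"]).astype(int)

# Error rate per subgroup, with sample sizes so small groups are not over-read.
audit = results.groupby("subgroup")["error"].agg(error_rate="mean", n="count")
print(audit)
```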
The control trap: when AI becomes a gatekeeper
Controller AIs flag anomalies and enforce rules—but when their rules are biased, they deepen inequity. In France, an algorithm designed to detect welfare fraud disproportionately targeted single mothers and disabled individuals. In education, AI plagiarism detection tools have been found to unfairly flag non-native English speakers. A Stanford study showed false positives for these students at a rate over 60%, compared to just 5% for native speakers.
Biased by comparison: when AI trains on wrong or incomplete data
Comparative AIs find patterns across large datasets, but their effectiveness depends on data quality. MIT researchers found that medical AIs often performed worse on images of women and people of color. Another global study showed consistent underdiagnosis of Black women across X-ray datasets. At Hatch, we encountered this too. While working on forest preservation in Brazil, an AI model trained only on U.S. data failed to recognize key features of the Amazon. When AI is trained out of context, it can misfire—and cause harm.
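One lightweight safeguard against this kind of context mismatch is to compare how key features are distributed in the training data versus the data the model actually sees in deployment, before trusting its outputs. The sketch below is purely illustrative; the feature name, values, and threshold are assumptions, not part of any Hatch pipeline.

```python
# Illustrative drift check: flag features whose deployment distribution differs
# sharply from the training distribution (e.g., a model trained on U.S. forests
# applied to the Amazon). Feature names and the threshold are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

def drift_report(train: dict, deploy: dict, alpha: float = 0.01) -> dict:
    """Per feature, report whether a two-sample KS test suggests a shift."""
    report = {}
    for feature, train_values in train.items():
        _, p_value = ks_2samp(train_values, deploy[feature])
        report[feature] = p_value < alpha  # True means "likely shifted, investigate"
    return report

# Hypothetical feature samples from two very different forests.
train_data  = {"canopy_density": np.random.normal(0.4, 0.1, 1000)}
deploy_data = {"canopy_density": np.random.normal(0.8, 0.1, 1000)}
print(drift_report(train_data, deploy_data))  # {'canopy_density': True}
```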
Misled synthetic content: when AI gets it wrong
AI-generated content is rapidly transforming how information is created, shared, and perceived—often with unintended consequences for business. In marketing, branding, and corporate communication, generative AI can amplify bias, distort facts, and create reputational risks. Systems trained on unbalanced or unrepresentative data may unintentionally reinforce cultural, regional, or ideological assumptions. For example, when trained primarily on Western-centric content, AI can produce promotional material, messaging, or internal reports that alienate global audiences or misrepresent key markets—a concern raised by AI ethics expert Margaret Mitchell in Wired.
Beyond tone and framing, factual reliability is a major concern. Generative AI has been shown to fabricate data, misidentify individuals or products, and produce authoritative-sounding—but inaccurate—statements. In a business context, this can translate into inaccurate product descriptions, misleading investor briefs, or flawed automated customer communications. A single AI-generated error—such as falsely announcing a leadership change or misattributing a quote—could trigger investor anxiety, damage consumer trust, and lower internal morale. The business impact is tangible:
- Erosion of consumer trust, as seen in the case of Thy, the AI-generated Australian radio host who went undetected for months.
- Negative brand perception, like the backlash Coca-Cola faced after releasing its AI-generated Christmas ad.
- Potential legal exposure, from chatbot miscommunication or false claims.
- Reduced stakeholder confidence if transparency is compromised.
Advocating AI for good
Despite these risks, AI can be a force for equity—if designed intentionally. Unilever improved diversity across 50+ countries by excluding demographic data from its AI hiring tool. Google’s Monk Skin Tone Scale made AI better at recognizing diverse skin tones, improving both beauty and medical applications. ZestFinance reduced racial bias in credit scoring by including real-life indicators like rent payments. Microsoft’s Seeing AI helps blind users by reading text, recognizing faces, and describing surroundings—so long as it’s trained on inclusive data.
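As a rough illustration of the first of these examples, one common starting point is to keep protected attributes out of a screening model’s feature set. The column names below are hypothetical, and dropping columns alone does not remove bias carried by correlated features, which is why the kind of disaggregated audits described earlier still matter.

```python
# Rough illustration: train a screening model without protected attributes.
# Column names are hypothetical. Dropping these columns does not, on its own,
# remove bias carried by correlated features, so subgroup audits remain essential.
import pandas as pd
from sklearn.linear_model import LogisticRegression

PROTECTED = ["gender", "ethnicity", "age"]

def fit_screening_model(candidates: pd.DataFrame) -> LogisticRegression:
    features = candidates.drop(columns=PROTECTED + ["hired"])  # numeric features only
    target = candidates["hired"]
    model = LogisticRegression(max_iter=1000)
    model.fit(features, target)
    return model
```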
Conclusion: AI reflects its creators
AI is not inherently fair or unfair—it reflects the data it’s trained on, and the intentions of those who build and deploy it. Because data often carries the weight of historical and systemic bias, AI can easily reproduce, and even amplify, existing inequalities. Discrimination isn’t always explicit. The most difficult to detect, and therefore the hardest to dismantle, is the kind that’s systemic: deeply embedded in structures, habits, and assumptions.
AI will not fix this for us. And neither will humans, unless we’re willing to do the ongoing work of questioning our own bias and centering the voices of those most affected. It’s not enough to perform inclusivity on the surface. Token gestures do little if diverse perspectives are not given power, influence, and leadership in shaping how AI is researched, designed, and deployed.
At Hatch, we regularly draw on publicly available frameworks to help navigate the complex terrain of responsible AI development and bias mitigation. Two that we find particularly valuable are the Microsoft Responsible AI Standard and PAIR (People + AI Research) from Google. The latter offers not just principles, but practical toolkits and processes designed to help teams identify and reduce risks like discrimination, exclusion, and misdirection—early and often.
When designing or working with AI—or any system that impacts people—we believe it’s essential to start with the same kinds of critical, self-aware questions that guide ethical research and inclusive design:
- What identities and privileges do I hold? How do these shape my assumptions?
- Whose perspectives am I centering, and whose am I missing?
- Who is represented in the data, and who is left out?
- Is the data complete and contextual, or is it skewed in ways that reflect existing inequalities?
- What assumptions are built into this system? Do they reflect a narrow worldview?
- Have historically marginalized groups been meaningfully involved in the process?
- Who benefits from this system, and who could be harmed?
- Am I using inclusive and accurate language?
- Have I sought feedback from people with different lived experiences?
- Am I open to being wrong, and committed to keep learning and adjusting?