Global Responsible AI: Inclusiveness and Accountability

This is part one of a series of articles in which we take an in-depth look at the Responsible AI Framework and share our insights from 15 international responsible AI projects over the last three years. Originally developed by Microsoft in 2017, the responsible AI framework was adapted by Hatch Studios to help companies develop responsible AI systems that work across countries, cultures, and languages.

March 19, 2024
7-8 minute read

Key Takeaways:
  • Deploying AI systems globally introduces new, unique challenges that must be addressed.
  • Building globally inclusive AI requires empathetic design, a “human-in-the-loop” approach, and a robust data strategy.
  • Building accountable AI requires human governance that transcends personal beliefs and biases.
What is Global Responsible AI?

Seeing the emergence of AI systems in commercial and public solutions, Microsoft established an advisory committee called Aether in 2017. This committee conducted research into AI technologies, processes, and best practices for developing ethical and responsible AI systems. Hatch Studios built upon this framework, expanding it to account for the cultural nuances that AI systems inevitably face when launched globally. In other words, global responsible AI is an approach to developing AI systems responsibly and in ways that warrant people’s trust worldwide. In each article in this series, we look at the complex pillars upholding this framework through a global lens. And we’re kicking things off with a long-standing pillar of any good technology: ethics.

Making AI Inclusive — Globally 

Building inclusivity into AI systems means building for people of all abilities: taking into account potential barriers, unique experiences, and cultural nuances so that our AI does not unintentionally exclude anyone from using the product or service to its fullest. But on a global stage, AI systems have to serve exponentially more people who are exponentially more diverse. This presents unique challenges to maintaining inclusiveness in global AI systems. It’s difficult, yes, but not impossible. Hatch identified two big hurdles that companies face and must overcome to develop responsible AI globally.

Inclusion Concepts Differ Worldwide
The way inclusion is understood and implemented varies across countries and cultures, shaped by the different levels and forms of privilege present in those societies. This diversity means that a one-size-fits-all approach is difficult to implement. Instead, companies must navigate cultural norms, languages, and values to create truly inclusive systems. Failure to address these nuances can lead to AI solutions that marginalize or exclude certain groups, undermining both the technology and the company’s values. We saw this firsthand while investigating the intricacies of inclusion and diversity for a multinational tech company, studying differences in perception across Latin America, Europe, and Northeast Asia. The study highlighted how different socio-cultural backgrounds give rise to different meanings of inclusion and create unique "privilege scales" shaped by racial, ethnic, physical, and social norms.

Local Biases, Global Consequences
It’s important to train AI systems on local datasets to improve their ability to serve diverse user bases. But in some cases, local differences can manifest as biases, even when the model isn’t overfitted. When deployed globally, these biases are often amplified and exaggerated, distorting the AI’s decision-making and eroding trust and credibility.

We saw how biased content can cause discomfort while working on an AI-powered tool designed to help users in India and Indonesia learn English. Built from a Western perspective, the tool referenced pork and beef in its exercise content, which clashed with Hindu and Islamic dietary practices and belief systems. It’s a clear example of why cultural sensitivity matters when developing educational solutions for diverse populations.

There are no simple solutions or shortcuts to building inclusivity into our AI systems. However, companies can and should adopt empathetic design practices, combining them with a robust global data strategy and a “human-in-the-loop” approach. All of this goes a long way toward ensuring universal accessibility and usability, provided it’s done at the start of the project and not halfway through as an afterthought.
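To make the “human-in-the-loop” idea concrete, here is a minimal Python sketch of a locale-aware content screen that routes flagged exercise content to a human reviewer instead of publishing it automatically. The locales, sensitivity lists, and function names are hypothetical illustrations; a real system would source these lists from local cultural experts rather than hard-code them.

```python
from dataclasses import dataclass, field

# Hypothetical locale-specific sensitivity lists. In practice these would be
# curated with local cultural experts, not hard-coded by the development team.
SENSITIVE_TERMS = {
    "en-IN": {"beef"},   # conflicts with Hindu dietary practice
    "id-ID": {"pork"},   # conflicts with Islamic dietary practice
}

@dataclass
class ReviewQueue:
    """Collects flagged content for a human reviewer instead of auto-publishing."""
    pending: list = field(default_factory=list)

    def submit(self, locale: str, text: str, hits: set) -> None:
        self.pending.append({"locale": locale, "text": text, "hits": sorted(hits)})

def screen_content(text: str, locale: str, queue: ReviewQueue) -> bool:
    """Return True if the content is safe to publish for this locale;
    otherwise flag it for human review (the human-in-the-loop step)."""
    terms = SENSITIVE_TERMS.get(locale, set())
    hits = {term for term in terms if term in text.lower()}
    if hits:
        queue.submit(locale, text, hits)
        return False
    return True

queue = ReviewQueue()
ok = screen_content("Rewrite this sentence: 'I ate a beef sandwich.'", "en-IN", queue)
print(ok, queue.pending)  # False, with one item awaiting human review
```

The point isn’t the keyword matching, which is deliberately naive; it’s the routing. Content that trips a locale-specific check is never published silently, and a human familiar with the culture makes the final call.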

Making AI Accountable — Globally

Accountability is an essential part of building ethical AI. As much trust as we have in our solutions, AI systems cannot be left to self-govern; some degree of human governance is required. But, much like inclusivity, holding people accountable becomes significantly more challenging when deploying at a global level. Still, governance must prevail over personal beliefs. Without clear governance structures, oversight mechanisms, and accountability practices that transcend individual beliefs, AI decision-making goes unchecked, potentially leading to a snowball effect of bad decisions. Once again, there is no simple solution. Instead, companies need to prioritize local cultural and ethical standards over simplistic (or potentially biased) assumptions to maintain trust globally. These standards must be a core part of the overall accountability strategy and policies, and be enforced without compromise.
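As a sketch of what enforceable human governance might look like in code, the hypothetical Python example below records every AI decision in an append-only audit trail, together with the policy version it was checked against, the region it applies to, and the named human reviewer accountable for it. The record fields and file format are illustrative assumptions, not a prescribed standard.

```python
import json
import time
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class DecisionRecord:
    """One AI decision, tied to a governing policy and an accountable human."""
    decision_id: str
    model_output: str
    policy_version: str              # locale-specific policy the decision was checked against
    region: str
    reviewer: Optional[str] = None   # stays None until a named human signs off
    timestamp: float = 0.0

class AuditLog:
    """Append-only decision trail; nothing is overwritten, so reviews stay traceable."""
    def __init__(self, path: str):
        self.path = path

    def record(self, rec: DecisionRecord) -> None:
        rec.timestamp = time.time()
        with open(self.path, "a") as f:
            f.write(json.dumps(asdict(rec)) + "\n")

log = AuditLog("decisions.jsonl")
log.record(DecisionRecord(
    decision_id="d-001",
    model_output="loan application approved",
    policy_version="policy-eu-2024.1",
    region="EU",
    reviewer="a.khan",
))
```

The design choice worth noting is that accountability lives in the data model: a decision without a reviewer is visibly incomplete, which makes unchecked decision-making detectable rather than invisible.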

A Challenge Worth Undertaking…

There are many challenges to building responsible and equitable AI, but perhaps the most important to understand and accept is that building an AI system isn’t solely about the technology; it’s also about the value it provides to every user. A lot of the growth in AI is undoubtedly driven by hype, but over time the novelty will fade. At that point, even the most cutting-edge artificial intelligence won’t be viable if it fails to be accepted by people. This is especially true for AI systems that need to be deployed across cultures, where the impact of the same technology can vary wildly based on cultural nuances.

This is to say that the challenges of designing ethical AI systems that are fair, inclusive, and accountable are made more difficult when developers are tasked with catering to a diverse user base. But not impossible. With a rich understanding of local insights, laws, and customs, that journey becomes significantly easier. As we continue this series on the global responsible AI framework, we’ll explore more principles and action items that companies can adopt to translate cultural nuances into strategies for holistic AI development.
