Global Responsible AI: Fairness, Transparency, Privacy & Security

This is part two of a series of articles in which we take an in-depth look at the Responsible AI Framework and share our insights from 15 international projects involving responsible AI. Originally developed by Microsoft in 2017, the framework has been adapted by Hatch Studios over the past 3 years to help companies develop responsible AI systems that work across countries, cultures, and languages.

March 19, 2024
6-7 minute read

AI
Key Takeaways:
  • Fairness is a critical component of AI decision-making, but what constitutes fairness can differ across cultures and demographics.
  • All users want transparency in how AI systems make decisions, but not everyone wants the same amount of information.
  • Cultural values shape how people view privacy, but everyone wants confidentiality around personal information, at the very least.
Why Must AI Be Explainable?

In the previous edition of this series, we addressed inclusivity and accountability — two pillars of the global responsible AI framework that make AI systems ethical. Today, we’re looking at the principles that make AI explainable — fairness, transparency, and privacy. Artificial intelligence and machine learning systems make their own decisions through a codified process that may even be stochastic by design. Yes, humans created these technologies, but that does not mean we can interpret and explain every decision an AI system makes (or even most of them). Even today, many ML models are considered “black boxes” that are impossible to fully explain — neural networks in particular are notoriously difficult to understand. But a lack of understanding is a breeding ground for biases, dangerous assumptions, and performance drift. From a designer’s perspective, this makes explainability invaluable. But what about from a user’s perspective? It is equally critical. Users want to know what data a model is based on, how AI systems arrive at their results, and whether or not those answers are correct. When users understand (or at least feel they understand) the technologies they use, they feel comfortable and confident, and are more likely to engage with them.
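To ground the idea of explaining “what data the model leans on,” here is a minimal sketch of one widely used explainability technique, permutation importance, using scikit-learn. The dataset and feature names are synthetic placeholders, not drawn from any project mentioned here.

```python
# Permutation importance: shuffle each feature in turn and measure how much the
# model's accuracy drops. A large drop means the model leans heavily on that feature.
# Data is synthetic; this is an illustration, not a production pipeline.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```

Even a simple report like this gives users and designers a shared, inspectable answer to “what is this decision based on?”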

The Three Slopes of an Uphill Battle

AI systems are innately complex, especially for non-technical users (who make up the vast majority of the general population), and that makes getting explainability right an uphill battle. But there’s more: each of the three principles (fairness, transparency, and privacy & security) presents its own set of challenges. At Hatch, we’ve applied the Global Responsible AI Framework to unique AI projects and had the opportunity to study these challenges up close and on a global scale. Here is what we learned (and what you should apply to your AI systems):

1. Fairness: Defaults are not fair for everyone.
Before we can build fairness into an AI system, we need to give it a clear and objective meaning. Unfortunately, fairness can mean different things to different people and varies across cultures and contexts. What’s fair to one group might feel unnecessary or insufficient to another. Every piece of software ships with certain default values; naturally, these defaults are based on assumptions meant to cover a wide variety of use cases and demographics, but they are not inherently bias-free. In an AI system, these default settings shape the decision-making process and can inadvertently favor certain groups while disadvantaging others. We saw this while working on a visual discovery platform used globally by millions of people every day: the default search results for users in Latin America, Northeast Asia, and Europe didn’t fully meet their expectations of fairness.
Action: Shift your focus away from a universally accepted standard of fairness, because it does not exist. Instead, AI should reflect the diversity of its users across all of the regions it’s intended for. Imposing a one-size-fits-all solution is ineffective and inefficient.
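As an illustration of how such a regional gap can be surfaced, here is a hedged sketch (ours, not the platform’s actual pipeline) of a simple per-region check: compare the rate at which the default ranking produces the preferred outcome for each region. The data and labels are synthetic.

```python
# Demographic-parity-style check: does the default behavior serve some regions
# noticeably worse than others? Data below is invented for illustration.
import pandas as pd

decisions = pd.DataFrame({
    "region": ["LATAM", "LATAM", "EU", "EU", "NE_ASIA", "NE_ASIA", "NA", "NA"],
    "shown_preferred_result": [0, 1, 1, 1, 0, 0, 1, 1],
})

# If one region's rate sits far below the others, the "default" is not neutral.
rates = decisions.groupby("region")["shown_preferred_result"].mean()
print(rates)
print("max gap between regions:", rates.max() - rates.min())
```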

2. Transparency: Users want different levels of information.
When users can’t see or understand how AI makes decisions, they are less likely to trust and use the technology. At the same time, users want to be able to customize transparency based on the service offered. For instance, while conducting initial research for an AI-powered carbon credit calculator, we discovered that users prefer the option to adjust variables and manually account for the significant differences between forests worldwide, rather than relying solely on opaque AI algorithms.
Action: Transparency in AI is as much about allowing users to control and modify AI behavior as it is about openness. Companies need to educate users on how the AI functions and give them the ability to adjust AI behaviors and parameters to suit their preferences. The range of customizability will depend on the local preferences of the target audiences.
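To make the pattern concrete, here is a minimal sketch of “AI-suggested defaults that users can inspect and override.” The field names, values, and formula below are invented for illustration and are not taken from the actual calculator.

```python
# The system proposes AI-derived regional assumptions; the user can see them,
# adjust them, and immediately observe the effect on the estimate.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ForestAssumptions:
    biomass_density_t_per_ha: float   # AI-suggested default for the region (hypothetical)
    carbon_fraction: float            # fraction of biomass that is carbon (hypothetical)

def estimate_carbon_t(area_ha: float, assumptions: ForestAssumptions) -> float:
    """Carbon stock estimate = area x biomass density x carbon fraction."""
    return area_ha * assumptions.biomass_density_t_per_ha * assumptions.carbon_fraction

ai_default = ForestAssumptions(biomass_density_t_per_ha=120.0, carbon_fraction=0.47)
print("AI default:", estimate_carbon_t(area_ha=50, assumptions=ai_default))

# A user who knows the local forest better adjusts one variable and sees the effect.
user_adjusted = replace(ai_default, biomass_density_t_per_ha=95.0)
print("User-adjusted:", estimate_carbon_t(area_ha=50, assumptions=user_adjusted))
```

The point is not the formula itself but the pattern: the AI’s assumption is visible, editable, and its effect on the result is immediate.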

3. Privacy & Security: Cultural values shape AI privacy but everyone wants confidentiality.
Most people know that AI requires datasets for training, and many also know that, more often than not, that data is our own (albeit anonymized). When implementing an AI solution, you may seek more information to create better systems, but not everyone will be willing to provide it.
Action: Companies must understand local contexts to ensure a fair exchange. Start with a basic truth: everyone wants confidentiality in personal and delicate matters. From there, find a balance between data requirements (the data your AI needs to function) and local preferences (the data the user is open to sharing), particularly when handling culturally sensitive information.
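One way to operationalize that balance is to collect only the fields the model genuinely needs and treat everything else as opt-in. The sketch below is a minimal illustration under our own assumptions; the field names and consent mechanism are hypothetical.

```python
# Data minimization plus pseudonymization: keep required fields, keep sensitive
# optional fields only with explicit consent, and never store the raw identifier.
import hashlib

REQUIRED_FIELDS = {"age_band", "country"}        # what the model actually needs (assumed)
OPTIONAL_FIELDS = {"income_band", "religion"}    # culturally sensitive; opt-in only (assumed)

def minimize_record(raw: dict, user_consented: set) -> dict:
    """Keep required fields, add optional ones only with explicit consent."""
    allowed = REQUIRED_FIELDS | (OPTIONAL_FIELDS & user_consented)
    record = {k: v for k, v in raw.items() if k in allowed}
    # Replace the direct identifier with a one-way pseudonym.
    record["user_id"] = hashlib.sha256(raw["user_id"].encode()).hexdigest()[:16]
    return record

raw = {"user_id": "alice@example.com", "age_band": "25-34",
       "country": "BR", "income_band": "mid", "religion": "unspecified"}
print(minimize_record(raw, user_consented={"income_band"}))
```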

Preparing For The Future

From a societal perspective, all software should be ethical, private, and secure. However, there is an increasing need for explainable AI, not just for ethical reasons but also for business longevity. AI cannot be explainable, and by extension trustworthy, until it is fair, transparent, and secure. As more and more AI systems enter the market, consumers will have greater freedom to choose solutions that respect their privacy and reward their loyalty with transparent and fair behavior. We’re already seeing this happen with the rise of privacy-focused alternatives to long-standing behemoths. In addition, there are growing concerns about deliberate data poisoning, which may not be caught in time without rigorous testing early in the training phase. This is another reason for safety to be built into AI right from the beginning. The global responsible AI framework instills the values that help companies prepare for the challenges ahead, and it does so at a global scale.
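As a final illustration, a common way to act on the data-poisoning concern is to screen training data for anomalous samples before any model sees it. The sketch below uses an off-the-shelf anomaly detector on synthetic data; the contamination threshold and the injected cluster are purely illustrative.

```python
# Flag training samples that sit far from the rest of the data so they can be
# reviewed before training. IsolationForest is one off-the-shelf anomaly detector.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
clean = rng.normal(0, 1, size=(500, 4))       # the bulk of the training data
poisoned = rng.normal(8, 0.5, size=(5, 4))    # a small injected cluster (simulated)
X = np.vstack([clean, poisoned])

detector = IsolationForest(contamination=0.02, random_state=0).fit(X)
flags = detector.predict(X)                   # -1 marks suspected outliers
print("samples flagged for review:", int((flags == -1).sum()))
```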


