Global Responsible AI: Reliability and Safety

This is part three of a series of articles where we take an in-depth look at the Responsible AI Framework and share our insights from 15 international projects involving responsible AI. The framework was originally developed by Microsoft in 2017; over the past three years, Hatch Studio has adapted it to help companies develop responsible AI systems that work across countries, cultures, and languages.

July 25, 2024
6-7 minute read

Key Takeaways:
  • Reliability and safety are two of the most important parts of building trust in AI systems, but trust itself can mean slightly different things in different cultures.
  • The right approach to reliability and safety involves using Emotional Intelligence to build trust, that is, giving AI the ability to interpret cultural and social cues to align with local customs and expectations.
In the previous two articles, we talked about the principles that make AI systems explainable and ethical. In this article, we cover the two principles that bridge the gap between them and make AI predictable: building reliability and safety into AI.
What is a Reliable AI System?

Unlike reliability in other machines and appliances, reliability in AI systems does not refer to preventing physical failures or software outages. It refers to the AI consistently performing its intended functions accurately and efficiently: producing consistent results under varying conditions and adapting to new data and environments while maintaining performance to a satisfactory and expected degree. The word “expected” is key here. AI systems must not act unexpectedly, irrespective of the data they are provided with.
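To make the idea of “expected” behavior concrete, here is a minimal sketch of an automated reliability check. It assumes a hypothetical model object with a `predict` method and a known baseline accuracy; the point is simply to verify that performance stays within a tolerated band when inputs are lightly perturbed. This is an illustration, not a prescribed implementation.

```python
import random

def perturb(text: str) -> str:
    """Apply a small, meaning-preserving perturbation (here: random casing)."""
    return "".join(c.upper() if random.random() < 0.1 else c for c in text)

def reliability_check(model, examples, labels, baseline_accuracy, tolerance=0.05):
    """Check that accuracy on perturbed inputs stays within `tolerance`
    of the baseline. `model` is a hypothetical object with a .predict method."""
    correct = 0
    for text, label in zip(examples, labels):
        if model.predict(perturb(text)) == label:
            correct += 1
    perturbed_accuracy = correct / len(examples)
    # A reliable system degrades gracefully, not unexpectedly.
    return abs(baseline_accuracy - perturbed_accuracy) <= tolerance
```

A check like this can run in a test suite or a deployment gate, so that an unexpected drop in behavior is caught before users see it.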

Trust in Global Contexts Differs

At the end of the day, both reliability and safety serve one purpose: building trust in our AI system for the end user. But for AI systems deployed internationally, trust is rarely a static concept. How users come to trust an AI system is a guiding principle for development, but what fosters trust in one cultural setting might not work in another, or worse, might backfire completely. Therefore, trust-building strategies must be culturally sensitive and adaptable, and that is itself a challenge.
A common strategy for building trust is to create anthropomorphic AI systems, that is, AI systems that behave like humans do. These systems are most common in healthcare applications, where companionship greatly benefits users. But although this is a very effective strategy, it won’t necessarily be equally effective in every part of the world. Some cultures may prefer AI systems that exhibit human-like characteristics for companionship, while others might find this intrusive or unsettling.
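One practical way to handle this variation is to treat anthropomorphism as a tunable, per-locale setting rather than a global default. The sketch below is purely illustrative: the locale keys, persona fields, and default values are assumptions made for the example, and real settings should come from local user research.

```python
from dataclasses import dataclass

@dataclass
class PersonaConfig:
    """Tunable persona traits for a conversational AI system."""
    use_first_person: bool   # "I understand" vs. neutral phrasing
    express_empathy: bool    # whether to mirror the user's emotional state
    formality: str           # "casual" | "polite" | "formal"

# Hypothetical defaults; real values should come from local user research.
PERSONAS = {
    "en-US": PersonaConfig(use_first_person=True, express_empathy=True, formality="casual"),
    "ja-JP": PersonaConfig(use_first_person=False, express_empathy=True, formality="formal"),
}

def persona_for(locale: str) -> PersonaConfig:
    # Fall back to a conservative, low-anthropomorphism persona.
    return PERSONAS.get(locale, PersonaConfig(False, False, "polite"))
```

Encoding these choices explicitly makes them testable and reviewable, instead of leaving cultural assumptions buried in prompt text or model behavior.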

The Right Approach to Trust-Building Strategies

There are several trust-building strategies, each with its own set of advantages, disadvantages, and applications. We can’t tell you which one will be best for your specific use case. However, what we can share is an approach that we’ve learned and repeatedly tested over the past three years: prioritizing Emotional Intelligence (EI) to build trust. Building Emotional Intelligence into AI systems gives them the ability to interpret cultural and social cues, adapting responses to align with local customs and expectations. For instance, when working on an AI tool for Veterans Mental Health Training, we saw the lack of EI in existing AI systems and the need for anthropomorphic qualities (such as empathy, vulnerability, and the personal comfort or discomfort that can come up in sensitive use cases). Different people process emotions differently, and AI systems need to be able to do the same. Without that, responses feel unnatural, calculated, and dishonest: the exact opposite of what you need to build trust.
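As a rough illustration of what interpreting cues and adapting responses can look like in code, the sketch below routes a coarse emotional cue through the locale persona from the earlier example. The `detect_emotion` function here is a keyword stand-in for whatever affect or sentiment model you actually use, and `PersonaConfig` refers to the hypothetical class sketched above.

```python
def detect_emotion(message: str) -> str:
    """Stand-in for a real affect-detection model; returns a coarse label."""
    distress_words = {"overwhelmed", "hopeless", "alone", "scared"}
    return "distress" if distress_words & set(message.lower().split()) else "neutral"

def adapt_response(message: str, base_reply: str, persona: PersonaConfig) -> str:
    """Prepend an empathetic acknowledgment only when both the detected
    emotion and the locale persona call for it."""
    if detect_emotion(message) == "distress" and persona.express_empathy:
        opener = ("I hear how hard this is. " if persona.use_first_person
                  else "That sounds genuinely difficult. ")
        return opener + base_reply
    return base_reply
```

The design choice that matters is the separation of concerns: what the user is feeling is detected once, and how the system responds to that feeling is governed by a culturally specific configuration.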

Why the Global Responsible AI Framework Exists

As we put together the final piece in the responsible AI puzzle, the value of a cohesive and unified framework should become apparent. Whether you find the framework enlightening or cautiously vague, it serves as a foundation to guide human ingenuity and critical thinking in an ever-evolving field. Artificial intelligence is advancing so rapidly that establishing rigid standards is impractical; instead, this framework is designed to evolve alongside technological progress and human adaptation. It’s the product of years of research and real-world applications, enabling companies to bridge the gap between the cutting edge of technology and equitable outcomes that positively impact the communities around us.
As AI becomes a bigger part of daily life, people’s perceptions, feelings, and expectations will naturally evolve. To succeed, organizations must embrace this paradigm shift and invest in continuous evolution, better human integration, and ongoing global research. This will be the first step in building responsible and purposeful AI systems that prioritize human values and needs.


