Global Responsible AI: Reliability and Safety
This is part three of a series of articles in which we take an in-depth look at the Responsible AI Framework and share our insights from 15 international projects involving responsible AI. Originally developed by Microsoft in 2017, the framework has been adapted by Hatch Studio over the past three years to help companies develop responsible AI systems that work across countries, cultures, and languages.
What is a Reliable AI System?
Unlike reliability in other machines and appliances, reliability in AI systems does not refer to preventing physical failures or software outages — it refers to the AI consistently performing its intended functions accurately and efficiently. It produces consistent results under varying conditions and adapts to new data and environments while maintaining performance to a satisfactory and expected degree.
The word “expected” is key here — AI systems must not act unexpectedly, irrespective of the data they’re provided with.
As AI becomes a bigger part of daily lives, people’s perceptions, feelings and expectations will naturally evolve. To succeed, organizations must embrace this paradigm shift and invest in continuous evolution, better human integration and ongoing global research. This will be the first step in building responsible and purposeful AI systems that prioritize human values and needs.
Trust Differs Across Global Contexts
At the end of the day, both reliability and safety serve one purpose: building trust in our AI system for the end user. But for AI systems deployed internationally, trust is rarely a static concept. How users come to trust an AI system is a guiding principle for development, but what fosters trust in one cultural setting might not work in another, or worse, might backfire completely. Therefore, trust-building strategies must be culturally sensitive and adaptable, and that is itself a challenge. A common strategy for building trust is to create anthropomorphic AI systems, that is, AI systems that behave the way humans do. These systems are most common in healthcare applications, where companionship greatly benefits users. But although this is a very effective strategy, it won’t necessarily be equally effective in every part of the world. Some cultures may prefer AI systems that exhibit human-like characteristics for companionship, while others might find this intrusive or unsettling.
The Right Approach to Trust-Building Strategies
There are several trust-building strategies, each with its own set of advantages, disadvantages, and applications. We can’t tell you which one will be best for your specific use case. However, what we can share is an approach that we’ve learned and repeatedly tested over the past three years: prioritizing Emotional Intelligence to build trust.
Building Emotional Intelligence into AI systems gives them the ability to interpret cultural and social cues, adapting responses to align with local customs and expectations. For instance, when working on an AI tool for Veterans Mental Health Training, we saw the lack of EI in existing AI systems and the need for anthropomorphic qualities (such as empathy, vulnerability, and the personal comfort or discomfort that can surface in sensitive use cases). Different people process emotions differently, and AI systems need to be able to do the same. Without that, responses feel unnatural, calculated, and dishonest: the exact opposite of what you need to build trust.
Why the Global Responsible AI Framework Exists
As we put together the final piece in the responsible AI puzzle, the value of a cohesive, unified framework should become apparent. Whether you find the framework enlightening or cautiously vague, it serves as a foundation to guide human ingenuity and critical thinking in an ever-evolving field. Artificial intelligence is advancing so rapidly that establishing rigid standards is impractical; instead, this framework is designed to evolve alongside technological progress and human adaptation. It is the product of years of research and real-world application, enabling companies to bridge the gap between the cutting edge of technology and equitable outcomes that positively impact the communities around us. People's perceptions, feelings, and expectations about AI will continue to evolve as they become more accustomed to these tools. Moving forward, organizations need to embrace this paradigm shift and invest in continuous evolution, better human integration, and ongoing global research to build responsible and purposeful AI systems that prioritize human values and needs.