How Do We Build Trust in Artificial Intelligence? The Pros and Cons of Explainable AI

Making AI more transparent could be the key to its wider adoption.

The biggest tech trend of the year, Artificial Intelligence (AI) is becoming increasingly sophisticated and widespread. While already used across a number of industry sectors, AI has not yet reached its full potential and offers huge opportunities for businesses of all sizes. In fact, AI could contribute up to $15.7 trillion to the global economy by 2030, according to a report from PwC.

However, the more that AI is incorporated into businesses and into our daily lives, the greater the need to understand it. While headlines involving nightmare scenarios where killer robots and autonomous weapons turn on humans may be somewhat alarmist, the issue of trust is a primary concern for the future of AI. Some 67 per cent of business leaders believe that AI and automation will have a negative impact on stakeholder trust levels in their industry within the next five years, according to a PwC survey.

“Also known as XAI or Transparent AI, Explainable AI describes a system where the actions of an AI can be easily understood by humans”

This highlights the need for greater transparency around the technology, and that's where Explainable AI can help. Also known as XAI or Transparent AI, Explainable AI describes a system whose actions can be easily understood by humans. The aim is to understand exactly how and why an AI is making certain decisions. Undoubtedly a step in the right direction, Explainable AI makes the technology more transparent.

In many AI-based systems, end users don't know how or why a decision has been reached. Not only does this 'black box' approach do little to foster trust, it could also make it difficult or even impossible to comply with privacy regulations such as the EU's General Data Protection Regulation (GDPR). Under such laws, businesses need AI that makes explainable decisions that can be reproduced. Companies may also be obliged to reveal where they got the data that drives their AI. On a wider scale, this encourages good practice, offering the opportunity to make decision-making systems more accountable. Explainable AI is also important for detecting flaws and biases in data that could lead to incorrect or unfair decisions.

Fostering more trust around the use of AI is essential. Only if there is confidence in the technology can it be deployed widely across an organisation. But while Explainable AI may be a sensible goal, there are significant challenges involved.

Context will be an important consideration – for example, understanding the algorithms behind the TV and film recommendations from Netflix* isn't vital for most people, as the impact is minimal. However, understanding AI in situations where the impact is much more serious is of greater concern. Making a diagnostic decision in healthcare or taking military action using an AI-based system requires a detailed understanding of exactly how the decision was reached. That's why the degree to which a company has to explain the workings behind its AI may well depend on the seriousness of the consequences and the levels of autonomy involved.

“There is an opportunity to make our decision-making more systematic and accountable,” said Casimir Wierzynski, Senior Director, Office of the CTO, Artificial Intelligence Products Group at Intel in a blog post. “Many issues around explainability are social policy questions: What are the qualities of decision making that we want?

“To engineers, these can be translated into system requirements that can be designed, measured, and continuously tested. They will depend on the domain where they are applied — tagging vacation photos has different requirements compared to analysing medical images.

“But as we rely more on automated systems for making decisions, we have an unprecedented opportunity to be more explicit and systematic about the values that guide how we decide,” he concludes.

However, the impact of Explainable AI may not always be positive. If companies are compelled to reveal the inner workings of their technology, this could mean that they effectively have to give away their ideas and intellectual property (IP). What's more, there are questions over what exactly we mean by 'explainable'. The inner workings of many AI models are difficult even for specialists to interpret, and certainly well beyond the grasp of most end users, so an explanation is unlikely to make any sense to them.

There is also a trade-off between performance and explainability. If every step in an AI system has to be documented and explained, the process will inevitably become slower. Not only could this hinder the speed of the system and limit its applications, it could also stifle further innovation.

Businesses are set to face increasing pressure to implement transparent AI systems in the coming months and years. Explainability will be key to building the trust needed for the wider adoption of AI. However, it may take some time to reach a widely agreed definition of exactly what Explainable AI should involve.

*Other names and brands may be claimed as the property of others