Can machines think? That was the question being asked in the 1950s. In the years since, the question has shifted: it is no longer whether machines can think, but how they think.
That is admittedly odd, seeing as humans build the machines, but it is what it is. Humans built machines that can reach conclusions independently, and now humans are trying to understand those machines’ thought processes.
Machines are relied upon more and more in our society. Algorithms make decisions across a wide array of domains, from law to e-commerce platforms to media. Some programs backed by artificial intelligence have attained the performance levels of human experts.
Even though humans created these algorithms, humans are often hard-pressed to understand how they come to their conclusions. Take the now-comical case of the Amazon pricing war over Peter Lawrence’s The Making of a Fly. It was worked out that once a day, one seller (profnath) set its price to 0.9983 times the price of the copy offered by another seller (bordeebook). Meanwhile, bordeebook’s price was set to 1.270589 times the price shown by profnath. The book about flies peaked at $23,698,655.93 before anyone looked closely at what the algorithms were doing. We must understand just how these machines come to their conclusions.
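To see how two individually sensible repricing rules compound into an absurd number, here is a minimal sketch of that loop. Only the two multipliers and the peak price come from the story above; the starting price is an assumption for illustration.

```python
# A minimal sketch of the pricing feedback loop described above.
# Only the multipliers and the peak price come from the story;
# the starting price is an assumption for illustration.
PROFNATH_FACTOR = 0.9983      # profnath undercuts bordeebook slightly
BORDEEBOOK_FACTOR = 1.270589  # bordeebook marks its copy up above profnath
PEAK = 23_698_655.93          # the price the listing eventually reached

bordeebook = 45.00  # assumed starting price of bordeebook's copy

days = 0
while bordeebook < PEAK:
    days += 1
    profnath = PROFNATH_FACTOR * bordeebook    # daily repricing rule 1
    bordeebook = BORDEEBOOK_FACTOR * profnath  # daily repricing rule 2

combined = PROFNATH_FACTOR * BORDEEBOOK_FACTOR
print(f"Each daily cycle multiplies the price by about {combined:.4f},")
print(f"so the listing crosses ${PEAK:,.2f} in roughly {days} days.")
```

Each cycle multiplies the price by roughly 1.27, so the exponential growth takes only a couple of months to push a biology textbook into the tens of millions of dollars.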
To explain our daily lives, we rely on a rich and expressive vocabulary: we use examples and counterexamples, create rules and prototypes, and highlight important characteristics that are present or absent. When interacting with algorithmic decisions, users will expect and demand the same level of expressiveness from AI. This is where explainable artificial intelligence comes in.
Explainable Artificial Intelligence
Explainable artificial intelligence (XAI) is a set of processes and methods that enables human users to comprehend and trust the results and output created by machine learning algorithms. It is a relatively new concept focused on helping humans interpret the output of sophisticated machine learning models.
As AI becomes more advanced, humans are challenged to comprehend and retrace how an algorithm produces its results. The whole calculation process turns into what is commonly referred to as a “black box” that is impossible to interpret.
These black-box models are created directly from the data, and often not even the engineers or data scientists who built them can explain what is happening inside or how the algorithm arrived at a specific result.
Several experts have offered plain-language definitions of explainable AI:
“The term ‘explainable AI’ or ‘interpretable AI’ refers to humans being able to easily comprehend through dynamically generated graphs or textual descriptions the path artificial intelligence technology took to make a decision.” –Keith Collins, executive vice president and CIO, SAS.
“Explainable AI is where we can interpret the outcomes of AI while being able to traverse back, from outcomes to the inputs, on the path the AI took to arrive at the results.” –Phani Nagarjuna, chief analytics officer, Sutherland.
“Explainable AI in simple terms means AI that is transparent in its operations so that human users will be able to understand and trust decisions. Organizations must ask the question – can you explain how your AI generated that specific insight or decision?” –Matt Sanchez, founder and CTO, CognitiveScale.
But why exactly is explainable AI important?
As critical as they are to the success of our society, AI systems are notorious for their ‘black-box’ nature, leaving many users without visibility into how or why decisions have been made.
Heena Purohit, senior product manager at IBM Watson IoT, notes that AI and machine learning already do a great job of processing vast amounts of data in often complex ways. But the goal of AI and ML, Purohit says, is to help people be more productive and make smarter, faster decisions – which is much more complicated if people have no idea how those systems arrive at their conclusions.
Explainable AI is, in a sense, about getting people to trust and buy into these new systems and how they’re changing the way we work.
“As the purpose of the AI is to help humans make enhanced decisions, the business realizes the true value of the AI solution when the user changes his behavior or takes action based on the AI output [or] prediction,” Purohit says. “However, to get a user to change his behavior, he will have to trust the system’s suggestions. This trust is built when users can feel empowered and know how the AI system came up with the recommendation [or] output.”
There must be assurance that business outcomes left in the hands of AI are understandable and auditable. In many industries, explainability is a regulatory requirement for companies employing such models.
Conclusion
With the advent of XAI, we might be one step closer to making machines accountable for their actions in the same manner that humans are.
MindsDB believes that any ML-generated prediction used to support the decision-making process must also answer three questions (see the sketch after this list):
- Why can I trust the prediction?
- Why did the model provide this prediction?
- How can I make these predictions more reliable?
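As one illustration of how such questions can start to be answered in practice – using a generic technique, not MindsDB’s own tooling – the sketch below trains a simple model and applies permutation feature importance from scikit-learn to show which inputs actually drive its predictions. The dataset and model are assumptions chosen purely for demonstration.

```python
# A minimal, illustrative sketch of one common explainability technique:
# permutation feature importance. This is a generic method, not MindsDB's API.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Assumed example dataset and model, chosen only for demonstration.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops:
# a large drop suggests the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Print the five most influential features.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda item: item[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Seeing which inputs the model actually relies on gives users a concrete handle on “why did the model provide this prediction?” and a starting point for deciding whether to trust it.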
Without explanations of an AI model’s internal workings and its decisions, there is a risk that the model will not be considered trustworthy or legitimate. XAI provides the understandability and transparency needed to enable greater trust in AI-based solutions.
If critical decisions are to be left in the hands of machines, it is only sensible that humans understand how these machines reach their conclusions. Just as a judge in the courtroom must cite the laws and precedents behind a ruling, and a mathematician is required to show the work behind a final result, an AI system should be able to explain how it arrived at its output.