Explaining Explainable AI for Conversations

In just two or three decades, artificial intelligence (AI) has left the pages of science fiction novels and become one of the cornerstone technologies of modern society. Success in machine learning (ML) has spawned a flood of new AI applications almost too numerous to count, from autonomous machines and biometrics to predictive analytics and chatbots.

One application of AI that has emerged in recent years is Conversation Intelligence (CI). While automated chatbots and virtual assistants deal with human-to-computer interaction, CI takes a closer look at human-to-human interaction. The ability to monitor and extract data from human conversations, including tone, mood and context, opens up seemingly limitless possibilities.

For example, data could be generated and logged from call center interactions, with everything from speaker ratings and customer satisfaction to call summaries and action items being filed automatically. This would drastically reduce the administrative burden of running call centers and give agents more time to talk to customers. The generated data could even be used to design employee training programs and to recognize and reward outstanding work.

But something is missing – trust. Using AI in this way is incredibly useful, but currently still requires a leap of faith from the companies that use it.

Both as companies and as a society, we place great trust in AI-based systems. Social media companies like Twitter now use AI-based algorithms to curb hate speech and keep users safe online. Healthcare providers around the world are increasingly using AI, from chatbots that can triage patients to algorithms that help pathologists make more accurate diagnoses. The UK government's tax authority uses an AI tool called Connect to analyze tax records and uncover fraudulent activity. There are even examples of AI being used in law enforcement, with tools like facial recognition, crowd surveillance and gait analysis used to identify suspects.

We take that leap of faith in exchange for a more efficient, connected, and seamless world. This world is built on “Big Data,” and we need AI to help us manage the flow of that data and put it to good use. This applies in the macroeconomic sense as well as to individual companies. But despite our increasing reliance on AI as a technology, we know very little about what’s going on under the hood. As the volume of data grows and the paths AI takes to reach a decision become more sophisticated, we humans have lost the ability to understand and trace those paths. What we are left with is a “black box” that is almost impossible to interpret.

This raises a question: how can we trust AI-based decisions if we cannot understand how they are made? It is an increasing source of frustration for companies trying to ensure their systems function properly, meet the correct regulatory standards, and operate at maximum efficiency. Consider the recruitment team at Amazon, who had to scrap their secretive AI recruitment tool after realizing it showed prejudice against women. They thought they had the “holy grail” of recruiting – a tool that could scan hundreds of resumes and select the best ones for review, saving them countless hours of work. Through repetition and reinforcement, the AI managed to convince itself that male candidates were somehow preferable to female ones. Had the team continued to trust the AI blindly – which they did only for a short period of time – the consequences for the company could have been devastating.

When it comes to business frustration and the fear of putting too much trust in AI, the burgeoning field of CI is an ideal example.

The world of human interaction has been a hive of AI innovation for years. It’s one thing to use Natural Language Processing (NLP) to create chatbots or transcribe speech to text, but it’s quite another to derive meaning and understanding from conversations. This is exactly what Conversation Intelligence does. It goes beyond deterministic “A to B” results and aims to analyze less tangible aspects of conversations such as tone, mood and meaning.

For example, when deployed in a call center, CI can be used to assess call agent effectiveness, gauge a customer’s emotional state, or provide an automated call summary with action points. These are sophisticated, subjective interactions that can’t simply be scored as right or wrong. If a call center wants to use CI to streamline interactions, train agents, and update customer records, it must be able to trust the underlying AI to do its job effectively. This is where explainable AI, or “XAI,” comes in.

Every company is different and has its own definition of what its Conversation Intelligence stack should learn and predict. It is important that the solution gives the humans using the system a complete view of its predictions, so they can continuously approve or reject what the system produces. Instead of adopting a black-box deep learning system to execute every task, a modular system with full transparency and control over each aspect of the system’s predictions is crucial. For example, instead of a single deep learning system that does everything, a deterministic, programmable system can use separate components for tracking the sentiment of a call, detecting topics, creating summaries, and recognizing specific aspects such as the nature of the problem in a support call or the requests in a customer feedback call. With such a modular architecture, the entire conversation intelligence solution is built to be traceable and deterministic.
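As an illustrative sketch of this modular, traceable design – the component names and keyword heuristics below are invented for the example, not any vendor's actual implementation – each prediction can come from its own small component that exposes the evidence behind it, so a human reviewer can approve or reject it:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Prediction:
    component: str                   # which module produced this prediction
    label: str                       # the prediction itself
    evidence: List[str]              # transcript lines that drove the prediction
    approved: Optional[bool] = None  # set later by a human reviewer

class SentimentTracker:
    """Flags negative sentiment via a simple keyword heuristic (illustrative only)."""
    NEGATIVE = {"angry", "frustrated", "cancel"}

    def run(self, lines: List[str]) -> Prediction:
        hits = [l for l in lines if any(w in l.lower() for w in self.NEGATIVE)]
        return Prediction("sentiment", "negative" if hits else "neutral", hits)

class TopicDetector:
    """Maps keywords to topics, so every topic label is traceable to evidence."""
    TOPICS = {"billing": {"invoice", "charge"}, "shipping": {"delivery", "package"}}

    def run(self, lines: List[str]) -> Prediction:
        found, evidence = [], []
        for topic, keywords in self.TOPICS.items():
            matched = [l for l in lines if any(w in l.lower() for w in keywords)]
            if matched:
                found.append(topic)
                evidence.extend(matched)
        return Prediction("topics", ", ".join(found) or "unknown", evidence)

class Pipeline:
    """Runs independent components; no single black box owns all predictions."""
    def __init__(self, components):
        self.components = components

    def analyze(self, transcript: str) -> List[Prediction]:
        lines = transcript.splitlines()
        return [c.run(lines) for c in self.components]

transcript = "I'm frustrated about this invoice.\nThe charge looks wrong."
for p in Pipeline([SentimentTracker(), TopicDetector()]).analyze(transcript):
    print(p.component, "->", p.label, "| evidence:", p.evidence)
    p.approved = True  # a human reviewer can accept or reject each prediction
```

Because each label carries the exact transcript lines that produced it, a reviewer can audit or overrule any single component without retraining a monolithic model.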

When AI processes were simple and deterministic, trust in those processes was never an issue. Now that these processes have become more complex and less transparent, as in the CI example above, trust has become essential for companies looking to invest in AI. Mariarosaria Taddeo, in her still-relevant article from over a decade ago, referred to this as “e-trust” – how humans trust computerized processes, and to what extent we allow artificial agents to be involved in this relationship.

Explainable AI (XAI) is an emerging field of machine learning that aims to make these artificial agents fully transparent and easier to interpret. The Defense Advanced Research Projects Agency (DARPA) in the US is one of the leading organizations pursuing XAI solutions. DARPA argues that the potential of AI systems is severely limited by their inability to explain their actions to human users. In other words, a lack of trust from organizations prevents them from exploring the full range of what AI and ML could offer.

The goal is to develop a set of machine learning techniques that can produce explainable models, enabling human users to understand and manage the next generation of artificially intelligent solutions. These ML systems will be able to explain their reasoning, identify their own strengths and weaknesses, and communicate how they “learn” from the data fed to them. For DARPA, this is part of a push toward so-called third-generation AI systems, where machines understand the context and environment in which they operate.
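To make the idea of an explainable model concrete, here is a toy example (the vocabulary and weights are invented for illustration): a linear text classifier whose per-word weights double as an explanation, because every prediction decomposes into word-level contributions a human can inspect.

```python
# Toy linear classifier: the score is a sum of per-word weights, so every
# prediction can be broken down into the contribution of each word.
WEIGHTS = {"refund": -1.5, "thanks": 2.0, "broken": -2.0, "great": 1.5}

def classify_with_explanation(text: str):
    contributions = {w: WEIGHTS[w] for w in text.lower().split() if w in WEIGHTS}
    score = sum(contributions.values())
    label = "positive" if score >= 0 else "negative"
    # The "explanation" is the list of words ranked by how strongly
    # they pushed the decision one way or the other.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return label, ranked

label, why = classify_with_explanation("the product arrived broken and i want a refund")
print(label, why)  # → negative [('broken', -2.0), ('refund', -1.5)]
```

Real XAI techniques target far more complex models than this, but the principle is the same: the system surfaces which inputs drove its decision, and by how much, instead of returning only a bare label.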

For AI to reach its full potential, we need to move away from ones and zeros and into more subjective analysis. The technology is here, we just need more reasons to trust it.

Surbhi Rathore is CEO and co-founder of Symbl.ai. Symbl brings to life her vision of a programmable platform that enables developers and enterprises to monitor, act on, and comply with voice and video calls in their products and workflows at scale, without having to build in-house data science expertise.