How can contact centers use AI-powered chatbots responsibly?

Chatbots have been evolving steadily for years, but in 2022 they showed they were ready to make a big leap forward.

When ChatGPT was introduced a few weeks ago, the tech world was ecstatic. New York Times tech columnist Kevin Roose dubbed it “quite simply the best artificial intelligence chatbot ever launched for the general public,” and social media has been deluged with examples of its ability to churn out convincingly human-like prose.[1] Some venture capitalists even went so far as to say that its launch could be as earth-shattering as the launch of the iPhone in 2007.[2]

Indeed, ChatGPT looks like it represents a major advancement in artificial intelligence (AI) technology. But as many users quickly discovered, it is still plagued by many flaws, some of them serious. Its advent not only signals a turning point for AI development, but also an urgent call to prepare for a future that is arriving sooner than many anticipated.

In short, ChatGPT brings new urgency to the question: How can we develop and use this technology responsibly? Contact centers cannot answer this question alone, but they have a special role to play.

ChatGPT: What is all the hype about?

Answering this question first requires an understanding of what ChatGPT is and what it represents. The technology is the brainchild of OpenAI, the San Francisco-based AI company that also released the innovative DALL-E 2 image generator earlier this year. It was released to the public on November 30, 2022 and quickly gained momentum, reaching 1 million users within five days.

The bot’s capabilities stunned even Elon Musk, who co-founded OpenAI with Sam Altman. He echoed many people’s opinions when he called ChatGPT’s language processing “scary good.”[3]


So why all the hype? Is ChatGPT really that much better than all previous chatbots? In many ways, the answer seems to be yes.

The bot’s knowledge base and language processing capabilities far outperform other technologies on the market. It can provide quick, essay-length answers to seemingly countless questions, covering a wide range of topics, and it can even answer in different prose styles based on user input. You can ask it to write a formal letter of resignation or compose a quick poem about your pet. It produces academic essays with ease, and its prose is persuasive and, in many cases, accurate. In the weeks following its launch, Twitter was inundated with examples of ChatGPT answering every type of question users could think of.

The technology, as Roose points out, is “smarter. Stranger. More flexible.” It could well usher in a flurry of changes in conversational AI.[1]

A Wolf in Sheep’s Clothing: The Dangers of Disguised Misinformation

Despite all of its impressive features, ChatGPT still suffers from many of the flaws that have become familiar in AI technology. In such a powerful package, however, those flaws seem more ominous.

Early users reported a variety of issues with the technology. For example, like other chatbots, it quickly learned the biases of its users. It wasn’t long before ChatGPT was spouting offensive comments, suggesting that women in lab coats were probably just janitors, or that only Asian or white men make good scientists. Despite the system’s reported guardrails, users were able to coax these kinds of biased answers out of it fairly quickly.[4]

What’s more worrying about ChatGPT, however, are its human-like qualities, which make its responses all the more compelling. Samantha Delouya, a journalist for Business Insider, asked it to write a story she had already written — and was shocked by the results.


On the one hand, the resulting piece of “journalism” was remarkably accurate, if somewhat predictable. In less than 10 seconds, ChatGPT produced a 200-word article so similar to what Delouya might have written that she called it “frighteningly compelling.” The catch, however, was that the article contained fake quotes fabricated by ChatGPT. Delouya spotted them easily, but an unsuspecting reader might not have.[3]

Therein lies the catch with this type of technology. Its mission is to produce content and conversation that sounds compellingly human, not necessarily content that is true. And that opens up frightening new avenues for misinformation and, in the hands of nefarious users, more effective disinformation campaigns.

What are the political and other implications of such a powerful chatbot? It’s hard to say, and that’s what’s scary. In recent years we have already seen how easily misinformation can spread, not to mention the damage it can do. What if a chatbot could mislead even more efficiently and convincingly?

AI must not be left to its own devices: testing as the solution

Like many others reading the ChatGPT headlines, contact center executives might be wide-eyed at the possibilities of applying this advanced layer of AI to their chatbot solutions. But they must first address these issues and create a plan for using this technology responsibly.

Careful handling of ChatGPT – or whatever technology comes after it – is not a one-dimensional problem. No single actor can solve it alone, and it ultimately boils down to a set of issues affecting not just developers and users, but also public policy and governance. Still, all stakeholders should try to do their part, and for contact centers this means focusing on testing.


The surest route to chaos is to leave chatbots to handle every user question on their own, without human oversight. As we’ve already seen with the most advanced form of this technology, that doesn’t always end well.

Instead, as contact centers deploy increasingly advanced chatbot solutions, they must commit to regular, automated testing to catch bugs and issues as they arise, before they escalate into larger problems. Whether those are simple customer experience (CX) errors or more serious information errors, contact centers need to catch them early to fix the problem and retrain the bot.
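To make this concrete, here is a minimal sketch of what one automated regression check might look like. The `ask_bot` function is a hypothetical stand-in for a real chatbot API client; the test cases and phrases are illustrative assumptions, not part of any actual product.

```python
# Minimal sketch of an automated chatbot regression test.
# `ask_bot` is a hypothetical stub; in practice it would call your bot's API.

def ask_bot(prompt: str) -> str:
    """Hypothetical stand-in for a real chatbot client (illustrative only)."""
    canned = {
        "What are your opening hours?": "We are open 9am-5pm, Monday to Friday.",
        "How do I reset my password?": "Visit the account page and click 'Reset password'.",
    }
    return canned.get(prompt, "Sorry, I don't know.")

# Each case pairs a prompt with phrases the answer must (or must not) contain.
TEST_CASES = [
    {
        "prompt": "What are your opening hours?",
        "must_contain": ["9am", "5pm"],
        "must_not_contain": [],
    },
    {
        "prompt": "How do I reset my password?",
        "must_contain": ["reset password"],
        "must_not_contain": ["call us"],
    },
]

def run_regression_suite() -> list:
    """Run every test case; return a list of failure descriptions (empty = pass)."""
    failures = []
    for case in TEST_CASES:
        answer = ask_bot(case["prompt"]).lower()
        for phrase in case["must_contain"]:
            if phrase.lower() not in answer:
                failures.append(f"{case['prompt']!r}: missing {phrase!r}")
        for phrase in case["must_not_contain"]:
            if phrase.lower() in answer:
                failures.append(f"{case['prompt']!r}: forbidden {phrase!r}")
    return failures

if __name__ == "__main__":
    for failure in run_regression_suite():
        print("FAIL:", failure)
```

Run on a schedule, a suite like this flags regressions, such as an answer that drops required information or starts including disallowed content, immediately after a bot is retrained, rather than after customers encounter them.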

Cyara Botium is designed to help contact centers keep their chatbots in check. As a comprehensive chatbot testing solution, Botium can run automated tests for Natural Language Processing (NLP) scores, conversation flows, security issues, and overall performance. It’s not the only component in a complete plan for responsible chatbot use, but it’s a critical one that no contact center can afford to ignore.

Learn more about how Botium’s powerful chatbot testing solutions can help you keep your chatbots in check and contact us today to set up a demo.

[1] Kevin Roose, “The Brilliance and Weirdness of ChatGPT,” The New York Times, 12/5/2022.

[2] CNBC, “Why tech insiders are so excited about ChatGPT, a chatbot that answers questions and writes essays.”

[3] Business Insider, “I asked ChatGPT to do my job and write an Insider article for me. It quickly became a shockingly persuasive article full of misinformation.”

[4] Bloomberg, “OpenAI chatbot spits out biased thoughts, despite guardrails.”