
Uniphore will continue to define conversational AI market: CTO Raghavan

Over the last decade, we have evolved our product suite from using external AI solutions to building our own algorithms and AI services, said Raghavan

Balaji Raghavan, CTO, Uniphore Technologies.
Shivani Shinde
Last Updated: Jun 26 2022 | 7:45 PM IST
Earlier this year, Chennai- and California-based Uniphore Technologies entered the unicorn club at a valuation of $2.5 billion, with a $400-million fund-raise led by New Enterprise Associates. The company specialises in speech recognition and conversational artificial intelligence (AI). Chief Technology Officer BALAJI RAGHAVAN, in an email interaction with Shivani Shinde, explains its unique tech platform, how conversational AI is maturing, and where the company is headed. Edited excerpts:

What is Uniphore’s unique proposition?

For over a decade, we have been working on understanding and enabling faster, more efficient interactions through the contact-centre use case to drive empathetic conversations. As we expanded to support businesses globally across several industries, we built up knowledge of the context of these interactions, the intents behind them, and the metrics that define a successful conversation. We also built the technology to operate not just on verbal cues but to understand emotion through facial expression, body language, and tonal changes, when any or all of those signals are available. As humans, we use more than just our words to communicate.

Conversational AI has been in the works for years, but it’s had real impact only in the last few years. What is driving this?

The research areas relevant to conversations have seen rapid innovation and improvements in accuracy with deep neural networks and large models throughout the past decade. This has been true for areas of natural language processing/understanding, automatic speech recognition, and computer vision (facial expressions, gesture recognition, and eye-tracking). At the same time, the hardware to run large models efficiently has become more readily available and cost-effective.

While this has led to the proliferation of applications of AI in this space, we are still in the early stages of being able to make AI understand any conversation with a human and potentially respond with empathy. Multimodal cognitive AI is a broader area of research in its early days, and I believe it will lead to more significant disruption in the future.

Chatbots have not served their purpose. Several studies say that users prefer speaking to a customer relations officer. How is Uniphore’s solution different?

I agree that chatbots, though often used as shorthand for conversational AI, were the earliest and weakest solution for customer service. Uniphore has spent the last decade investing in products that work across digital and voice channels. With the investment in emotional intelligence through Uniphore’s Q for Sales, we have also started building the capability to understand human emotions in video-based discussions.

When someone talks to us, we don’t just pay attention to the words said, but also to the tone of their voice and their body language, to pick up on cues such as sarcasm, anxiety and happiness. To provide the best customer service experience, Uniphore uses multiple modalities of expression to infer a customer’s true emotion.
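As a rough illustration of the kind of late fusion this describes, the sketch below combines per-modality emotion scores (text, voice tone, facial cues) into a single estimate. The emotion labels, modality weights and example scores are illustrative assumptions, not Uniphore’s actual models or parameters.

```python
# Illustrative late-fusion of emotion scores across modalities.
# Labels, weights and score sources are hypothetical assumptions,
# not Uniphore's actual pipeline.

EMOTIONS = ["neutral", "happy", "anxious", "frustrated"]

# Assumed confidence weights per modality; tone and facial cues can
# carry signal that the words alone miss (e.g. sarcasm).
MODALITY_WEIGHTS = {"text": 0.40, "voice_tone": 0.35, "facial": 0.25}

def fuse_emotion_scores(scores_by_modality):
    """Combine per-modality emotion distributions into one estimate.

    scores_by_modality maps a modality name to a dict of
    emotion -> probability. Missing modalities (e.g. no video on a
    phone call) are skipped and the weights are renormalised.
    """
    fused = {e: 0.0 for e in EMOTIONS}
    total_weight = 0.0
    for modality, scores in scores_by_modality.items():
        weight = MODALITY_WEIGHTS.get(modality, 0.0)
        if not scores or weight == 0.0:
            continue
        total_weight += weight
        for emotion in EMOTIONS:
            fused[emotion] += weight * scores.get(emotion, 0.0)
    if total_weight == 0.0:
        return {"neutral": 1.0}
    return {e: s / total_weight for e, s in fused.items()}

# Example: the words sound positive, but the tone suggests frustration.
estimate = fuse_emotion_scores({
    "text": {"happy": 0.7, "neutral": 0.3},
    "voice_tone": {"frustrated": 0.6, "anxious": 0.2, "neutral": 0.2},
})
print(max(estimate, key=estimate.get))
```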

Creating conversational AI needs a lot of back-end work, collecting data on dialects and language nuances. And this data must be of high quality to make an impact. How did Uniphore cross these hurdles?

Data is the hardest problem for conversational AI, and the challenge is the lack of coverage in data for dialects and other nuances. Over the last decade, we have evolved our product suite from using external AI solutions to building our own algorithms and AI services. This allowed us to go to market sooner and then evolve our solutions by better understanding the user’s needs. In contact centres, there are only two participants in the conversation, and in all cases, the agent is expected to help resolve the customer call with known processes and empathetic responses.

This helped us focus more on understanding the domain of our customer (for example, healthcare, telecommunications) in our natural language models for intent recognition. So, we need enough coverage in data for each domain to make our baseline models reusable across domains. We bootstrap for a new customer with generic models and then improve them over time in the customer environment, as the calls start flowing. This helps protect the privacy of the customer’s data while continuously enhancing their core metrics.
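A minimal sketch of that bootstrap-then-adapt pattern, assuming a simple streaming text classifier for intent recognition (the intents, example utterances and model choice here are hypothetical, not Uniphore’s actual stack):

```python
# Sketch: bootstrap intent recognition with a generic model, then adapt
# it incrementally on in-domain calls. Intents, data and model choice
# are illustrative assumptions, not Uniphore's actual system.

from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

INTENTS = ["billing_query", "cancel_service", "tech_support"]

# Stateless vectoriser, so new calls can be transformed without refitting.
vectorizer = HashingVectorizer(n_features=2**18, alternate_sign=False)
model = SGDClassifier(loss="log_loss")  # incremental logistic regression

# 1) Bootstrap: train on generic, cross-domain utterances.
generic_utterances = [
    "I was charged twice this month",
    "I want to close my account",
    "my connection keeps dropping",
]
generic_labels = ["billing_query", "cancel_service", "tech_support"]
model.partial_fit(vectorizer.transform(generic_utterances),
                  generic_labels, classes=INTENTS)

# 2) Adapt: as calls start flowing in the customer's own environment,
#    keep updating the model on domain-specific transcripts (telecom,
#    healthcare, etc.) without exporting the raw data.
def update_on_new_calls(transcripts, labels):
    model.partial_fit(vectorizer.transform(transcripts), labels)

def predict_intent(utterance):
    return model.predict(vectorizer.transform([utterance]))[0]
```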

What’s next in terms of how far Uniphore intends to take its conversational AI platform?

Uniphore is continuously pushing the meaning of the term “conversational AI platform” to be broader than its previous definitions. We added automation capabilities through the acquisition of Jacada last year, and then added knowledge AI capabilities to our arsenal through the acquisition of Colabo this year.

A complete conversational platform is not only about the AI algorithms but also the Lego pieces that provide the right knobs to make it applicable in several use cases. At Uniphore, the AI platform we built for contact centres was extended with video AI capabilities to create a new product -- Q for Sales -- to empower conversations in another part of the organisation.

Where do you see Uniphore in the next five years?

Uniphore will continue to define the market for conversational AI and automation. These two technologies go hand in hand for enterprise applications. We will enable our customers to seamlessly use the latest and greatest conversational AI technology to improve their core business metrics, through a platform that provides the right configuration levers. On the research front, I expect us to have delivered solutions that bridge language barriers in enterprise conversations.

Meta wants to break language barriers, and AI will be instrumental in doing so, it says. How does Uniphore see this evolving, and what has Uniphore achieved in different languages?

Mark Zuckerberg has set the vision for Meta to enable conversations in the Metaverse between two individuals who do not speak each other’s languages. Meta, Google and OpenAI have all pushed the ability of AI to create large multilingual models that can understand conversations involving different languages. Through the Covid-19 pandemic, several industries moved to hire remote workforces that are culturally and geographically diverse. At Uniphore, we focus primarily on conversations in the enterprise, so this problem is very relevant.

Along with our investments in supporting base languages like English or Hindi, we also added capabilities for derived languages like Hinglish. We continue to focus on feedback from our clients about how their customers communicate with their agents and evolve our research through strategic collaborations with premier research institutes like IIT Madras.

