AI for Customer Care
DJ: You are listening to the Art of AI podcast with your host, Jerry Cuomo.
Jerry Cuomo: Thank you, DJ. Hey folks, where do you stand on chatbots these days? I remember seeing a statistic a couple of years ago stating that only half a percent of people surveyed prefer talking to a chatbot versus a real person. The reason being that while they're decent at following the yellow brick road, meaning helping with common tasks like resetting passwords, once you get off the path, you quickly spin out of control, and that just leads to frustration. But that was then, and this is now. In fact, a recent Wall Street Journal article highlighted how Salesforce's AI platform is generating one trillion AI-powered predictions weekly and enhancing customer service with GenAI. It's helping with things like routine task management, understanding customer preferences for more personalized interactions, and creating new outputs that anticipate customers' needs, and it's getting very high scores from today's digitally savvy users. So my guest today is an expert in conversational AI. She has spent over a decade building AI agents of all types, and today she is our specialist in customer support bots powered by GenAI. Her name is Eniko Rozsa, and she's a distinguished engineer at IBM, as well as a fellow at the World Economic Forum's AI Alliance. She's a coworker and a friend, and she's joining us to share her experiences in AI-driven customer support. And with that, I'd like to welcome Eniko to the podcast. Welcome, Eniko.
Eniko Rozsa: Hi, Jerry, and thank you for this opportunity to participate in your series. I'm really excited.
Jerry Cuomo: Oh, thanks. Let's get going. We have so much to talk about. So Eniko, can you start by sharing with our listeners why you love doing what you do?
Eniko Rozsa: Sure. Let's just start with what an incredible year it has been for AI, AI in general, and last year just brought this tremendous change to the field. And I'm really fascinated by AI. I have been working in the field of AI pretty much my entire working life, starting from my degrees and continuing on. I was born in Budapest, Hungary, as you could probably hear from my accent, which is not North American, and I have a master's degree in electrical engineering. Then I moved to Canada and started another master's degree, working deeply in AI. My thesis was on domain and constraint visualization in computer-aided design, and I implemented an interactive, intelligent design application with visual aids. So my thesis was about cutting-edge AI back then, and it was just a step up from advanced analytics, where everything is very deterministic, to provide this design optimization. Then I started to work at a local R&D firm, and I worked there for about a year when we got acquired by IBM, and I have been with IBM ever since. So this is my 30th year here. I've spent the last 15 years, again, solely focusing on AI. I've been working in conversational AI since 2008. And if you remember, that was before Watson won Jeopardy! in 2011. I invented a chatbot, except we didn't call it a chatbot, but a natural language-based query and dialogue system for IBM technical support. It had this weird, long name. Yeah, this is the area I love. It's fair-
Jerry Cuomo: Oh, wonderful.
Eniko Rozsa: ... to say that I've seen a lot of technology changes in my career, but this change with generative AI is coming with unprecedented possibilities. It's affecting every industry, from customer care to transforming data centers, process optimization, everything.
Jerry Cuomo: Yeah, wonderful. Eniko, if you somehow were able to vault yourself into the future from back then to today, would you recognize AI as it is today?
Eniko Rozsa: A lot has changed. I wouldn't recognize it as it is. It's the capabilities. I think the ideas of what AI should or could be doing were always there. But with what traditional AI could do, it was a struggle to create that first chatbot. It was an absolute struggle, from natural language understanding to even just managing the dialogue. Everything was a hassle. And now you could create a chatbot so much more easily, and it can intelligently discuss information with you. It is context-aware, and it can help you with a number of things that we just dreamt of.
Jerry Cuomo: Okay. Eniko, you are known for your expertise in natural language processing and, in particular, I've had the privilege to work with you applying that to customer care. What is customer care and how and where does AI help?
Eniko Rozsa: So customer care is, let's just say, everything, all interactions, all activities relating to customers that a business or an organization provides. And customer care is incredibly important, because there was research done a couple of years ago, and basically it said that the experience a company provides is almost as important as its products or services, and customers just expect to get fast, accurate answers, to have their problems dealt with on the channel of their choice, in a manner that helps them and not-
Jerry Cuomo: I see.
Eniko Rozsa: ...the providing organization.
Jerry Cuomo: Wonderful. Now, on the AI part, perhaps you can reflect, generally, on how AI helps, and then we'll maybe get more specific about today's AI.
Eniko Rozsa: Generally, AI has helped transform contact centers in recent decades. The interactive voice responses, agent assists, chatbots, they all significantly enhance the productivity of customer service agents. And these AI-based solutions allow the agents to shift to higher-value activities. I've delivered virtual agents for banks, for telcos, in the oil and gas industry, in retail, in HR, and I'm sure there were more. But now, generative AI is radically changing how we can deliver this and how we can create these hyper-personalized customer care experiences.
Jerry Cuomo: I see.
Eniko Rozsa: Before, we could still deliver virtual assistants, but it was fairly difficult and, in some ways, it was scripted. There was AI in the intent recognition, but the way the dialogues were structured was very much predetermined.
Jerry Cuomo: Right.
Eniko Rozsa: But now, we could have context awareness, and we have this overall reduced dependence on rule-based systems.
Jerry Cuomo: Can you give us some examples, Eniko, of this in action?
Eniko Rozsa: Yeah. So, we have been delivering for a number of clients. For example, there is this telecom where they had virtual agents. We delivered the virtual agents for them and they were doing really well with them, but then they wanted an even better solution where they could look at all the incoming customer responses, whether they were inquiries or complaints or whatnot. And with the use of generative AI, they could actually analyze all of the questions that came in, versus what they were doing before, where even with traditional AI, they were sampling some of the data. They attended to that sample but couldn't cover the whole volume. But now it's entirely different, because with this solution we can do so much better and cover all of the incoming questions.
Jerry Cuomo: Very interesting.
Eniko Rozsa: Another example is a bank where, again, we delivered virtual assistants. The virtual assistants were doing quite well. But now, with the inclusion of generative AI in the solution, we could do so much better intent recognition, we could summarize answers, we could find more appropriate documents than before. So it's enhancing and building on what we were delivering before.
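To make that concrete, here is a minimal sketch of what generative-AI triage of incoming customer messages could look like: the model is asked to pick an intent and summarize the request. The llm_generate helper, the triage_message function, and the intent labels are hypothetical, standing in for whatever provider SDK and taxonomy a real solution would use.

```python
# Minimal sketch (not from the episode): classify and summarize an incoming
# customer message with a generative model behind a hypothetical llm_generate()
# callable. A real deployment would swap in the provider SDK of choice.
import json
from typing import Callable

INTENTS = ["billing_inquiry", "service_complaint", "plan_change", "other"]  # illustrative labels

def triage_message(message: str, llm_generate: Callable[[str], str]) -> dict:
    """Ask the model to pick an intent and summarize the customer's request."""
    prompt = (
        "You are a customer-care triage assistant.\n"
        f"Allowed intents: {', '.join(INTENTS)}\n"
        "Return JSON with keys 'intent' and 'summary' (one sentence).\n\n"
        f"Customer message:\n{message}"
    )
    raw = llm_generate(prompt)
    try:
        result = json.loads(raw)
    except json.JSONDecodeError:
        # Fall back gracefully if the model did not return valid JSON.
        result = {"intent": "other", "summary": raw.strip()}
    return result
```

Because every message goes through the same prompt, all incoming traffic can be analyzed rather than just a sample, which is the shift the telecom example describes.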
Jerry Cuomo: Got it, got it. Eniko, can you tell us a little bit more about, for example, are you using off-the-shelf large language models? Are you fine-tuning models? Share with us what's in play here.
Eniko Rozsa: Yeah, all of the above. You could use large language models as they are and exploit their capabilities; that is one option to deliver a solution. But you could also prompt-tune or fine-tune models, and then you could provide more nuanced, more specialized answers. There is, of course, a cost and a benefit to every one of these options. You could leverage a language model that understands the domain well enough; you get faster experimentation and more out-of-the-box skills, but it's limited to what the model understands.
Jerry Cuomo: Right.
Eniko Rozsa: You could then do, for example, parameter-efficient fine-tuning, where you add a limited number of training examples and then transfer the model to a domain and have better solutions.
Jerry Cuomo: I see.
Eniko Rozsa: You could have full fine-tuning, where you bring in a lot of training examples, you update all of the model parameters, and you get very efficient inference and stable performance. But there is a cost to every one of these options. So depending on what clients want and what's available, we might choose an appropriate solution.
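As an illustration of the parameter-efficient option, here is a minimal sketch assuming the Hugging Face transformers and peft libraries are installed; the base-model name is a placeholder, not a recommendation, and a real project would follow this with a training loop over its own domain examples.

```python
# Minimal sketch (assumption: Hugging Face transformers + peft are available).
# LoRA adapters wrap a base model so only a small fraction of parameters is
# trained on a limited set of domain examples.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

BASE_MODEL = "your-base-model"  # placeholder model name

base_model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Low-rank adapters: small trainable matrices injected into the attention layers.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,             # adapter rank
    lora_alpha=16,   # scaling factor
    lora_dropout=0.05,
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically a small fraction of the base model
# ...then train on the domain examples, e.g. with transformers.Trainer...
```

The trade-off Eniko describes shows up directly here: only the adapter weights are updated, which keeps training cheap, while full fine-tuning would update every parameter at much higher cost.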
Jerry Cuomo: I would imagine that an off-the-shelf large language model, as good and as large as it might be, may not understand the products that the company is offering, and it may not understand the organizational structure. With some additional fine-tuning, do you suspect the performance and the outcome would be better?
Eniko Rozsa: It entirely depends on what the solution is, right? There are certain questions you want to ask. Who's going to do the training? Who's going to do the fine- tuning? Who will own the prompts that are sent to the model? Who's going to maintain this? Where does it go? So all of these you have to consider when you create a solution.
Jerry Cuomo: I see.
Eniko Rozsa: And if you want to create some text analytics or something similar, like classification, you could do a decent job with off-the-shelf available models. If you want to, as you mentioned, find answers and provide information on very specific products, very specific solutions, or in a very niche area, then you have to fine-tune-
Jerry Cuomo: I see. Yes.
Eniko Rozsa: ... By itself, it's not going to understand those.
Jerry Cuomo: And Eniko, can you share some of the gotchas or areas that companies should pay special attention to? For example, sensitive customer data sets and things like that.
Eniko Rozsa: I think right now there is a pressure to accelerate the use of generative AI everywhere. But at the same time, we really need to make sure that whatever we deliver, we are building it responsibly.
Jerry Cuomo: Yep.
Eniko Rozsa: Clients have concerns about data lineage and provenance. They have concerns about security, and they have concerns about data privacy. So we need to build everything on trust and transparency.
Jerry Cuomo: Makes sense.
Eniko Rozsa: We need to ensure that the language models we are using have their documentation, they have their data cards, and we understand where the data is coming from, what rights there are to that data, and what went into the algorithms, so that companies can be sure the technology is transparent. And then, of course, when we create an application, for example with a chatbot, anybody could say or type anything to the chatbot. You cannot prevent the user from providing some very personal information. But what you can do as a responsible application developer is make sure that you are filtering this data out and saving the user from actually sharing their personal data.
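A minimal sketch of the kind of filtering Eniko describes, using simple regular expressions to redact obvious personal data before a message is logged or sent to a model. The patterns and function names are illustrative only; production systems would typically pair rules like these with a dedicated PII-detection service.

```python
# Minimal sketch, not a production-grade PII filter: redact obvious personal
# data (card numbers, phone numbers, email addresses) from chat input before
# it reaches logs or a language model.
import re

PII_PATTERNS = {
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),      # card-like digit runs first
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),      # loose phone-number match
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
}

def redact_pii(text: str) -> str:
    """Replace matched personal data with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(redact_pii("My card is 4111 1111 1111 1111 and my email is jane@example.com"))
# -> "My card is [REDACTED CARD] and my email is [REDACTED EMAIL]"
```

The card pattern runs before the phone pattern so long digit runs are labeled as cards rather than phone numbers; ordering choices like this are part of why real deployments lean on dedicated detectors.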
Jerry Cuomo: You started the podcast saying what an amazing year we've come off of, and I have to agree with you on that. I think generative AI has now made its mark. Can you share a wow moment?
Eniko Rozsa: I think the way generative AI could form sentences, that was the wow moment. Before, when we wanted a chatbot to sound better, what we did is, instead of having one answer, we had maybe five different answers, and then we randomly surfaced one of them to vary how the answer was formed.
Jerry Cuomo: Right.
Eniko Rozsa: It was still the same answer, but we actually had to put that work into it, coming back and randomly surfacing the answer in a different way. And when I first interacted with ChatGPT and it came back with beautifully formed sentences, I was like, "Wow, this is where the chatbot should be. This is where it can go."
Jerry Cuomo: And in Hungarian?
Eniko Rozsa: In Hungarian, too. This is actually very interesting, because I had a chance, or the opportunity, last year to speak in Hungary. I was invited to Hungary to discuss the effects of generative AI on our work and the workforce. There was a conference held, and I found out that they are working on a Hungarian large language model. And by the time this Hungarian large language model is developed and fine-tuned, specifically trained on Hungarian content, the commonly available large language models already converse very well, and it is a tough language to converse in. So three years ago it sounded like a good idea, and the expectation was that perhaps they would only work well in English, but now we can say they do a really good job in Hungarian as well.
Jerry Cuomo: Wow, wow. That is a wow.
Eniko Rozsa: Good, good.
Jerry Cuomo: All right, Eniko, I want to thank you for joining us on the podcast. Thank you very much.
Eniko Rozsa: Thank you, Jerry.
Jerry Cuomo: Okay, this podcast is a wrap, and I've included some reference material in this episode's description section. And once again, I'd like to thank Eniko, and I'd also like to thank you all for listening in. Until our next episode, this is Jerry Cuomo, IBM fellow and VP of technology. See you soon.
DESCRIPTION
In this episode of the Art of AI podcast, host Jerry Cuomo engages with Eniko Rozsa, a Distinguished Engineer at IBM and a Fellow at the World Economic Forum's AI Alliance, to discuss the transformative impact of generative AI in customer support. Eniko brings over a decade of experience in building AI agents and shares her journey from developing her first chatbot to leveraging today's sophisticated generative AI models for enhancing customer care. As they examine the shift from traditional AI approaches to the nuanced capabilities of generative AI, listeners will gain insights into how AI is revolutionizing customer interactions, providing hyper-personalized experiences, and improving the efficiency of contact centers. Tune in to discover the challenges, breakthroughs, and future possibilities of AI in customer support, and hear Eniko's unique perspective on the evolving landscape of AI technologies.
Key Takeaways:
- Generative AI significantly improves customer support by offering more personalized and contextually aware interactions, as detailed by Eniko Rozsa.
- Responsible development of AI is crucial, focusing on data privacy, security, and transparency to build trust in AI-enhanced customer services.
- The evolution from simple chatbots to advanced AI systems demonstrates a substantial leap in AI's ability to handle complex customer service tasks effectively.
References:
- "Chatbots vs. Humans in Customer Service," Weply Blog, June 24, 2021.
- "Revolutionizing Customer Service with Generative AI," Wall Street Journal, January 25, 2024, featuring Salesforce AI CEO Clara Shin.
- "Impact of Generative AI on Customer Service" by Rosane Giovis and Eniko Rozsa, CMSWire, November 14, 2023.
* Cover art was created with the assistance of DALL·E 2 by OpenAI. ** Music for the podcast created by Mind The Gap Band - Cox, Cuomo, Haberkorn, Martin, Mosakowski, and Rodriguez