Outside In: A Financial Industry Perspective
Speaker 1: You're listening to the Art of AI podcast, with your host, Jerry Cuomo.
Jerry Cuomo: Hey everyone. Welcome to the Art of AI for Business. Folks, today's episode is the first of a series of outside-in perspectives on AI. For this, I've invited a colleague of mine, Riccardo Forlenza, to be the guest host of the Art of AI. Riccardo is our global managing director for financial services and comes uniquely qualified to look at technology trends through the lens of industry. Today, he is hosting his friend and colleague from Citigroup, Murli Buluswar, whose roles have included chief revenue, growth, and customer officer. As you're about to hear, Murli's distinguished track record of novel business strategy gives him an excellent perspective on how to drive innovation and champion value creation with new technology, technology like AI. Without further ado, folks, here's Riccardo and Murli, an outside-in perspective on AI from an industry lens.
Riccardo Forlenza: Murli, it's terrific to have you.
Murli Buluswar: Good morning, Riccardo. It's such a delight to be having this conversation with you. Thank you to Jerry and you for inviting me to have a conversation on this topic.
Riccardo Forlenza: Gen AI has taken the world by storm. I think it was just about a year ago that we first became familiar with this term, and now it's pervasive in nearly every conversation we have. Give me a sense for how you have seen the evolution of gen AI. Why has it taken the world by storm?
Murli Buluswar: The leap that we've had as humankind from an agrarian society to an industrial society was fairly significant. Then there was a leap from an industrial to, call it, an information-age society. Now, you've got this leap from an information age to an autonomous, artificial-intelligence-driven society. That leap, in my view, is bigger and more consequential perhaps than what we've seen in the past. In each of those eras, we've been able to train people to make that adaptation. This one is probably a whole new paradigm. What's actually guiding it, in my mind, is really three things: it's data, it's compute, and it's algorithms. Those are the three underpinning drivers. The big change, in my view, is that the compute environment continues to improve, and that allows us to think differently about how we develop sophisticated algorithms and what decisions and outcomes those algorithms could ultimately drive.
Riccardo Forlenza: Murli, there are broad policy implications and a body of legislation that's still in its formative stages. Europe has come out with a framework that's probably market-leading at this stage. The US has just joined this path, if you will, with the executive order that President Biden signed just a few days ago. What do you think is the real role that policymaking needs to play in this space to ensure that we optimize value and minimize the impact on those who might be displaced by the advent of gen AI?
Murli Buluswar: That's the million-dollar question, isn't it? The first thing that I'd say, Riccardo, from my perspective is that the power of generative AI is probably being exaggerated/overestimated in the here and now. Over time, it will probably be underestimated a little bit. Like any new innovation that is meaningful, people tend to overestimate the impact in the near term and underestimate the impact in the longer term. I believe that we're squarely in that hype cycle, where we think everything's going to be thrown up in the air and reframed, that our whole lives as we know them will be disrupted, and that machines are going to rule over human beings. I'm not so sure that I'm ready to make that leap at the moment. Maybe in a few years I might actually become a little bit more sophisticated in my understanding of what it means to be sentient, perhaps. At the moment, I think the big question is, hey, how do we recognize this chasm between the precipice that we're on today and where the future could be? How do we think, structurally, about how some of these capabilities will look different tomorrow? I think that's a profound question. How many jobs it will replace is to be determined, and how many jobs it might create is also to be determined. I think these are all unanswered questions. What we do know, or what I think we do know, is that the future is not a linear extension of the past. Regulations will be iterative. They will never be perfect. I think that there are very fundamental public policy questions that you're alluding to around what is the role of a human being versus a machine in some of these areas. Do we need fewer people doing higher-order work so that machines can do more of the slightly "mechanical" processing that we've done historically? That would mean that the nature of the intersection, or the harmony, between human intelligence and machine intelligence has got to look different.
Let's remember that it's generative AI, it's not creative AI. The difference being that generative AI is essentially pulling together content and data from data that has been fed into it. Where, to me, the future is headed is that human beings will have to have the superpower to be able to ask the right questions, to shape the context, to draw inferences, and to be able to look around the bend and understand what could be and what should be. The role of the human being is no less critical tomorrow than it has been historically. In fact, I think even more so.
Riccardo Forlenza: You touched on two fascinating aspects of this. I think many business leaders around the world are now focused on one dimension of this, which is: isn't this a great cost take-out play? Could we not do with far fewer resources if we leverage this technology? Cost is, I think, central to a lot of the pursuits that I now see. There is also this other dimension that you were talking about, which is the justness of what we do and how we go about it. How do we ensure that the machine, which, praise the Lord, isn't creative but simply generative, produces outcomes that are aligned with our intent rather than diverging from it and possibly cementing answers that aren't aligned with our real strategic intent? Can we focus on both dimensions? Maybe we could spend a minute or two on cost and a minute or two on the justness of the answers and how we best arrive at them.
Murli Buluswar: The first wave of the crawl, walk, run principle of adapting to a world of AI, for me, is to some extent floundering our way into things, i.e., we will make mistakes. We are breaking new ground. Hopefully, we will learn from each other's mistakes and make different kinds of mistakes in the future. Mistakes we will make, absolutely, in my mind. Therefore, for me, the first wave of opportunity is having a structured framework through which we identify problems or opportunities for innovation within institutions, regardless of which industry they're in. Maybe it is something along the lines of: what do people do manually in terms of processing information, processing data, synthesizing content, that could be done cheaper, faster, better by machines? Where is that manual intervention causing errors that create either customer pain or other forms of regulatory exposure, regardless of which industry you're in? Has a process been innovated in the last couple of decades? How has it been recast in a world that, up until now, has achieved leaps-and-bounds innovation through the lens of machine learning or more traditional AI, or whatever moniker we choose to use? Then, is there a sizable cost associated with it? I think we've got a lot of learning to do, and the more of that learning firms can do in a way that is internally focused as opposed to externally focused, the better. If they can take a measured approach around having a human in the loop, monitoring how these models are working, and having clear metrics of success, they give themselves the momentum to continue to expand their understanding of the art of the possible. Yes, I would start with cost-savings opportunities. I doubt that they're as big as they're being touted at the moment, but they might be meaningful. More importantly, it's also recognizing that you're re-architecting the role of machines versus humans.
Riccardo Forlenza: I work with a client in audit. I've had the good fortune of an up-close view of how transformative gen AI could be to a function like that, the risk management function. Because candidly, you go from sampling to doing full-set reviews. Not only do you cut costs, but you're also a lot broader in the impact you have on the firm and in your ability to offer an accurate perspective on the state of the business. Critically important. I've also had a number of conversations about the potential intricacies of cementing gen AI algorithms in how, for instance, we underwrite businesses or mortgages. There are a lot of biases that were never intended to be part of our decision making. If they get cemented in a gen AI algorithm, they would probably cast a long shadow and have an impact on society that we never intended them to have. I would say this for all industries, but certainly for banking, which is probably one of the ones that's going to be most affected, at least in the very near term. What are your views on the banking sector? Where do you think gen AI will have the greatest impact? How would you recommend a fellow senior banking executive start thinking through this and operationalizing some aspects of gen AI?
Murli Buluswar: Maybe one particular thought first, based on your commentary, is to recognize that at the end of the day, the foundation for anything gen AI is data. Data that reflects the past. The past is far from perfect. Not that the present or future will be perfect. The beauty and the risk of these algorithms are twofold. The beauty is that they can actually create visibility and transparency into the biases of the past, the human biases that may have otherwise been a little bit hidden. They might have been obfuscated behind data that doesn't necessarily make them as obvious. The risk is that these algorithms, if not monitored and understood, can not only perpetuate the biases of the past but accelerate them. The power of AI is that it can as much be used as a mechanism to understand and rectify the mistakes of the past as to create more fairness, speed, and clarity in decision-making, if used properly. It's easy to get enamored with, hey, I'm going to save 20, 30, 40, you pick a number, million dollars of expense as a consequence of this capability. You probably will. However, in that desire to achieve that outcome, one cannot skirt the true complexity, the gnarliness, of having to go through a learning process of where things could go wrong, how they could go wrong, and how you maintain a tight understanding of the implications and course correct on a consistent basis. That monitoring is critical.
Riccardo Forlenza: I think one aspect where many gen AI participants fall short, even though it's now becoming a fairly well-traveled path, is, in my view, inadequate focus on governance, which is central to having a positive, long-term, sustainable impact on our own businesses but also, as we were saying before, on society. I'm hoping that a combination of private-sector and policymaking input will provide a framework that we can quickly coalesce around so that it can provide impetus to all.
Murli Buluswar: Indeed. For me, there is, in my view, an absolutely steep learning curve on that front. Not that the capability isn't there; it's very powerful. And not that it won't be transformative; in my view, it absolutely will be. Rather, it's to say: please don't be oblivious to the realities, the technical realities, of bridging the gap between where we are and the promised land, and really avoid the risk of oversimplifying that pathway. Because at the end of the day, if we're trying to get algorithms to mimic some aspect of what humans do, that leap is not going to happen just like that. It's going to take some work.
Riccardo Forlenza: Murli, this is a question that's been sitting on my mind, and it's probably on the minds of many of my colleagues. What is really going on with the boards of our largest financial institutions? How are they thinking through this? I will be controversial and tell you that I'm always a little puzzled by the limited technology fluency of most banking boards around the world. There is a real opportunity to up-skill our boards on a dimension that's central, in my view, not only to survival but also to thriving in financial services. What do you think is the real dialogue that's going on, and to what extent are boards equipped to make broad decisions that will cast a long shadow?
Murli Buluswar: I'm going to share a view from my conversations with people whose roles transcend multiple industries, obviously including financial services, Riccardo. Number one, I'm happy the phrase AI is starting to take root in conversations in a meaningful way. Perhaps nowhere close to the full knowledge that people need to have, but at least, at a minimum, with some degree of fascination/curiosity/fear/optimism about what is feasible. That's probably one dimension of the emotion that many senior executives across industries are facing, including board members. The other bit is this sense of apprehension: well, I don't really understand this. I don't understand how it works. If I don't understand how it works, how can I trust it? Then the third dimension perhaps is: if it's a very powerful capability, how do we have the right operating discipline around managing it in a methodical way so that we don't go off the reservation and create unintended problems that rear their ugly heads in many ways? Particularly in regulated institutions, and in my view, in any industry where you're consumer-facing or perhaps even B2B-facing. That's probably the range of emotions that people have. Then, how do you take that excitement, that curiosity, and that apprehension and make them very real? We end up starting off with a couple of use cases. Now, I personally am a little bit wary of the phrase use case. It's the mindset of a tool in search of a problem, and then the focus becomes the tool. My view is, let's actually pivot that a little bit to think about what re-architecting critical aspects of our operations over 12, 18, 24 months would look like. Let's back our way into how we make that two-year view a reality, recognizing that it'll be a rolling two years. I.e., you'll continue to evolve, you'll add, and your timeline continues to extend. Then you're no longer saying, hey, I need a couple of gen AI use cases. What should I do?
Rather, you're actually connecting the art of the possible for how you want to re-architect decision-making in your firm with a recognition of where the world is heading and what could be. Then you're backing into what tools you need and how to make sense of that. How do you have an operating discipline around measurement, and what is your governance, and so on and so forth?
Riccardo Forlenza: It's a delicate balancing act to an extent. I guess we're all anxious to prove the hypothesis that gen AI can in fact be fit for our purposes, for today's purposes. By the same token, I think what you're pointing at is that we risk falling into the trap of, rather than solving the straight-through processing issue, ending up with a computer-aided manual process. We end up having little spot solutions that don't really think more holistically about the business issue we were looking to address and don't fundamentally change its architecture. Rather, they go in and solve very narrow, interesting, but not transformative small issues that don't necessarily change the firm.
Murli Buluswar: Indeed.
Riccardo Forlenza: Murli, I'll ask you one final question. It's now tomorrow and gen AI has become part of the fabric of what we do. What does that work look like and when do we get there?
Murli Buluswar: I was going to ask you, what timeframe is tomorrow? I'm going to pick a number. I'm going to say a 10-to-12-year timeframe for how I could imagine the world going. Number one is, we might not be working as many long hours in several professions as we have historically. Number two is, many large country governments may have evolved to recognize that people might be working fewer hours, or you might have fewer working people, and you need public policy perspectives on how to manage that. Number three is, no profession will cease to exist, but the gap between the high performers and everybody else in each of those professions will widen. I.e., the world will belong to people who can think about how they're re-architecting their careers with technology in ways that they didn't have to do historically, because they're going to be able to do things cheaper, faster, better. Whether that is in healthcare, in banking, in insurance, or pick any industry that you would like. How you understand the role of large language models versus medium language models versus small language models, the context in which you apply them, and your ability to monitor, understand, and create near real-time course corrections, in order to mitigate the risk of unintended consequences and to create more transparency on fairness, so that you are not only making things cheaper, faster, better in many aspects, but you're also correcting for the bias and opacity of human behavior over the last several decades, on which these models are built.
Riccardo Forlenza: Murli, you've said some of the words that are near and dear to my heart: opacity, transparency, governance, biases, drift. Those are all critical aspects of what I'm going to term a bit of a fiduciary responsibility for those of us who are in this space and can affect how enterprises and governments ultimately fare in this broad new future that we are trying to chart for ourselves. Critically important dimensions. I genuinely appreciate the time you spent with me. I've also had the good fortune of working on some of these issues alongside you, which is just a thrill. Much appreciate your time and patience, and thanks again for making time for us.
Murli Buluswar: Riccardo, thank you. This was absolutely delightful. I look forward to being on this journey with you, my friend.
Jerry Cuomo: Thank you, Riccardo and Murli. Well, folks, here you go again, the themes repeat. Trustworthy AI, responsible AI, the importance of governance and the advice to jump right in with both feet, but measure along the way. That's the spirit and the art of AI for business. This episode is a wrap. I've included relevant links in the description section of this podcast episode. Until our next episode, this is Jerry Cuomo, IBM fellow and VP of Technology. See you soon.
DESCRIPTION
Join Jerry Cuomo and guest host Riccardo Forlenza in a riveting discussion with Murli Buluswar from Citigroup on 'Outside In: A Financial Industry Perspective'. This episode offers an in-depth exploration of generative AI's role in transforming industries, particularly financial services. Listen as they discuss the historical evolution of AI, its current and future impact on policy and business strategy, and the importance of balancing technological advances with ethical considerations. Murli provides unique insights into how AI is reshaping decision-making processes, emphasizing the need for governance, transparency, and human-centric approaches. A must-listen for professionals navigating the ever-evolving landscape of AI in business.
Key Takeaways:
[00:00:17 - 00:01:36] Intro to "Outside in" series;
Today featuring Riccardo and Murli.
[00:01:49 - 00:03:24] Gen AI Evolution
Discussion on the evolution and impact of generative AI, emphasizing its societal significance.
[00:03:25 - 00:06:57] AI Policy and Implications
Exploration of policy challenges and the need for effective governance in the era of AI.
[00:07:52 - 00:10:08] AI Ethics and Efficiency
Balancing cost savings with ethical considerations in AI deployment and innovation.
[00:19:21 - 00:22:05] AI's Future Impact
Speculation on the future role of AI in work and societal changes, with a focus on adapting to and managing biases.
Explore more about AI Governance
* Cover art was created with the assistance of DALL·E 2 by OpenAI. ** Music for the podcast created by Mind The Gap Band - Cox, Cuomo, Haberkorn, Martin, Mosakowski, and Rodriguez