Data and Digital Ethics in ESG

This is a podcast episode titled, Data and Digital Ethics in ESG. The summary for this episode is: Charles Radclyffe is our guest on this episode of the Georgian Impact Podcast. He is a partner at EthicsGrade who specializes in evaluating companies' ESG credentials. In other words, he hunts for watermelons: companies that look green on the outside but are anything but in the middle.

You'll hear about digital ethics and how it fits with ESG, the need to engage with stakeholders and find out what matters to them, and how the AI Act will affect those working in high-risk areas.
What EthicsGrade is
00:17 MIN
Digital Ethics and how it fits with ESG
00:25 MIN
The need to engage with stakeholders and find out what matters to them
00:30 MIN
How the AI Act will affect those working in high risk areas
00:56 MIN
The different types of customers that EthicsGrade works with
01:22 MIN

Jon Prial: I love thinking about market maps and sub-markets and adjacent markets. And here's a dirty little secret: I even like to think about how to label axes, so now you know my secret. In addition to being a podcast host, I am a total market research nerd. And whether it's about a podcast or learning about a market, what I like doing is digging into something new and then how that leads into something even newer. So what's new? What's hot? ESG is hot: environmental, social, and governance. Maybe I'll make a Venn diagram, but today we're going to be talking about ESG. And then we're going to talk about a very new element of ESG that looks at technology and its ties to bias, fairness and more. And let me tell you, this is a big deal. So I'm very excited to be talking with Charles Radclyffe, AI ethics, technology governance and ESG specialist at a very cool startup called EthicsGrade. So let's start our journey through a market map. We'll walk our way through E, S and G, and we're going to dig into AI and ethics. I'm Jon Prial and welcome to the Georgian Impact Podcast. Charles, welcome.

Charles Radclyffe: Hey, Jon. Good to be here. Thank you very much.

Jon Prial: And just tell me a bit about EthicsGrade, please.

Charles Radclyffe: Yeah. So EthicsGrade, we're an ESG ratings business. And what that means is we evaluate companies' environmental, social and governance credentials. Or a better way, I like to call it, we hunt for watermelons: companies which look green on the outside, but are anything but in the middle.

Jon Prial: Talk to me about ESG and why this is so important and who cares.

Charles Radclyffe: So I think that the most exciting thing about what I do is that for me, it's putting threads from all aspects of my life into one thing. I guess I've always had a difficult relationship with capitalism because I've always seen the downsides. I've always seen the risks. I've always seen the harms and the wider impact that perhaps hasn't been necessarily in focus the whole time, but like every entrepreneur, I've been playing the game. And so for me, I guess, stumbling across ESG when I was at Fidelity, it struck me that not only is this a mega trend for our time, but it's also something which has a huge requirement for disruption in its own right. I mean, the way that ESG works today is fairly broken. So I think it's a double whammy for me. It's something which has high potential impact and high opportunity. And for me, those two things are great coming together.

Jon Prial: I see that companies always put out their annual reports. And now I'm seeing companies putting out ESG reports. Are they doing it because they care, or are they doing it because they have to?

Charles Radclyffe: I would say that the last century, the 20th century, was really dominated by shareholder value. That was what management teams were put on this planet to do. They were there to maximize the shareholder returns. And in the 21st century, I think something has shifted. And the shift is really about stakeholder value, and stakeholders are more than just shareholders. It's employees, it's customers, it's the communities that organizations are based in as well. And the starting point for this is really CSR, corporate social responsibility. And I think everyone will be familiar with this, that companies have done charity runs and good initiatives for the local communities. But these tend to be very moment-in-time activities, which are unstrategic in that they are ground up. They're from people within organizations. And you might occasionally have a head of CSR that's coordinated these things, but there's a major difference between essentially CSR and ESG. ESG is really about how the impact of an organization is interwoven into its operations. What's the strategic lever that the board will be pulling? And then how does that trickle down and flow into all aspects of an organization's work? And then of course, what are the reporting and disclosures that come up within the organization so that they can run those things effectively? And then of course, external reporting as well. And so, yeah, the ESG report, I think, is as important as the annual corporate filings.

Jon Prial: So companies will care and these reports are really important. Besides the company, there are funds and investors that seem to care. And I don't know if it's true or not, but I've also heard that if you take a plain mid-cap investment fund and an ESG investment fund, they're beginning to cross, with the ESGs outperforming now. Is there some truth to that?

Charles Radclyffe: There's [inaudible] schools of thought on this. I mean, there's definitely a school of thought to say that ESG is an important investment topic, because it delivers alpha and it's good for your pockets. I'm not in that group of people. And I guess it might be a strange thing to say, given the fact I run an ESG company. Think of it like a three-legged stool. You've got your shareholder return as one of those legs. You've got to deliver financial performance. The second leg is really around risk management. There are things which you can do, which might be high risk, high return, and there are things which you might do, which are low risk, but low return. And I guess you've got to have your risk levers as an organization tightly controlled, and that actually has been the discipline of the 20th century. And I think the third leg of that stool is ESG. What is the impact we want to have? Do we care about the impacts on communities? Do we care about equality? Do we care about diversity? Do we care about the environment? And I think what's changed in the last 20 years is that people do care much more about those things. And then people are starting to measure, monitor and pull those levers. So I guess that's the way I would think about it: it's not a replacement, it's a balancing act. And I think the great companies of this century are the ones that get that balance right.

Jon Prial: Now, you mentioned climate. What's within ESG? How broad a space is it?

Charles Radclyffe: For starters, it's a really bad acronym because it stands for environmental, social and governance, which are three related but not that related topics. And there are people, particularly in the German and Swiss regions, that talk about ESG and D, the D being digital responsibilities and digital ethics. I think three, that's enough personally, so I think ESG is plenty of a mouthful. But for me, it's not about the letters. It's about all of an organization's non-financial impact. It's really about alignment of an organization to values. And so, regardless of whether you cover plastic pollution or animal rights or human rights or diversity or AI ethics, it doesn't really matter which part of this ESG umbrella that you cover. The concern is: what is the impact that an organization is having outside of its financial results? How is that organization aligning its activities to values? From an investor perspective, how do investors align their capital to values? And I think those are the questions.

Jon Prial: Interesting. I hadn't really thought about it that way, because I always see a list. I see labor conditions or gender pay gaps or climate. They all matter, but they matter differently to different companies. So we're going to talk about watermelons shortly. One of the watermelons would be Amazon declaring that their drivers don't have to urinate in bottles, and of course having to backtrack from that. So that's a labor condition issue, which has some impact, which might be different than... they may be driving electric cars and still can't stop to go to the bathroom. You mentioned the three legs of the stool and ESG is the third leg, but within that leg, there's a whole bunch of different elements. Companies have to consciously decide. And they do, right, have to consciously decide what they're going to focus on to have the most impact. Is that fair to say?

Charles Radclyffe: They do. And I guess the question is how do they answer that? And I think there's a big difference there between big companies and small organizations. And what I've found, we've been focused mostly on large asset managers and hedge funds. But one of the things that's been really interesting recently is I've had a lot of small-cap CEOs or even startup scale-ups getting in touch and saying they've received pressure from their investors to be more ESG, in quotes, whatever that means. Usually, it's the second line that comes out of that. One CEO said to me, I've been given the list of the UN SDGs, the sustainable development goals, and I've got to try and figure out how my business maps against that. And he was pulling his hair out because he didn't see the relevance at all. And I think this is the key problem, I guess, with ESG: looking at it as one size fits all. What really matters, if you look at the things that have gone wrong in terms of... A great example recently is Basecamp. Basecamp is a Slack alternative, a project management alternative. I think it's not a particularly big company, it's maybe 50 to 100 people or so. And I think a third of the workforce walked out in a very short period of time. Why? Because essentially the company hasn't managed ESG particularly well. And that poor management has really come down to a simple thing. It's about stakeholder engagement. And I think this is the key thing, whether it's Google and [inaudible] and people who've been very critical of Google, or whether it's Basecamp, which is the other end of the spectrum in terms of size of organization. The thing that unites both those organizations is stakeholder engagement.
And so whether you're a CEO of a five-person company, or a 50,000-person company, the same task is really beholden to you, which is you need to map out your stakeholders, and engage with them, talk to them, find out what matters to them. And if it is worker rights, if it is worker conditions, then make sure you've got a strategy for that. If it is animal rights or child protection or human rights or the environment, then make sure you understand those things. And as you move through your stakeholder pool, as you move from employees to customers to prospects to the market to investors, and your order of those things might be different from mine, I think that's okay. But you still map out what's important to each of those groups. And where you find strong commonalities, that's where you have the biggest risks. And so the key to this all, I think, is stakeholder engagement. The SDGs are great if you're a 100,000-person organization or if you're a government. But if you're not either of those two things, I think the SDGs are maybe somewhat helpful to frame your thinking, but really it comes down to something much more basic. Talk to your stakeholders, find out what they care about, find out where they are unhappy about the impact that your organization might have on some of those values, some of those things that they hold dear. And then work out a strategy of engagement with those people to then identify the metrics and manage those metrics. And that's how you manage ESG.

Jon Prial: It's amazingly broad. I mean, you touched on Tim [inaudible]. We actually spoke to her a couple of years ago. A fascinating story about Basecamp. It was so interesting because: engage, engage, tell us, talk about things. Oh, well, not anymore, because now we don't like what you're talking about. That's hard, to open up Pandora's box and then say, we decided to close the box. And obviously a third of the people walked out. So since it's so broad, is there a consensus on what a good set of reporting metrics should look like? Is it just what every company needs, like you talk about, engaging with stakeholders? Or is there something that could be aggregated at a company level, for example, for measuring?

Charles Radclyffe: Yeah. And I think this is the challenge and this is what everyone in the ESG space is working on trying to solve: unifying that reporting structure. I mean, I think the truth is that organizations should be reporting meaningful things. And I think right now, the best that organizations can do is be upfront, communicate, and don't just talk about the marketing, show the detail. So in my niche, which is AI ethics and AI governance, what I see a lot of is organizations publishing their value statements. And there's been loads of examples of this. Samsung at the beginning of the year published theirs. It was a beautiful, highly produced, glossy, couple-of-page brochure. HSBC have most recently done theirs. And again, these are marketing statements. We believe in quality, justice, balance, et cetera. I mean, what's happened is they got a bunch of senior people together in a room, they've used lots of Post-it notes, agreed on the six or seven least offensive words, and then they got someone to produce it into a marketing document. That's not ESG, that's marketing. And I'm not saying it's of no value, but it is of very, very limited value in terms of understanding: does this organization carry risk or not? And so what you need to do instead is you need to look beyond those principles and you need to find protocols. You need to find substantive governance, substantive activities that you can communicate to the market. And so in our research, in our little tiny niche of ESG, what we focused on is looking for the evidence of those things. And the companies that we rate well are the ones that are able to surface that out the best. And the companies that we don't rate well are the ones which are a little bit more opaque on these questions. So I think really that's the first step an organization needs to take.
There are plenty of people like me, and plenty of others in this space, who run surveys, who try to tease out information where it is a little bit opaque. And I think that's seen as a burden by a lot of people. And I think organizations need to find effective ways of responding to that burden, because increasingly the penalty for not engaging with those surveys is a higher cost of capital. And potentially being rated, being picked up, and being seen as a watermelon.

Jon Prial: I had never heard that term before. I love it. I want you to explain it. I would've said, bleh, greenwashing, which is probably a very small subset. So please tell our audience about watermelons.

Charles Radclyffe: Yeah, well, I can't take the full credit for it. Although I guess I could have taken credit for connecting it into the ESG world. So I was a very bad project manager back in the day. I'd worked for the Royal Bank of Canada, and I worked for a few other organizations. Entrepreneurs are not necessarily known for their project management skills. You've got to be all things to all people, and it's a very different discipline. And I remember somebody had a poster up on the side of the wall, saying, no watermelons here. And I looked at it and I didn't really understand the context. And I thought about it a little bit more. And then I suddenly realized it was a project management reference: projects which look green on the outside, but are anything but once you peer behind the executive summary. And being in project management in my corporate career, being very familiar with watermelons, it was a very natural leap for me to make when I got into the ESG world, because the ESG world is full of watermelons. And I think that's exactly the answer. In the UK, we had Boohoo, a fashion brand which on the face of it looked like a very environmentally friendly organization, but again, workers' rights were abysmal. Amazon with drivers peeing in bottles because they're not given work breaks, but the company making a big song and dance about its environmental credentials. There's lots of examples of watermelons. Essentially, it's an organization that looks green on the surface; scratch that surface and you get into that yellow layer. And then when you start cutting it open, you find that actually it's quite deeply red in the middle. Yeah, that is a watermelon. They're delicious, but [crosstalk] toxic.

Jon Prial: It's interesting because we often think about horror stories, but most horror stories people would naturally talk about at a dinner or whatever would be security breaches. And an ESG horror story does have financial implications, does have negative... They always say, if you like something, you tell one person. If you don't like your company, you tell 10 people. This is one of these things that hundreds and thousands of people will find out something bad about the company, it cannot be good for the bottom line.

Charles Radclyffe: I think there is a concern from people about social media and blogs, and living in a culture right now where people feel a little bit more able to communicate the negative about their day-to-day work or the environment they work in. I don't think that's the kind of trend that we should be worrying about. I think the trend is that organizations haven't developed that muscle whereby they engage with stakeholders. It's really that. And some organizations are super large. I mean, let's take Facebook with nearly 3 billion users. And if you listen to Facebook, they will say things like, well, we've got three billion users. I mean, we can't go and talk to everyone, can we? And I don't think that gets you off the hook. I think in fact, that makes it more acute, more important that you have an effective strategy. And if Facebook can't find a way of scaling their stakeholder engagement, then it's certainly too big an organization. And I think that's really the challenge and actually the tension point. So quite aside from antitrust, quite aside from all of the other issues facing the tech industry right now, I think this is a question which responsible leaders are going to have to grapple with. How do we build effective strategies to engage with our stakeholders? And those that do will deal with the challenges that are brought to them in an effective way. And those that don't will have people acting out and speaking out, and rightly so.

Jon Prial: So Charles, is it fair to say that the ESG market is fully established? The space is odd, there's different types of data. Does each company do the same thing? How different is this from other types of analytical companies? Can you let me know about that?

Charles Radclyffe: I think there's two aspects to what we do, which is a different way of thinking. So essentially, as an investor, when you're buying ESG data, because that's what you would be doing at Fidelity or BlackRock, or as a consumer, if you go to Yahoo Finance, you'll see Sustainalytics' ESG scores for many of the companies that are out there. What you're essentially doing is you're buying a pre-baked cake. And that pre-baked cake contains the ingredients and the recipe and particular flavoring that the person who put it together wanted you to consume. But what you're doing as a fund manager is you're essentially buying multiple cakes. You're buying a cake from one of many providers. And then you're trying to deconstruct that cake back into its core ingredients, and then bake your own particular flavor of cake. And as you can imagine, with cakes as with ESG scores, it's impossible, and that's really the problem. And there's nothing wrong with... the classic line that people in the ESG space have used to [inaudible]. The problem is, if you buy credit rating data from different data providers, there's a very, very high correlation. And if you think about it, that's what you want. Credit scores should be objective statements of fact; the answer should be in the numbers. But if you buy ESG ratings, then there's a very weak correlation of 0.61, compared to a 0.99 correlation. So it's pretty poor. So, you've got two ESG providers giving you a very different score. If you go to EthicsGrade and look up some big companies, you'll see a very different picture than if you buy the data from MSCI. And it's not because we're right and MSCI is wrong, it's just, we're looking at different things than MSCI. And to the extent that we are looking at the same things as MSCI, we maybe weight them differently. And that's not just because we're looking at AI ethics and MSCI are looking at more classic environmental sustainability.
We also look at sustainability, but we look at it from a different angle. But if you buy that data from S&P or Moody's, who are looking at it from a very similar perspective to MSCI, again, you will see a different picture. And I think the big fraud, for want of a better word, in the ESG space is this denial that people who are rating, people who are evaluating organizations, are not bringing their subjectivities to the table. And essentially that's the kind of thing we need to blow apart, which is: when you're buying EthicsGrade's data today, you see our ratings. You're buying Charles Radclyffe's view of the world. And my view of the world is not representative of everyone. And so what we need to do is the next step. We need to offer personalization. We need to be able to offer people their own view of the world. Yeah, Jon, you've had a really interesting background in the data analytics space, which is an industry I spent a lot of my career in. We're going to see a lot of the things that I've used on... we're going to see in common. There's going to be differences as well. And I think those differences are really important to highlight. And it's not about shaming people. It's not about telling people they're wrong. It's about just helping people align their capital to values. So that's what we do at EthicsGrade, in a super niche. I'm not trying to pretend that what we do is expansive of all of ESG, but what we do does cover the E, and it does cover the S, and it does cover the G. I mean, a lot of ethical technology governance is about G, but of course, bias and discrimination around data is a lot about S. But also, so is the impact of automation on employment and the nature of employment. That's definitely an S. And the big, dirty secret of the AI industry, as you all know, is the fact that this stuff is intensely energy consumptive. We're taking electricity and turning it into fancy maths. The question is, where are you doing that fancy maths?
Are you doing it in Bangalore or Boston? Because those two data centers will have very, very different energy footprints. One will be coal and the other one will be greener. And also, are you giving your engineers controls to be able to maybe design the training such that it's maybe a little bit more optimized? So are they controlling where it's getting done? Are they controlling the optimizations in play? That's an E question. And so even in our little niche, we cover E, S and G, but by no means the whole spectrum of things.

Jon Prial: Well, I got that Venn diagram in my head. So some of the things that will drive this of course will be regulations. And I'm going to guess that I'll ask you to comment here. GDPR is really a starting point. What's going to be happening in terms of future regulations probably coming out of Europe first?

Charles Radclyffe: Yeah, so specifically around AI. So GDPR, I think to a lot of people, stands as the beginning and the end of regulatory intervention from the European Union. And I guess those people will be deeply disappointed by what's happening now. So in 2019, von der Leyen, the new European Commission president, announced, I would say, really sweeping changes to the regulatory regime around tech in general, across the European Union. So there's a very bold vision. And I think it's worth understanding that vision because everything else fits into context. So the vision is really twofold. Firstly, there should be a European single market for digital. So Europe is a really great place to live and work. Unfortunately, I've been kicked out by my fellow compatriots, but it's a great place to live and work because you can travel freely, you can work freely, you can trade freely. I think that's a really important thing, but in the digital sphere, that's not perfectly true. And I think one of the challenges that's happened in Europe is that companies are incorporated in Ireland or Luxembourg. They've then performed services in Malta or London. And then it's turned out that the consumers in those places who want to raise grievances have found it quite difficult to do so. I'm not mentioning anyone by name, but let's mention Uber, for example, who have definitely exploited that problem. So the digital single market is a way of addressing a level playing field across the union. And there are lots of parts of the Digital Services Act and the Digital Markets Act, which are two other pieces of legislation, which are aimed at addressing some of those challenges. The other aspect that's the prevailing thought process in Brussels is that Europe has, maybe to some people, lost its way in relation to the tech duopoly we have between the United States and China.
And there's a lot of Europeans who are trying to forge a different path in relation to those two economies, and this idea of a European economy of tech, a European tech industry, is something which is deeply attractive. I mean, goodness knows how that's going to be achievable, given now that the UK has left the European orbit. And the UK is a very big constituent part of the tech industry, particularly the AI ecosystem. But leaving that aside, there's this idea that in the seventies, we achieved this with the Boeing threat. Boeing was essentially the only aircraft manufacturer when all the small independents got bought up or went out of business. Now Europe sees a very similar challenge in relation to Google, Facebook and other organizations. And so what we're going to see is essentially two things. We're going to see a lot of regulatory intervention. That's the stick. And we're going to see a lot of fiscal stimulus, and that's the carrot. And whether it works, whether von der Leyen and Thierry Breton, the internal market commissioner, achieve their goal or not remains to be seen. But it's going to be a very, very interesting time. And of course the foundational layer, which is GDPR, is now being built on. So in the last few months we've seen four really important pieces of legislation coming out of Brussels in draft form. One is the Data Governance Act, which looks at public data sources. So GDPR looks at essentially private data sources, private data. The Data Governance Act looks at public data sources, so that's going to be very interesting in terms of taking things like health data and finding ways of exploiting the value in that without exploiting the data subjects. The Digital Services Act and the Digital Markets Act address some of those common market questions I raised. And then most recently, on the 21st of April, the AI Act draft was published. And essentially what this means is anyone doing AI in Europe will have reporting and disclosure opportunities.
And if you're doing high risk AI, those opportunities become mandatory requirements.

Jon Prial: Yeah. It's so fascinating to me. Let's talk about the different... if you could take us through the different AI risk categories, unacceptable and limited, and you mentioned high risk. Could you just quickly go through some of those categories? I think it's important for our audience to understand the different types that are out there.

Charles Radclyffe: Yeah, sure. So I mean, the starting point is that some stuff has been banned, and I think that's probably a bit of a surprise to most of us who were waiting for this thing to happen, because we've been calling for red lines and we didn't really expect to get our wish. Some types of AI have been banned. I mean, it's a pretty small list of things, but I think it sets a precedent. So if you are out and out trying to manipulate people, game people, then what you're doing would no longer be allowed. It's simple [crosstalk]

Jon Prial: So what China is doing in Shanghai with social credit? So, that's banned?

Charles Radclyffe: So it's manipulation and more: if you've got a platform that was deliberately trying to game people through subliminal manipulation, that's strictly banned, with a penalty of 6% of global turnover.

Jon Prial: Ooh.

Charles Radclyffe: And I think that catches... Yeah, so steeper than GDPR. So that catches the worst actors. Social credit's an interesting one. What's been prohibited in the draft legislation, and we'll see what makes it to law finally, is public sector social credit. So if you're a municipal authority and you want to start a credit score to see how well your citizens are putting out their garbage, and then giving them tax breaks as a result of that, then yeah, you can't do that anymore. That would be banned. So I think what Europe has done is really put down a red line to say, we're not going to build a platform like China's. And we could probably talk a whole podcast on that question alone, because that also sets you up on a very difficult path when it comes to automation and alternative ways of creating incentive structures for people in a non-abundance, known-scarce economy. But leaving that aside, the more here-and-now is high-risk AI. High-risk AI is, again, really quite limited in scope, but the most interesting thing, which is going to touch every organization, is HR. I think what's so interesting about that sector is you've got a few watermelons. I won't mention any names, but you've got a few watermelons in play: large organizations that provide HR services to European companies, which look on the face of it quite inoffensive, but probably don't have the right controls in place behind the scenes. And what's really interesting about cloud HR, and I think some organizations like Workday have understood this challenge really well, is they may have thousands of customers or tens of thousands of customers, but each of those customers has thousands if not tens of thousands of employees. So they have millions of people whose employment, whose redundancies, whose training programs, whose hiring decisions are all running on their platform.
So if they screw up, then it's potentially going to be really impactful and cost them their business. So I think the commission have understood that quite well. And there's also a lot of safety usage in marine and automotive, and the aerospace industry is also caught in that. And then for everything else, there's essentially another category, like California has done: you've got to disclose. If you're providing an AI interface like a chatbot, then you have to disclose that you're engaging with a chatbot. That seems pretty sensible. And there's also a deepfake catch, which is if you're producing content that might masquerade as being human-created, you need to label it accordingly.

Jon Prial: So it sounds like you've got a traditional set of customers with all these governmental bodies. Are there other governments, other customers? I've seen the Swiss Digital Initiative. Are there more customers in this space for someone like Ethics Grade now?

Charles Radclyffe: Yeah. I mean, there's a lot of not-for-profits. The Swiss Digital Initiative is doing some really great work around data privacy and online harms, and there's lots of not-for-profits looking at different angles of this. Essentially, what the European AI Act means is that if you're in the business of selling AI within the European Union and it's going to affect European citizens, which is every tech company, let's face it, then if you're in the high-risk stuff, you have to do some conformity assessment and some mandatory reporting and disclosure. And if you're not in a high-risk category, and of course everyone's going to be telling everyone that their particular, special shape of AI system doesn't make it into the rules for X, Y, and Zed, then there's a very strong voluntary scheme that essentially the commission is trying to create. And that's really what Ethics Grade has set out to do. So when you buy a fridge, you get that nice energy sticker on the side of it. Or when you buy a car, you have the Euro NCAP rating scheme, which shows super-safe cars versus those which just inch over the regulatory requirements. We're trying to create the same ecology within Europe. So I think it's a great opportunity. When the European Commission did that consultation in 2020, I read a lot of the consultation responses from industry. And a lot of companies said, oh my goodness, it's a big burden on us. It's a lot of work for us to do. Frankly, that's going to get in the way of innovation; that's going to get in the way of providing great services to consumers. And essentially what we've tried to do is build something which is a minimal burden on industry and also a great experience for consumers, so that people can discern. So you might not care, if you're going to buy a new mobile phone, whether you buy an Apple or a Huawei or a Samsung.
You might not care whether AI ethics is under control in those companies. You might buy on price, on features, on design. But I think some people, people maybe more like me, will care about those things, and it will be a buying decision. We want to create the marketplace that enables people to do that.

Jon Prial: And Apple's made their bed. They've declared that people are going to care about privacy and that they're going to stick to their guns. So there's a company that's really dug in its heels on that. Let's talk a little bit about your business. What's your revenue model? How does Ethics Grade get paid?

Charles Radclyffe: I guess we're at an advantage relative to a lot of not-for-profits who've thought about the same issues we have, in essentially labeling and rating companies and products on their performance. And the advantage we have is very strong commercial discipline, because we've realized that investors need this data in order to make the right investment decisions. And I think the gift that COVID has given us, not that COVID has given us many gifts, but the gift it has given us is that investment portfolios have really shifted from being oil-and-gas and mining focused to tech focused. What that has meant is that fund managers have realized that the ESG data they were buying in was very, very good at understanding environmental risks and human-rights risks, and not so good at understanding technology-governance risks, which is what we cover. So our revenue model is that we license our data. We provide a data feed via our API to patrons and asset managers. And we think that's a really great business. Those people are using our data essentially to build trading strategies, and they therefore rightly pay us for our research. And for everyone else, we give it away for free. So if you're a consumer or a journalist or an academic, or frankly anyone other than somebody trading using our data, you can come to our website and download two-page scorecards on each company that we cover. And we provide that as a free-of-charge service.

Jon Prial: That's tremendous. How much of what you do is automated today, and how much will be automated in the future? How do you view the analysis and collection of data over time?

Charles Radclyffe: So I guess that's why I'm a little bit relaxed about not forcing people into reporting in a very consistent format. I think we can be a lot more permissive in allowing organizations big and small to report how they want to report. And then our job is to be very good at hoovering that data up, processing it, and analyzing it. The benefit I've got personally is that I've run technology teams before. I've worked in financial services, I've run startups. And I know that the worst thing a company like mine can do is jump straight into automation without building up enough training data to support it. So what we've done is really design a process. It's very manual, there's lots of Excel. It's not what you'd expect when you see an AI company. But what we're doing is throwing off, to use a term you'll be very familiar with, the data exhaust. The data exhaust we're deliberately throwing off is the stuff that we know we can train a model with. And at the end of this quarter, we will have three data points per company on each of the 171 questions in our model. And we cover 232 companies so far, so that's quite a good chunk of data. And we're pretty confident that will be enough to start to get some leverage. And really, that's what AI, or any machine, is: a form of leverage. So today it takes us about one day of analyst time to rate a company, and we think we can get that to about seven companies per day per analyst. More than that, and I think we'd be questioning quality, but I think that's something we can do in the next few years. And that's really how we can scale. Essentially, we're playing the same game that many other people in the ESG space have played, in terms of building a research team and finding ways of automating it. There's nothing special about that.
The secret sauce we have is essentially the data model we're building to hold all this data, which enables us to arbitrarily cut that data and personalize the data feed according to your values. We're building a dating website, essentially, a matchmaking service. And that would be very hard to do in the way that we understand other ESG companies have done it. So it's the difference between... In the UK, we still have terrestrial television. You've got four or five channels. That's really the ESG space: you can buy one of five channels and you get their programming. What we're offering is YouTube.

Jon Prial: Over the top, I love it. It's funny, we spent a lot of time talking about Europe, but clearly this is applicable everywhere. Europe may be driving this. They did a lot of driving on GDPR, and this AI Act is amazing, though clearly it hasn't been picked up globally yet, particularly in North America. So Charles Radclyffe, this has been a fantastic discussion. I wish you the best of luck, and thank you for spending the time with us.

Charles Radclyffe: Thanks, Jon. Thank you very much.

DESCRIPTION

Charles Radclyffe is our guest on this episode of the Georgian Impact Podcast. He is a partner at EthicsGrade who specializes in evaluating companies' ESG credentials. In other words, he hunts for watermelons: companies that look green on the outside but are anything but in the middle.


You’ll Hear About:


● EthicsGrade and the work they are doing around environmental, social, and governance (ESG) credentials.

● The difference between corporate social responsibility and ESG.

● How ESG relates to shareholder return and risk management.

● Digital ethics and how it fits with ESG.

● The need to engage with stakeholders and find out what matters to them.

● How the AI Act will affect those working in high-risk areas.

● The different types of customers that EthicsGrade works with.



Today's Hosts

Jon Prial
Jessica Galang

Today's Guests

Charles Radclyffe | Partner at EthicsGrade