Scaling with AI

This is a podcast episode titled "Scaling with AI." The summary for this episode is:

As an early or growth stage company, scaling is always top of mind. Skills are scarce and expensive, so machine learning and AI have to be the foundation you build on. And balancing this opportunity with the challenges it brings is key.

On this special edition of the Georgian Impact Podcast we'll be getting insights on this fascinating topic from Alistair Croll, Beckie Wood, Leslie Fine, Harper Reed, and Jana Eggers, a group uniquely qualified to bring an expanded view of the role of an AI Product Manager.

You'll hear about:
• The unique challenges of scale-stage growth, and what it means to software developers.
• How AI Product Managers must navigate between ethics, technology, business acumen, statistics, design, and customer development.
• The importance of product-market fit, and how tools like V2MOM can help.
• How the wrong team, set of testers, or almost anything else can make ML and AI products behave badly.
• The need for a diverse team working towards a common goal, and why sometimes you need to step outside your comfortable tech bubble.

Who is Alistair Croll?
Alistair Croll is a visiting professor at Harvard Business School, where he teaches a course entitled "Big Data and Critical Thinking." Alistair has been directly involved in the launch of major conferences such as O'Reilly's Strata, Techweb's Cloud Connect, and Interop's Enterprise Cloud Summit. He graduated from Dalhousie University with a B.Com (Honours) and an advanced major in Strategic Marketing.

Who is Beckie Wood?
Beckie Wood is an advisor at VSCO, and was recently Vice President of Product Management and Insights at Pandora, where she led strategic product expansion for both music and non-music content. Along with her team, Beckie helped launch podcasts for millions of listeners and deliver personalized recommendations. She also provided data and user research insights that drove product strategy and prioritization decisions.

Who is Leslie Fine?
Leslie Fine is an advisor at a firm based in San Francisco called Enjoy The Work. Leslie and Enjoy The Work partner with CEOs and founders of startups from seed even as far as Series C and D funding, teaching the craft of entrepreneurship.

Who is Harper Reed?
Harper Reed is a technologist who predicts the future for a living. As CTO of the Obama 2012 campaign, Harper brought a tech mentality to politics. As co-founder of Modest Inc., he garnered the attention of PayPal with the technology he developed, leading to PayPal acquiring the company only a few years after launch. His roles as Head of Commerce and Entrepreneur-in-Residence at PayPal helped him guide his team into the future of e-commerce.

Harper is an MIT Media Lab Director's Fellow, sits on the advisory boards of IIT Computer Science and the Royal United Services Institute, and is on the Cornell College Board of Trustees.

Who is Jana Eggers?
Jana Eggers is the CEO of Nara Logics, a neuroscience-based artificial intelligence company with a focus on turning big data into smart actions. Her understanding of customers and technology comes from technology and executive positions at Intuit, Blackbaud, and Lycos, and as CEO of Spreadshirt. Jana received her bachelor's degree in mathematics and computer science at Hendrix College, followed by graduate school at RPI and supercomputing research at Los Alamos National Laboratory.
Chapters:
• The new challenges AI brings to the world of product management (02:55)
• Why finding product-market fit starts with aligning on objectives (03:08)
• How to think about testing with an AI product (04:00)
• Why it's important to have diverse teams working on ML products (03:53)

Jon Prial: So here you are, an early or growth stage company. Definitely not a startup. You are on your way. Now, there are lots of things to think about, but what might be top of mind is how you are going to scale. You know that machine learning and AI have to be a critical foundation to build on. Skills are scarce, skills are expensive, but there are lots of available resources. It is doable. What matters is how to balance this opportunity and the challenges it brings. Today, we'll be drilling down on this critical topic with a little help from our friends. What's the takeaway? Well, we hope you find a newfound and expanded view of the role of a product manager. As a matter of fact, let's get it right, right now: an AI product manager. I'm Jon Prial, and welcome to the Georgian Impact Podcast. What happened to the good old days of being a startup? Technology evolved, so a startup didn't need a data center. They didn't have to buy a computer, just get the compute power with a credit card, and the world changed. No matter what happened, with software as a service you would expand that access to ERP systems, accounting, marketing automation, sales force automation, and so on. It also changed software development. SaaS eliminated the need to manage and roll out versions of software that then had to be implemented or installed at those data centers. We went from waterfall development to agile development, but now the world is changing again and things are getting harder. Alistair Croll is a visiting professor at Harvard Business School, and he teaches a course entitled Big Data and Critical Thinking. He also partners with Georgian on a conference we aptly named Scale Tech. It's specifically targeted at the senior executives at growth stage companies, bringing together investors, founders, and experts to tackle the unique challenges of scale stage growth and what it means to develop software. He's really nailed this new issue.

Alistair Croll: Once upon a time, you used to write code and produce data. Now you ingest data and produce code from it.

Jon Prial: You see, algorithms were thought through, coded, tested, and deployed. They could be inspected. Once we moved to a SaaS world, the premise of building a product had changed, but not that much: an algorithm was an algorithm. It was understood. But Alistair's point about producing code is about machine learning and artificial intelligence. That's the profound change we'll be talking about today. How do you do it right? How do you know that it's working correctly? And what are the risks?
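To make Alistair's shift concrete, here's a minimal sketch in Python of the difference between writing code that embodies a rule and ingesting data to produce the rule. The data and the learning step are toys invented purely for illustration:

```python
# The old way: a human encodes the logic directly.
def handwritten_rule(amount):
    return amount > 500  # flag transactions over $500 as suspicious

# The new way: the logic (here, just a threshold) is fit to labeled data.
def learn_threshold(examples):
    """examples: list of (amount, is_fraud) pairs."""
    candidates = sorted(amount for amount, _ in examples)
    def accuracy(t):
        return sum((amount > t) == is_fraud for amount, is_fraud in examples)
    return max(candidates, key=accuracy)

history = [(120, False), (80, False), (950, True), (40, False), (700, True)]
threshold = learn_threshold(history)
print(f"learned threshold: {threshold}")  # the 'code' was produced from data
```

The learned rule is only as good as the history it was fit to, which is exactly why the questions that follow, about error rates and error severity, become product questions and not just engineering ones.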

Alistair Croll: For example, how do you even get a gut feel for what kind of error rates are achievable and what error rates are acceptable for your application? You just don't know. There's no clear, right answer. Fraud detection, maybe you risk money. Your product recommendations may be wrong; you may have liability issues because occasionally it recommends the wrong product. And there's a range of reliability here. Getting the wrong product recommendation is not a big deal, it's just a refund. Fraud detection, you might lose some money. Autonomous vehicles hitting one another, that's a huge problem.
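A lot of that range of reliability comes down to one engineering control: the decision threshold. Raise it and the system acts less often, so you get fewer false alarms but more missed events. A toy sketch, with confidence scores invented for illustration:

```python
# A decision threshold trades false positives against false negatives.
# Scores and labels below are invented; in a real system they'd come
# from a trained model and an evaluation set.
events = [  # (model confidence that action is needed, ground truth)
    (0.95, True), (0.80, True), (0.75, True), (0.60, False),
    (0.55, True), (0.40, False), (0.35, False), (0.20, False),
]

for threshold in (0.3, 0.5, 0.7, 0.9):
    flagged = [(s, t) for s, t in events if s >= threshold]
    false_alarms = sum(not t for _, t in flagged)
    misses = sum(t for s, t in events if s < threshold)
    print(f"threshold {threshold}: {false_alarms} false alarms, {misses} misses")
```

Where you set the dial depends entirely on which mistake is more expensive, which is exactly the severity question Jon picks up next.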

Jon Prial: Now, Alistair is bringing up two issues. We have an error rate, but we also have error severity. One sounds like code hygiene, and one sounds maybe a little scarier. I don't necessarily like it, but I happily accept that an e-commerce site might tell me that because I bought a particular widget, I might want to buy a toaster, because other people have done that. It might also tell me that others who bought this widget bought Lamborghinis. Now, I'll live. But I might be a bit more irritated with the Lamborghini as a recommendation, and I might question if they really know me. If this happens often, it actually might turn me off as a user of the product.
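One way to reason about those two dials together is expected cost: how often you're wrong, times what each mistake costs. Every number below is made up purely to illustrate the arithmetic:

```python
# Error *rate* and error *severity* are different dials; expected cost
# combines them. All figures are invented for illustration.
applications = {
    # name: (errors per 1,000 decisions, cost per error in dollars)
    "product recommendation": (50, 2),            # a refund, mild annoyance
    "fraud detection":        (5, 400),           # real money lost
    "autonomous driving":     (0.01, 5_000_000),  # potentially catastrophic
}

for name, (rate_per_1000, cost) in applications.items():
    expected_cost = rate_per_1000 / 1000 * cost  # per decision
    print(f"{name:25s} expected cost per decision: ${expected_cost:,.2f}")
```

Note what the toy numbers show: the application with by far the lowest error rate can still dominate on cost, which is why "how accurate is it?" is never the whole question.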

Beckie Wood: It was actually funny, we call them WTFs, and it's like a metric to just be like, "Oh no, that was a terrible thing that we played for this end-user." We did monitor the percentage of WTFs and kept those to a minimum as much as possible.
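That WTF percentage is a real, trackable metric. Here's a minimal sketch of what such a monitor might look like; the event format, window size, and alert threshold are all assumptions for illustration, not Pandora's actual system:

```python
# Sketch of a "WTF rate" monitor: the share of recent recommendations
# that users flagged as terrible. Hypothetical design throughout.
from collections import deque

class WtfMonitor:
    """Track the WTF rate over the most recent N recommendations."""
    def __init__(self, window=10_000, alert_rate=0.01):
        self.events = deque(maxlen=window)  # True = user flagged a WTF
        self.alert_rate = alert_rate

    def record(self, was_wtf: bool):
        self.events.append(was_wtf)

    @property
    def wtf_rate(self):
        return sum(self.events) / len(self.events) if self.events else 0.0

    def should_alert(self):
        return self.wtf_rate > self.alert_rate

monitor = WtfMonitor(window=1000, alert_rate=0.02)
for i in range(500):
    monitor.record(i % 40 == 0)  # simulate ~2.5% terrible recommendations
print(f"WTF rate: {monitor.wtf_rate:.1%}, alert: {monitor.should_alert()}")
```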

Jon Prial: That's Beckie Wood, currently an advisor at VSCO. She was recently Vice President of Product Management and Insights at Pandora. Now, as a Pandora user, I know they do better than these toaster recommendations I talked about. And I do like that, but I like even more her fabulous new metric. But is a WTF sufficient? Rejecting me for a loan application because of some non-transparent black-box ML algorithm? That could be a lawsuit, perhaps. My self-driving car hits a bicyclist because, to avoid false positives, the algorithm turned up the "hey, let's not irritate the passenger by slowing down too much" dial. I don't know that bicyclists consider themselves a false positive. So I think we've framed the problem now. From a bad recommendation or two to a lawsuit or more, is there even a way to measure that? I mean, at a minimum, how do we ensure that issues such as these are brought to the forefront of product management thinking? When writing our principles of applied artificial intelligence and our principles of conversational AI, we came close to adding a simple idea. We called it Don't Be Creepy. We decided to be a bit more professional in our writing, but that is one of the reasons why scaling your startup in this new world of data, ML, and AI is challenging. Now, I'm really glad we didn't put creepy in our writing, because it just comes up short. The toaster, or the Lamborghini, is creepy. Although maybe creepy is in the eye of the beholder. Creepy is just badly using data in a way that could upset a user; it's not sufficient. You see, one mistake can be costly. With ML and AI solutions proliferating everywhere, we have to recognize the impact of getting this wrong. It comes down to this new world of product management driven by how the data is used. Here's Beckie again. In her 12 years at Pandora, the company scaled from a total of 60 people to where she managed a team of 65 analysts. Learning from user interaction became paramount. Learning that people wanted Christmas music wasn't the same thing as hearing music from a band called Christmas. Insights like that helped her team and the product evolve.

Beckie Wood: We were extremely hypothesis driven. So if we had a hypothesis of what type of either implicit or explicit signal would be valuable from our users, we made sure to test that and run those metrics.
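Hypothesis-driven signal testing like Beckie describes usually cashes out as a controlled experiment. As a sketch, here's a plain two-proportion z-test on whether a new implicit signal lifted a success metric; the numbers are toys, and this is a generic statistical test, not whatever Pandora actually ran:

```python
# Did exposing a new implicit signal (say, "skipped within 10 seconds")
# to the recommender actually lift thumb-ups? Toy numbers throughout.
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-score for the difference between two conversion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# control: old recommender; treatment: recommender using the new signal
z = two_proportion_z(success_a=4_200, n_a=50_000,   # 8.4% thumb-up rate
                     success_b=4_550, n_b=50_000)   # 9.1% thumb-up rate
print(f"z = {z:.2f}; |z| > 1.96 means significant at ~95%: {abs(z) > 1.96}")
```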

Jon Prial: So Beckie is talking about signals, and that's signals from the data. In a nutshell, that is a critical part of product management in an AI world. Why is it so different?

Alistair Croll: You need to triangulate across ethics, and technology, and business acumen, and statistics, and design and customer development.

Jon Prial: Alistair is right on point here. There are many ways that technology can help bring all of these points together into a product management plan. It's not that hard, right Beckie?

Beckie Wood: At Pandora, we actually had different product managers. We had traditional product managers who worked on front-end products, things like the front-end UI. And then we had AI PMs who worked specifically with our machine learning scientists on those specific problems. I think the traditional product management is pretty clear. You're figuring out what you're building while you're building it. You're looking at front-end use cases all the way to back-end use cases, partnering with engineers, getting things built. On the AI side, really it's about thinking about the problem set for the recommenders and partnering very closely with your machine learning scientists. And the reality is the roles of what those two folks do are pretty aligned. There are different techniques, meaning the product manager is defining what the problem set is and the scientists are actually going out and implementing the solution, but it's much more of a collaborative experience. In my experience, a lot of the product management around AI is about product definition. So it's about helping the scientists singularly think about what problem they should go solve and then partnering very collaboratively with them. We actually hired PMs that were back-end PMs and AI PMs in a little bit of a different way. They were more technical; they understood modeling and basic machine learning technologies.

Jon Prial: Lest I get too caught up in this discussion of an AI product manager, let's step back. Some of the basics of traditional product management that Beckie refers to are still required and have not changed. You don't just throw money at AI by hiring a couple of data scientists and waving your hand.

Beckie Wood: A good company should already know what they're building and why they're building it, and the best companies do have really a clear mission. But I think for most companies, particularly smaller startups, even growing into larger ones, it's very hard to find great product market fit from the beginning. And so really people are often kind of like scrambling to figure out that product market fit.

Jon Prial: So getting product-market fit right is one of the critical pillars of success for a startup, and obviously a cornerstone of the product management role. One tool that we've been advocating is V2MOM. Let's touch on that briefly.

Leslie Fine: I am Leslie Fine, I'm an advisor at a firm in San Francisco called Enjoy The Work. We are a partnership that spends all of our time working with CEOs and founders of startups, from seed even as far as Series C and D, teaching the craft of entrepreneurship.

Jon Prial: V2MOM or other tools, you've got to have one, right Leslie?

Leslie Fine: There's OKRs, which have become super popular, management by objectives, BHAGs. There's a dozen different ones that are popular. At some level, I don't care. The point is that you should have one. Any of them are better than none, and none of them are useful on an island. These things are only useful if they become the mechanism by which you check in on your company, and you make decisions, and you hold each other accountable. So the high level doesn't matter.

Jon Prial: And V2MOM?

Leslie Fine: V2MOM, it is a terrible acronym, but it stands for five things: vision and values are the V2, and then methods, obstacles, and metrics. There's a couple of reasons I really like it. Most of the frameworks are great for accountability, but before you hold people accountable, you should be aligned on what's important to the company as a whole: where it is we're going, our mission, and what values or principles we're going to bring to it as we make these decisions. The other reason I like it is because I've just seen it work.
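If you like keeping alignment artifacts next to the product rather than in a slide deck, a V2MOM is small enough to encode directly. A hypothetical sketch (the structure follows Leslie's five components; the contents are entirely invented):

```python
# V2MOM as a simple structure: vision, values, methods, obstacles,
# measures. Example contents are made up for illustration.
from dataclasses import dataclass

@dataclass
class V2MOM:
    vision: str
    values: list[str]
    methods: list[str]        # how we'll get the job done
    obstacles: list[str]      # what could stop us
    measures: dict[str, str]  # metric -> target, ideally numerical

plan = V2MOM(
    vision="Every listener hears music they love within 30 seconds",
    values=["listener trust", "transparency", "don't be creepy"],
    methods=["ship AI-PM-defined recommender improvements quarterly"],
    obstacles=["sparse signals for new users", "WTF recommendations"],
    measures={"thumb-up rate": ">= 9%", "WTF rate": "< 1%"},
)
print(plan.vision)
```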

Jon Prial: I really like this thoughtful approach that translates business needs into a product. V2MOM originated at Salesforce.com, so let me share Marc Benioff's recap of its use: "The vision helped us define what we wanted to do. The values established what was most important about that vision; they set the principles and beliefs that guided it in priority. The methods illustrated how we would get the job done by outlining the actions and the steps that everyone needed to take. The obstacles identified the challenges, problems, and issues we'd have to overcome to achieve our vision. Finally, the measures specified the actual result we aimed to achieve. Often this was defined as a numerical outcome." For a lot more detail on V2MOM and how it could assist you with product-market fit and so much more, we've put links in our show notes to a Georgian Growth Podcast episode in which our own Evan Lewis has an extended interview with Leslie. I know you'll enjoy it. So that was a short but important diversion into the tools. Let's get back to the role. We have touched on traditional PM and corporate alignment, and we've begun to dig into this AI product management role. We have to make sure the product works, and we have to make sure it has strong product-market fit. But let's go back a bit to creepy and start to think a little more about how the wrong team, or the wrong set of testers, or almost anything else can make an ML and AI product do some pretty bad things that could seriously affect the company's success. Facebook, and I'm not going to spend a lot of time on Facebook specifically, but here's a bit of a mess that was made because of some very loose security controls in Facebook years ago, controls that have only this year been fixed.

Harper Reed: We had created this API that would take your Facebook friends, and it would basically slurp up everything. And since this was a while ago, we also had friends of friends. So we had your friends, your friends' friends, and your friends of friends. All of this stuff, we'd slurp it all up. We built this graph, and then we did this modeling that allowed us to determine who your best friend was. This was a pretty exciting thing that we were able to do, and it worked out super, super well. We used this in a few things, but in our testing of it, we went through and we asked, is this creepy? And so we would test this over and over and over again, amongst our small user experience group, all of our people in the office. It was an open floor plan office with a thousand people in it. It was pretty gnarly, but we tested with as many people as possible. The question that we asked was, is this creepy? Because one of the things we found is that people were not expecting how much data they were shedding from these kinds of small social experiences. I really like to think a lot about, what is that creepiness? How do we measure that creepiness? I actually think the best way to do this is to be very thoughtful about what is creepy for you, figuring out how you and your team define creepiness, and then defining that. Actually being very specific about making sure that you have a broad definition of creepiness is also very helpful.
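Harper's "is this creepy?" question can be turned into a number. One way to sketch it, and this is an assumption about process rather than anything his team actually built: aggregate survey scores per tester group, and gate the launch on the most creeped-out group rather than the average. The scale and threshold below are invented:

```python
# Aggregate "is this creepy?" ratings across tester groups and gate the
# launch on the worst group, not the overall mean. Hypothetical design.
from statistics import mean

def creepiness_report(ratings, block_threshold=3.0):
    """ratings: {tester_group: [scores 1 (fine) .. 5 (very creepy)]}"""
    overall = mean(score for group in ratings.values() for score in group)
    worst_group, worst = max(
        ((g, mean(s)) for g, s in ratings.items()), key=lambda x: x[1])
    return {
        "overall": round(overall, 2),
        "worst_group": worst_group,
        "worst_group_mean": round(worst, 2),
        # block launch if ANY group finds it creepy, not just the average
        "ship": worst < block_threshold,
    }

ratings = {
    "engineers":      [1, 2, 1, 2],
    "field_staff":    [2, 3, 2, 3],
    "new_volunteers": [4, 4, 5, 3],  # least context, most creeped out
}
print(creepiness_report(ratings))
```

Gating on the worst group is the point: an average across a homogeneous office can look fine while the people the product actually surprises are telling you otherwise.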

Jon Prial: That's Harper Reed. Harper was the CTO of the 2012 Obama campaign, and he and his team were smart enough to test, test, and test before rolling out a pretty serious algorithm. The algorithm was fine, but how it would roll out, and how people would feel when they saw what was happening, turned out to be pretty high risk. The key here is if you don't think about the impact of the ML and AI you're implementing, you cross a line. A creepy line, a line of regulation, a legal line. And none of this is tech, right? This is people. Any thoughts, Harper?

Harper Reed: Humans for quality, machines for scale. One of the things about my career is I've always been very focused on humans, but I didn't know a way to actually say those words. I think there's this interesting question of how do you scale technology, but there's also this interesting question about the humanization of this technology. How do we take all of this cool technology and make sure that we're humanizing it?

Jon Prial: Yep. Machines are critical in scaling, people for quality. I really liked that. The writing of algorithms has evolved to the creation of machine learning models. We have new AI solutions cropping up everywhere. This is all fantastic stuff, truly fantastic. But Harper's point is about humanizing and recognizing that you need input from people for quality. Simple, right? No, not really.

Harper Reed: One of the things that I found is there were a lot of very well-intended technologists who were coming into very interesting rooms, where they were the expert on technology and they were selling a technology that they had invented or were working with or what have you. But there were a couple of voices that were missing from this. The first voice, and I think this is something that really struck me, was that there was no voice talking about what could go wrong.

Jon Prial: Not like it's hard enough as it is to be a successful startup.

Harper Reed: We rarely say, "I'm going to start a company and it's probably going to fail," et cetera. We say that as a joke, but in our heads we don't really think that, we don't mean that. We know the statistics on success, but we think, "Oh, this is going to work perfectly. We are so smart. We have a great team, the best team I've ever seen, the best team I've ever worked with."

Jon Prial: But seriously, we all have a need for diversity. If there's a team collecting and building ML tools around a large dataset and rolling out an application, remember that this is an application that wasn't built from an algorithm a programmer created. It's an app that came out of the data that went into a black box. To be very blunt, here's the net of it: if you have a team of white men, perhaps even with the same educational demographic, you have an increased risk of failing.

Harper Reed: So the way I've done this in the recent past, in the past couple of years, is to make sure that my teams are actually relatively diverse. To make sure that you really are optimizing for diversity within those teams, especially around data and AI products that people are going to be using. And when I say products that people are going to be using, I don't mean some internal analytics product, although diversity is going to help there as well. I'm talking specifically about when you're building products for an external customer. What we have found, especially when it comes to these kinds of tooling around data and AI, is to really make sure that the team building them is as diverse as possible. That way we don't run into some of the most terrible things on the internet, which are all of these really bad experiences that these companies have, where they launch something and some absolutely terrible thing occurs. Like the very famous case of Google tagging black people as gorillas, or all of these very famous things. So when we're building these things, we have to make sure, and we started thinking about this as a metric: how do we get out of creepiness? Just expand the set of people helping test these things out.
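One engineering practice that follows from Harper's point, sketched here as an assumption about good hygiene rather than his team's actual tooling: never ship on a single aggregate accuracy number; slice the evaluation by subgroup so a failure concentrated in one population can't hide. Toy, fabricated data below:

```python
# Slice evaluation error rates by subgroup; an aggregate number can look
# fine while one group's experience is terrible. Toy data throughout.
def error_rate_by_group(examples):
    """examples: list of (group, prediction, truth) tuples."""
    totals, errors = {}, {}
    for group, pred, truth in examples:
        totals[group] = totals.get(group, 0) + 1
        errors[group] = errors.get(group, 0) + (pred != truth)
    return {g: errors[g] / totals[g] for g in totals}

results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]
rates = error_rate_by_group(results)
print(rates)  # aggregate error is 25%; group_b's 50% error rate is the story
assert max(rates.values()) - min(rates.values()) < 0.6  # a (loose) launch gate
```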

Jon Prial: Startups are still exciting. It's easy to move fast, break things, do no evil, all the catchphrases, but we have to think about the consequences that we've addressed before. Recommending a toaster that someone doesn't want is not the same thing as taking a right turn onto a one-way street going the wrong way.

Harper Reed: So often we talk to startups, even our own teams, where we think we're doing the right thing, and it's super hard to figure out, because oftentimes the only way to really get a good view of that is in retrospect. So I do think that we have to be very, very, very thoughtful around how we define, how we think about unintended consequences.

Jon Prial: Hey, it's hard to get out of your comfortable tech bubble, and it's bigger than tech. As a manager, one of the best pieces of advice I ever received was to recognize that there are people on my team that I'm more comfortable with and some people that I'm not. So go make the effort, Jon. Connect with those that you aren't as comfortable with, recognize and cultivate the talent that's probably not quite as aligned with you, and you'll be leveraging a far more diverse set of talents.

Harper Reed: I think the trick is that we need to expand those conversations. We need to figure out how to bring those people into the conversation. Oftentimes, and I know that I have felt this, I'm scared to interact with some of those folks. The reason is because oftentimes I feel like it's not a conversation. Well, I have a quick hack for this. I think it works most of the time, but the hack is: introduce it. Pull that elephant out into the open. Just be like, "Look, we need to have a conversation about this, because what I was thinking is that we're going to build this, but I want to know why we shouldn't. Not why we shouldn't as in we should stop immediately, but how do we build it? What is the process we go through to get to the end?"

Jon Prial: The process should be one that's both internal and external. Think about the product coming out the other end; remember V2MOM. As we close, let's hear from Jana Eggers. Jana is CEO of Nara Logics, and her company focuses on explainable AI that you can trust. The team that your AI product manager is leading has to be aligned to the top, has to be aligned to all aspects of your company's strategy. Jana has a final word on what it takes to assemble the right diverse teams, and she's got a clear point of view about insourcing versus outsourcing.

Jana Eggers: I talk with a lot of executives about this, and a lot of the executives that we work with are used to outsourcing engineering. They just hired people outside, and I'm not even talking about offshoring or anything like that; they've never had engineering in their organizations. And so I'm telling them, "You need to have engineers in your organization, because they're going to be closest to your values and what you're doing, and this is not just going to be a project that they develop." So I think that's one thing that's important: that engineering is actually brought in and becomes close to whatever it is that you're doing, because that's going to represent the values of your organization more. And then the second thing I'd say is diversity is important, but really just in the roles. You have people who work across functions: it's product management, it's engineering, it's design, it's data science, it's operations. I mean, DevOps, people coming together. Those come from all different perspectives, and you have to have all of that together; what their background is doesn't matter. It's how they work as a team. If there are divisions between those groups, that's where you run into problems.

Jon Prial: An AI product manager, a wrangler of data, a wrangler of ideas, an aggregator of opinions culminating as a representation of corporate values and strategy. Seems like a dream job to me. For the Georgian Impact Podcast, I'm Jon Prial. Thanks for listening. If you appreciate the Georgian Impact Podcast, would you mind going to your podcast app and rating us, so that others can also find us and hopefully also find some value? Thank you so much.
