Episode 91: The Future is Robots and AI-Powered Software with Suzanne Gildert

Warning: This transcript was created using AI and will contain several inaccuracies.

I'd like to do something a little different today. I sent you a little script, so let's pretend this is a casting call and just read our three lines. Are you ready? Yep, I am ready to go. "I understand what I am made of, how I'm coded, but I don't understand the things that I feel. Are they real, or imagined?" "Suffering makes you lifelike." "Lifelike, but not alive? Pain only exists in the mind. It's always imagined. So what's the difference between my pain and yours, between you and me?"

Suzanne, did that ring a bell? It sounds very familiar. Do you know where it came from? I was absolutely thrilled: it was season one, episode eight, between Ford, the Anthony Hopkins character, and Jeffrey Wright's character, Bernard. Given your work, what did you find most interesting about Westworld and the hosts?

The thing that I find most interesting about the entire show is that it really asked this question. We always think of AI and robots as something that's going to, you know, impinge on us as a society and as people, and maybe diminish our humanity. I think that show really asked more questions about humans than it did about technology. The show really explored what it means to be human, and looked at more of the dark side of what humans might do to other beings if they're allowed to do whatever they want. So holding up a mirror to what it means to be human was really the premise of the whole show.

I am excited to welcome Suzanne Gildert to the show. Suzanne is the founder and CEO of a startup called Sanctuary AI that builds human-like robots called synths, or synthetic humans, complete with human-like bodies and minds. Prior to this, she was founder and CEO of Kindred AI, where she oversaw the design and engineering of that company's human-like robots and was responsible for the development of cognitive architectures that allow them to learn about themselves and their environments. As you can guess from our little role-playing exercise, Suzanne and I are going to be talking about robots and AI-powered software, but we'll focus on what it takes for them to become part of a business process, where we can go with these robots, some of the massive datasets being built, and thoughts about learning and training AIs.

We'll probably cover some unexpected ground too, but I think it'll all be relevant to your current business strategy thinking, so stay tuned. It's going to be a very interesting show. I'm Jon Prial, and welcome to the Georgian Impact podcast.

I'm just excited to have you here today. So welcome, Suzanne.

It's great to be here talking about this stuff.

I think what's interesting is that in the same dialogue I extracted the little script from, Anthony Hopkins' character argues, or makes the statement, that an AI actually needs a backstory. He says, "Every host needs a backstory, Bernard, you know that." This kind of fiction is for hosts and humans alike. So I'd love to get your sense of what it takes for a robot today. How much is required beyond just a mass of data to train it? Where are we going?

I think that's a really hard question to unpack. The answer is: a lot. I'm currently in the camp where I think a lot of techniques in AI, and especially in modern machine learning, are really a big part of the system, but they're not enough to get us to things that act and think like people, or even like animals and other creatures. First of all, there's a huge amount of innate structure in our brains when we're born, and that develops, and it gives us a framework within which we can then supply training data. We can't learn everything in a purely supervised or unsupervised way; there's some substrate that it all goes into, and we really don't know what that is in AI yet. Some people are doing interesting work in that area, but there's a whole big part of AI that's missing: joining together the ideas that came from good old-fashioned AI (symbolic logic, reasoning, and symbol manipulation) with the new wave of neural nets, connectionist approaches, machine learning, and pattern recognition. Joining those two worlds together is something I think needs to happen in AI. We're not quite there yet, but it's looking promising; people have ideas in that whole area.

And then there are these things like characters. How do you give them stories? How do you give them personalities, and things like that? For a lot of AI you don't need that, right? It's just going to do a task, and it's going to know things. But if you're trying to create human-like AI, those things have to come across as being human-like. They have to have interests and little stories and memories of things they've experienced. So one of the things we're looking at is: how can we do that? How can we give AIs a backstory, memories, some kind of history, to mimic the experience of things?

There are two parts to this backstory discussion, and the first part of your answer was actually interesting too, because you said the AI needs a framework. In my head I'm going: that really is a backstory. It's not just mashing the data; it's matching the data to a context, and we as humans naturally infer the context. It's always interesting because, although I'm a techie, I'm not the techiest of techies, and I always have a little bit of a debate, because, you know, "the answer is in the data, and the data predicts." I'm thinking about the Netflix case: it's not asking me things it should be asking me, like why I watched that. Is it just the data, or is there more to Jon's viewing habits that needs to be thought about, like time of day? There must be other factors in there. So I really do appreciate where you're going with the real backstory as these things develop personas.

I think those are both backstories, just on different timescales. There's the lifetime backstory of experiences and memories a thing might have, at one limit, but then underneath there's the AI side, the framework, and how all the symbols fit together. You can think of that as a sort of evolutionary backstory: a chance for things to fit into place as the data comes in. The innate structures in our brains that encode knowledge are a bit like our evolutionary memories, things that we experienced many, many generations ago that have somehow stayed in there, ready for us when we're born, ready to be populated. It's ancestral memory; it's genetics; it's innate. Why does a dog behave in a certain way? Why do puppies all play the same way, across all these different breeds? It's just a question of whether those memories were acquired over a lifetime or over evolutionary timescales.

So in terms of making an AI effective, you're already looking, even in round one, at the here and now. I'm going to ask you later about the big future, but even in the here and now, to get an effective AI, you're really focused on being more than just the data. Am I overstating?

I think that's a good way of putting it. Yeah, you can learn a lot from data, but what's missing is some sort of structure into which that data can flow. You can train an AI to do almost anything with enough examples, but it doesn't take a human child a million examples to learn, you know, the difference between a circle block and a square block, right? They need like five examples of something. So there's something going on that isn't just unsupervised learning. We're obviously born with some innate knowledge, and then we just need a few examples to really understand that difference.
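To put the "five examples" point in concrete terms, here is a toy sketch, not from the episode: a hand-coded shape feature plays the role of innate structure, and a nearest-centroid rule then tells circle blocks from square blocks after seeing only five examples of each. Every shape, feature, and number below is invented for the sketch.

```python
# Toy few-shot learning sketch (not from the episode): a hand-coded
# "innate" feature lets a nearest-centroid rule separate circles from
# squares after only five examples per class.
import numpy as np

def make_shape(kind, size=16):
    """Render a filled circle or square as a binary image."""
    img = np.zeros((size, size))
    c, r = size / 2 - 0.5, np.random.uniform(3, 6)
    yy, xx = np.mgrid[0:size, 0:size]
    if kind == "circle":
        img[(yy - c) ** 2 + (xx - c) ** 2 <= r ** 2] = 1
    else:
        img[(np.abs(yy - c) <= r) & (np.abs(xx - c) <= r)] = 1
    return img

def circularity(img):
    """'Innate' structure: perimeter^2 / area, roughly scale-free and
    systematically different for circles and squares."""
    perimeter = np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()
    return perimeter ** 2 / img.sum()

# "Five examples" per class: one centroid in the innate feature space.
centroids = {k: np.mean([circularity(make_shape(k)) for _ in range(5)])
             for k in ("circle", "square")}

test = make_shape("circle")
guess = min(centroids, key=lambda k: abs(circularity(test) - centroids[k]))
print(guess)  # almost always "circle", after just 10 training examples
```

The point of the sketch is that the heavy lifting is done by the built-in feature, not by data volume; without such a prior, a pixel-level learner would need far more examples.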

This leads us down this broader path where companies can differentiate themselves on trust: being clear about what they're doing in many different ways with all this data, and being transparent about what they're collecting and what they're saying. So beyond the user interface, if I can generalize the question: what do you feel a company's obligation might be to explain that an AI is engaged and working on behalf of an end user, whether it's visible or not?

I don't think the point here is whether an AI is in the loop or not. To me that's less important than what you're trying to achieve with the interaction. If, by knowing it's an AI, the person gains some extra information, say, that this same AI is calling a million people simultaneously and asking them the same question, then maybe that's important for the person to know, rather than it being a one-on-one human thing. So it's not about it being an AI; I think it's about the consequences of what that means to the person on the other end of the call. For example, if it's an AI, maybe the person infers that all their data is going to be stored in some database, but the same thing might happen with a human if the call is being recorded. So I think it's more about the intent of what you're trying to do with the call than about whether it's an AI or a human you're talking to. It's more about the broad, holistic corporate strategy: not "is it AI or not," but how the company is dealing with you as a human person. In a way, when you talk to someone from a company on the phone, you're not really talking to a person in their individual human capacity; you're talking to a vehicle for the company's message. You see this very strongly when you try to bring them off script. Why are they reading things out from a screen, like a script, at you? It's like, no, I just want to tell you what my problem is. "Have you tried this thing?" So that happens with human-to-human interactions as well; it's not exactly a great interaction either.

You mentioned that earlier, and I want to get into this issue of the negative consequences of dealing with AIs. In my mind, it's always about training first. You always hear about edge cases: with image processing in self-driving cars, you need to fight against adversarial attacks that might, you know, turn a stop sign into a yield sign. As all this training happens, you need to make sure you're capturing the good side as well as the bad side. So do you see this as more than just training? Because you're not going to stop people behaving badly, and you obviously want your entities to deal with it in a certain way. What's your view of how to work with the bad side of human nature? Because I don't think we, or an AI, have the ability to fix the bad side of human nature.

Do you mean how humans react to new circumstances? Because that's a little bit confusing.

It's actually very broad, and probably what I should have said, what was in the back of my mind, was Microsoft putting Tay out in the wild, and then all the bad human behavior turning Tay into a racist.

I see what you mean. Like Westworld: people going there just to shoot robots and get their jollies from it.

I think that's less important. More important is that humans are going to have influence: the data you're capturing from human interaction is going to affect this AI, and I'm thinking you've got to catch this before you put it out there.

Yeah, okay, now I fully understand what you're asking. This is one thing we're thinking about a lot at Sanctuary; in fact, it's one of the reasons we called the company Sanctuary. I actually believe that when AIs are developing, at least ones that you want to become human-like, you have to keep them a little bit isolated from the world. We do the same thing with children: we don't let really young children go and watch horror movies in the cinema, because we know that affects the developing brain. I think the same thing is true with developing AIs. You want to keep them a little bit cushioned, a little bit isolated, from certain types of data. That doesn't mean they'll never be able to see that data; it means you have to be selective about the order in which you show them things, and about how you curate the first instances of data the thing sees, because those can shape the mind in a different way.

Tay is a really good example. Imagine if Tay had initially been trained on a bunch of non-troll data to begin with, say conversations between academics or something like that, as its first dataset, and the model had settled some of its weights from that dataset, and only then been exposed to a little bit of the trolling internet stuff. It may not have turned out the way it did if it had been exposed to things in a different order. So I think you really do have to think about what you show AIs and robots, and in what order, and I think it should follow some kind of childhood development cycle. As you mentioned, at first they're exceedingly naive: they're going to make stupid decisions, they're going to do things wrong. So what you have to do is correspondingly show them things that are fun and easy, and allow them to develop under those circumstances, so that they can then be exposed to some of the more nasty, brutish, horrible things we have in our world a bit later on, when they can more easily handle them.

That is so cool, because that really is the thinking framework you mentioned earlier. By giving it the evolutionary dataset first, you're creating its framework. So it's a toddler: it's learning how to communicate with toddlers in a house before it's learning how to deal with trolls in an online chat forum. It kind of seems obvious to me that AI is in the same situation; there's a piecemeal manner of training. I don't think I've ever thought about training data being evolutionary as a way of getting bad bias out, and that's really interesting.
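As a rough sketch of that staged, curriculum-style exposure (an illustration of the idea, not Sanctuary's actual pipeline), the loop below trains a simple model on curated, benign data first and the noisy "troll" data last; the stage names, data, and noise levels are all invented:

```python
# Minimal curriculum-learning sketch: the model sees clean data first
# and noisy data last, instead of drawing from everything at once.
import numpy as np

rng = np.random.default_rng(0)

def make_stage(n, noise):
    """Linearly separable 2-D data whose labels get noisier per stage."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    flip = rng.random(n) < noise          # label noise stands in for "trolls"
    y[flip] = 1 - y[flip]
    return X, y

# Easy and benign first, noisy and adversarial last: the order is the point.
curriculum = [("curated", make_stage(200, 0.00)),
              ("everyday", make_stage(200, 0.05)),
              ("trolling", make_stage(200, 0.30))]

w, lr = np.zeros(2), 0.1
for name, (X, y) in curriculum:
    for xi, yi in zip(X, y):              # one SGD pass per stage
        p = 1 / (1 + np.exp(-xi @ w))     # logistic prediction
        w += lr * (yi - p) * xi
    print(f"after '{name}' stage, weights = {w.round(2)}")
```

By the time the noisy stage arrives, the weights already encode the clean concept, so the "trolling" data perturbs rather than defines them, which is the ordering effect Suzanne describes.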

So let me ask how you might do testing. There are two elements: a data science and machine learning element in building the dataset, and then the strategy in the training. I can take this back to my old-timey days, when I was coding a zillion years ago and it was all waterfall, two-year development cycles. There were design teams and development teams, and then testing teams, and the testers, in my mind as a programmer, were adversarial to me. So as you test your AI, do we need to think about having adversarial testers, or does that get covered on the data side of things?

Yeah, I think you need something to cover that. What you should probably do is the same thing we do when we test people as we're educating them. You want to give them enough material that you know they can handle, so they get a reasonable grade and don't feel like they're completely failing at everything, but you also want to give them something new that they've never seen before, which might be some kind of adversarial example they have to figure out. So when you're training an AI, you always want to be on the leading edge of its knowledge: always giving it something a little harder than what it has been able to deal with in the past, but not so hard that it's so far from its current concepts that it won't be able to do it. So yes, I do think we want to give these things adversarial examples. One of the problems with the adversarial examples you see in deep networks is that the network structure is actually not designed to handle things outside what it has learned to understand. It comes back to this symbolic framework. If you have adversarial examples that seem really stupid to humans, where you change a couple of pixels and it thinks the image is a kangaroo or whatever, those examples show that something is lacking in your learning framework. It's obviously not putting the image into the right bucket to begin with, because if it were, a couple of pixels changing would never move it over to an entirely different category. So I think you do need a structure on top that tells the system what to roughly expect from the data, and then it builds those concepts in.
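A bare-bones illustration of that failure mode, mine rather than anything from the episode: for a purely linear scorer, nudging every input value a tiny amount in the right direction flips the decision, the "change a couple of pixels and it's a kangaroo" problem in miniature. All numbers below are invented.

```python
# FGSM-style adversarial step on a toy linear classifier: a tiny,
# uniform per-pixel change flips the sign of the score (the "class").
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=64)            # weights of a toy linear "image" classifier
x = rng.normal(size=64)            # a toy 8x8 "image", flattened
score = x @ w
print("original score:", round(score, 2))

eps = 1.1 * abs(score) / np.abs(w).sum()       # just enough budget to cross zero
x_adv = x - eps * np.sign(w) * np.sign(score)  # step against the current score
print("max pixel change:", round(eps, 4))      # tiny relative to pixel scale
print("adversarial score:", round(x_adv @ w, 2))  # sign flips -> new "class"
```

For a linear model the gradient of the score is just the weight vector, so the sign step is optimal under a max-change budget; deep networks behave locally linearly enough that the same trick works on them, which is why these attacks are so cheap.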

Our audience is often CEOs and C-suites, without your depth of knowledge here, but I've listened to some of your talks, and you actually did a contrast and ended up with a human-centric conclusion. I'd love to hear you talk for another minute or so on the differences in the needs for deep learning and neural networks versus reinforcement learning.

Right, okay. So there are these various different fields of AI, historically. Neural nets and pattern recognition has been one big area, of which deep learning is now the hot part, and reinforcement learning has classically been seen as separate. I actually think you need all these different parts of AI to come together and feed into one large model. It's like the parable of the blind men and the elephant: they can't see, and each reaches out and feels a different part of the elephant. One is feeling an ear, another is feeling the tail, and they describe completely different things, but they're all some facet of one large thing. I feel a bit like that about human-like cognitive architectures currently: we have all these pieces, and we haven't yet found a way to make them talk to each other.

Something like reinforcement learning is absolutely critical when you have a human-like AI, or even an animal-like AI or a robot, because it has to be able to know the difference between what's good and what's bad when it chooses an action. Any time you have a system that chooses an action, which then goes on to affect its own state and the state of the environment, you have to have some kind of reinforcement learning loop in there, with a reward that specifies what's good and what's bad, so that the robot or the AI system knows how to choose actions in the future.
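Here is a minimal sketch of the reinforcement learning loop Suzanne describes: the agent chooses an action, the action changes its state, and a reward signal defines what was good or bad. The toy corridor world, reward values, and learning parameters are invented for illustration.

```python
# Minimal tabular Q-learning loop: act, change state, get reward, update.
import numpy as np

rng = np.random.default_rng(2)
n_states, actions = 5, (-1, +1)          # positions 0..4; move left or right
Q = np.zeros((n_states, len(actions)))   # action-value estimates

for episode in range(200):
    s = 2                                # start mid-corridor
    while 0 < s < n_states - 1:
        # mostly greedy, occasionally exploratory
        a = rng.integers(2) if rng.random() < 0.1 else int(Q[s].argmax())
        s2 = s + actions[a]              # the action changes the agent's state
        # reward is the only teacher: +1 at the right end, -1 at the left
        r = 1.0 if s2 == n_states - 1 else (-1.0 if s2 == 0 else -0.01)
        bootstrap = Q[s2].max() if 0 < s2 < n_states - 1 else 0.0
        Q[s, a] += 0.5 * (r + 0.9 * bootstrap - Q[s, a])
        s = s2

print(Q.round(2))   # the agent learns to prefer moving right, toward +1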

The reason you haven't seen that in the deep learning approaches is that we've been using those types of networks for different things: to identify things, to classify objects, to recommend things. Recommending is kind of like taking an action, but usually the action space is very, very small; you can either recommend something or not. That's not the same as a robot that can take many, many different actions.

Just to close, I'd love to get your thoughts on dealing with humanity a little bit, and maybe on what defines a black hat. I was watching some videos in which people were torturing these little dinosaur robots, and I was in mourning when hitchBOT hit the newspapers a while back: that robot hitchhiked all the way across Canada, and then in Philadelphia it got crushed. But this robot got across a country all by itself because people picked it up and were kind. So we had probably 80 to 90 percent kind people and 10 to 20 percent evil people.

What's your sense of what it's going to take to get people to embrace this new world, if you think that's a fair question? If you think it's a different question, you can answer that one instead.

Yes. So I think there's going to be a huge amount of disruption in the next couple of decades. There's going to be increasing automation, so even without AI, pure industrial automation is going to change a lot of the job structure and the socioeconomics we see, and when you add AI into the mix, that amplifies it. That's one whole issue. Then there's the issue that if we're going to be creating AIs that are more and more mind-like, more and more like what we would recognize as being intelligent, with thoughts and even consciousness, it's going to really challenge people to open up their definition of what it means for a thing to be alive, to have feelings, and to have consciousness. It's absolutely clear that AI systems such as the ones we have in our robots are already capable of feeling things like pleasure and pain, in a sense, because of the very way we define things like the reward in a reinforcement learning system: something that is damaging to the robot's body is interpreted as a pain signal. And that's necessary. It's not that we put these things in because they're interesting or fun; we think they're actually necessary for the algorithms to work. So the robots already have a rudimentary sense of pleasure and pain, and I don't think it takes too much imagination to extrapolate from that to the whole spectrum of feelings humans have, in service of, you know, trying to maximize a reward function.
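Schematically (a sketch of the idea, not Sanctuary's code), "pleasure" and "pain" can simply be terms in the reward function the robot's learning algorithm maximizes; the sensor names and weights below are invented:

```python
# "Pain" as reward shaping: damaging events enter the reward as large
# negative terms, task progress as positive ones. All values invented.
def reward(sensors: dict) -> float:
    """Combine task progress ('pleasure') and damage ('pain') into one scalar."""
    pleasure = 2.0 * sensors.get("task_progress", 0.0)    # e.g. moved toward goal
    pain = -5.0 * sensors.get("joint_overload", 0.0) \
           - 10.0 * sensors.get("collision_force", 0.0)   # damage hurts more
    return pleasure + pain

# A hard collision outweighs modest progress, so the agent learns to avoid it.
print(reward({"task_progress": 0.3, "collision_force": 0.2}))  # -> -1.4
```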

So I think people are going to have to figure out how to deal with this. What's going to happen is you'll get a bunch of people who empathize with robots and think, "because I'm talking to it and I'm empathetic, I can imagine this robot having these feelings, so, you know, I want to help or protect this thing." And you'll have another group of people who say, "Nope, it's nothing like a human. It's a machine. It doesn't feel. There's nothing going on inside it."

Wow. I knew I'd get a cool vision statement near the end of this, which was perfect. What was fascinating to me is that everyone knows reinforcement learning as one of the ways to do training right, but your answer was more precise: you tied reinforcement learning to pleasure and pain, and you quickly got yourself to this very interesting theoretical question about AI rights. This is great. So I guess I'm just going to ask for your prediction: where do you think we'll be in 20 years?

Predictions are always risky, but I think within 20 years we're going to have solved some of the integration problems I was talking about, like how to reconcile the neural-net, perception type of approaches with some of the more symbolic AI, and I think that's going to lead to things that start to demonstrate mind-like properties that are human-like. I don't think we'll absolutely, definitely have something that's indistinguishable from a human on that timescale, but I think we're going to have something that starts triggering questions for a large number of people. Some things already trigger those questions, like some of the robots you see on TV, and some people are already asking what this means. We're going to see that more and more, the more AI and robots get toward being like us. So I think in 20 years probably almost all of the population will be talking about this, at least in countries where this technology is highly visible.

Wow, this is great. Well, I think we should probably both keep our day jobs; I'm not sure we're going to be getting any calls from agents about our great acting skills at the beginning of this podcast.

But what a great discussion. I really enjoyed this, Suzanne. Thank you so much for being with us today.

It was a pleasure.

DESCRIPTION

Are we headed for a real-life version of Westworld? In this episode of the Impact Podcast, Jon Prial welcomes Suzanne Gildert, the Founder and CEO of Sanctuary AI, a startup that builds human-like robots called synths, or synthetic humans, complete with human-like bodies and minds. Together, they discuss robots and AI-powered software, and what it will take for them to become a part of business processes. They also cover learning and training AIs. 

You’ll hear about:

  • What it takes to make an effective AI
  • The right way to train an AI
  • The rise of AI ethics and AI rights

Who is Suzanne Gildert?

Suzanne is founder and CEO of Sanctuary AI, a company with a mission to create ultra human-like robots — “synths” — that are indistinguishable from us physically, cognitively and emotionally. Sanctuary is structured to explore both cutting-edge technology and the ethical issues that arise from creating human-like machines. The company strives to create a micro-society in which synths can develop and be granted a safe haven as they transition into full acceptance by our wider society.

Prior to Sanctuary, Suzanne founded Kindred Inc., an artificial intelligence and robotics company. She hand-built over 30 robots to demonstrate Kindred’s core technology concept of human-robot teleoperation for reinforcement learning. She grew the company to over 50 employees and opened offices in Vancouver, Toronto, and San Mateo. She helped raise over $50 million in venture funding for Kindred from top-tier investors including Eclipse, Google Ventures, First Round Capital and Data Collective.