AI applications for OSINT in defence

This is a podcast episode titled "AI applications for OSINT in defence". In this podcast Harry and Sean are joined by Dr Ingvild Bode to look at the application and challenges of AI use in weapons systems. Dr Ingvild Bode has spent the last year researching this subject for her most recent policy report, Loitering Munitions and Unpredictability: Autonomy in Weapon Systems and Challenges to Human Control (https://findresearcher.sdu.dk/ws/portalfiles/portal/231643063/Loitering_Munitions_Unpredictability_WEB.pdf), co-authored with Dr Tom Watts. During the podcast Harry, Sean and Dr Ingvild explore how AI is being used today to supplement or delegate not only motor skills but also cognitive skills. They also explore how AI plays a role in how decisions are made about specific aspects of the targeting process.

Speaker 1: Welcome to the World of Intelligence, a podcast for you to discover the latest analysis of global military and security trends within the open source defense intelligence community. Now onto the episode with your host, Harry Kemsley.

Harry Kemsley: Hello and welcome to this edition of World of Intelligence at Janes, with your host as usual, Harry Kemsley. And as always, my co-host, Sean Corbett. Hello Sean.

Sean Corbett: Hi Harry.

Harry Kemsley: Good to see you. So Sean, one key theme that has, I think, been just about the most consistent contender through many of our previous episodes of the podcast has been the relevance and use of artificial intelligence, AI, in the national security arena. But while we've touched on the subject quite frequently, you will remember of course last year with Keith Dear from Fujitsu, we had a great general introduction. It also came up, as I recall, with Di Cooke when we were looking at mis- and disinformation and how AI is used in the creation of and the detection of deepfakes. We haven't looked at many specific applications and challenges of AI use. So while it's not completely solely focused on open source, I thought it'd be great to take forward the idea of looking at AI use in national security scenarios, specifically how we can use AI in weapons systems. It's one of those things that people always go to. You mention AI in national security, and the first thing they talk about is weapons use. So I'm absolutely delighted to be able to introduce our guest today, Ingvild Bode, who has spent the last year studying exactly that. Hello and welcome, Ingvild.

Ingvild Bode: Hello. Thank you so much for inviting me.

Harry Kemsley: It's a pleasure really. Dr. Ingvild Bode is Associate Professor at the Centre for War Studies, University of Southern Denmark. She's the principal investigator of the European Research Council funded project AutoNorms, which investigates how practices related to autonomous and AI technologies in weapons systems change international norms. Ingvild also serves as the co-chair of the Institute of Electrical and Electronics Engineers, or IEEE, research group on issues of AI and autonomy in defense systems. Her work has been widely published in the form of journal articles, books with leading publishers and policy reports. Her most recent policy report, Loitering Munitions and Unpredictability: Autonomy in Weapon Systems and Challenges to Human Control, co-authored with Dr. Tom Watts, offers a detailed examination of one particular type of weapon system integrating autonomous and AI technologies in targeting. Clearly very relevant to today's topic for discussion. Ingvild, with that background and experience you've got, I think it's probably worth starting, for the listener, with an understanding of what you understand to be, quote, "the weapon system" and the targeting processes in it. So maybe we'll start there. Let's have a look at what that means to you and we'll go from there.

Ingvild Bode: Yes, thanks for that. So I think in terms of the weapon system, what we're talking about is not just a single weapon, but a weapon as a component of a wider system that also integrates control and operation elements. So I think this is the basic distinction that I would like to make. And in the context of targeting, I think we quite frequently associate targeting specifically with the use of force decision at the end. So the point of attack, or a different form of kinetic action. And this is of course a very decisive phase, but it only represents the tail end of the targeting process in military terms, so what militaries undertake. And I think you'll know much more about the military components of this process, but from an academic viewpoint we typically distinguish between the strategic and the tactical level of targeting. So we can think about anything from development or validation of targets, which takes place before, of course, particular weapon systems are even considered. And then I think in the context of AI and weapon systems, the focus is on the tactical level of targeting, which includes such steps as identifying targets, tracking them, prioritizing them, and also selecting them. And then at the very end also the engaging of targets. So I think we should think about targeting as including this entire process of preparing a target and also, of course, the tail end of using force.
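
To make the sequence Ingvild outlines concrete, here is a minimal, purely illustrative Python sketch of those tactical targeting stages. The class names, fields and the idea of "advancing" a track are hypothetical conveniences for the example, not a description of any real system.

```python
# Illustrative only: a toy model of the tactical targeting steps described
# above (identify, track, prioritise, select, engage). All names are
# hypothetical and nothing here reflects any military system's actual logic.
from dataclasses import dataclass
from enum import Enum, auto


class Stage(Enum):
    IDENTIFY = auto()
    TRACK = auto()
    PRIORITISE = auto()
    SELECT = auto()
    ENGAGE = auto()


@dataclass
class Target:
    track_id: str
    object_class: str          # e.g. "tank", "radar"
    stage: Stage = Stage.IDENTIFY


def advance(target: Target) -> Target:
    """Move a target one step through the (simplified) tactical chain."""
    order = list(Stage)
    idx = order.index(target.stage)
    if idx < len(order) - 1:
        target.stage = order[idx + 1]
    return target


if __name__ == "__main__":
    t = Target(track_id="T-001", object_class="radar")
    while t.stage is not Stage.ENGAGE:
        t = advance(t)
        print(t.track_id, t.stage.name)
```

The only point the sketch carries is that engagement sits at the end of a longer chain of steps, which is where the rest of the conversation focuses.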

Harry Kemsley: Yeah, perfect. So Sean, you, I know, have been involved in targeting in many different arenas and different war zones as well. And as Ingvild just said, the targeting process, the weapons use, is actually an end-to-end process: preparation, execution and indeed post-use of the weapon to see what the effect has been. How would you characterize, from what you've said before, Sean, how AI has helped us, generally speaking, in the targeting process? What are the sort of things we've picked up on in the past? We've talked about, for example, scaling, data mining, but what else have we seen in AI use in that weapon system spectrum?

Sean Corbett: So if you look at the wider interpretation I use of targeting, traditionally people think it's dropping bombs on things, objects and people. But actually targeting is all about applying a capability that has an effect on an adversary to change their behavior. So it is a lot wider than just that, albeit the sharp end is clearly still really important in terms of doing that. So exactly as Ingvild said, it is a process that starts really early on in terms of the development of what I would call a target systems analysis. So what is the effect we're trying to have, and what are the components of the thing we're trying to impact that would be more effective to actually target, if you like? A great example of that would be what's going on right now with the Houthis. Yes, of course the launchers are at the sharp end, but really they're almost incidental. What you really should be talking about, and I'm assuming they are, is the weapon stocks, the radars that actually do the location of the equipment, the command and control, all that sort of stuff. So the AI piece comes in by understanding the complexity, in my view anyway, understanding the complexity of what it is that we can actually impact that has the effect on the output.

Harry Kemsley: So one of the things that we've picked up on previously Ingvild with artificial intelligence is its ability to do things that humans can't do as well. So dealing with the scale, as I've mentioned, the ability to resolve targets in what otherwise might be a very noisy picture. What are the kind of uses that you've seen and studied in the use of AI in weapons and in the targeting process?

Ingvild Bode: Yes, I think that we can think about AI being used both to supplement or to delegate motor skills and also cognitive skills. And I think that distinction is quite an interesting one because maybe earlier on some AI technologies, and also predecessor technologies such as autonomy and automation, have been used to sustain mobility functions in weapon systems such as navigation, takeoff and landing, these kinds of issues. And this is also still an area where AI is used. So these are really important skills, but I think what most people are interested in now, and what you also refer to in your question, Harry, is this delegation of cognitive skills. So where AI really plays a role in how decisions are made about specific aspects of the targeting process. And I think here it's really about militaries associating AI technologies with a speedier processing of information. So it's about speed and also maybe the scale at which information can be processed by AI technologies, which is not comparable to a human style of processing information. But I really feel that this emphasis on speed is really what triggers the use of AI, so also this kind of hope that comes through, because I guess it's always about speeding up your decision-making process vis-à-vis what your adversary might be doing. And I think in connection to that, there's also the idea that some, especially Western, militaries are putting forward, arguing that using AI in the targeting process also has humanitarian benefits. So basically saying that such technologies could also potentially reduce the risk of harm to civilians or civilian objects. I have to say that I think that is a very valuable claim, but at the moment it remains mostly a claim rather than something that is tested.

Harry Kemsley: I'd definitely like to talk about that, because it's counterintuitive. The very first thing we tend to think when we think about AI use in weapons is that it will be cold-hearted, ruthless use of weapons, not actually what you are suggesting has been claimed, that it could actually be used to reduce the harm to collateral, otherwise known as civilian populations and buildings. Sean, I just want to come back to you on that point about scale and speed, because of course decision support isn't just about how quickly you can support, but how well you can support, of course. And perhaps we'll come back in a second to the quality of the support that would be provided from an AI-based system vice an alternative that'd be less advanced. But Sean, you've been in situations, I know, many times where information has been maybe late, maybe not as good as it should have been. So quality and time, these are quite interesting concepts, aren't they, for us to consider in the weapons system and the targeting process, Sean?

Sean Corbett: Yeah, I couldn't agree more with Ingvild about the speed element. I mean, the way that we do targeting in the West is laudable in many respects because we are very, very careful in what we do, but it is incredibly laborious and it takes a lot of time. Now the concern with that, and I was just listening to some radio earlier actually, is where we forecast what we're going to do, and then by the time we actually do it, the bad guys have decided, "All right, we're about to be targeted," and hidden all this stuff away. So that latency is really, really important. So AI for me can, exactly as Ingvild said, speed up the process, but speed it up in a very responsible way. So for instance, one of the things that takes the time, certainly at the operational level anyway, is doing collateral damage assessment before you actually put any effect on a target. And that could be anything from, well, if you take out a generator of a hospital, you might hurt people too, to is this blast going to hurt the civilian population, et cetera, et cetera. Now there's a lot of software in there that takes a lot of time to run, and certainly with the human in the loop you could automate that a lot more effectively and go through that process incredibly quickly. Equally, the discrimination of the targets, as I mentioned before: if you can apply the right algorithms in advance that say, "Right, those are the key components, off you go," having got that sort of scale of data that says, "Yes, but the collateral damage is here, don't do that." If you can automate that, you get so far inside, and I can't believe I'm going to say this, the OODA loop of the enemy, back to the good old military doctrine. Then you've got far more chance of having the impact and the effect you really want.
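
As a purely illustrative sketch of the kind of collateral damage screening Sean describes automating, the toy function below checks whether any protected object sits inside an estimated blast radius and, if so, flags the engagement for human authorization. The object list, coordinates and radius are invented for the example; real collateral damage estimation is far more involved.

```python
# Illustrative only: a toy collateral-damage screening step of the kind
# discussed above. Distances, the radius and the object list are invented;
# a non-empty result would route the decision to a human, not clear it.
from dataclasses import dataclass
from math import hypot


@dataclass
class ProtectedObject:
    name: str
    x: float  # metres, in some local grid
    y: float


def objects_at_risk(aimpoint: tuple[float, float],
                    blast_radius_m: float,
                    protected: list[ProtectedObject]) -> list[str]:
    """Return the protected objects falling inside the estimated blast radius."""
    ax, ay = aimpoint
    return [p.name for p in protected
            if hypot(p.x - ax, p.y - ay) <= blast_radius_m]


if __name__ == "__main__":
    objects = [ProtectedObject("hospital generator", 120.0, 40.0),
               ProtectedObject("school", 800.0, 950.0)]
    flagged = objects_at_risk((100.0, 60.0), 250.0, objects)
    print("Hold for human review:", flagged if flagged else "none")
```

The human-in-the-loop point is carried by what the output is used for: the automation only surfaces the conflict quickly, it does not make the engagement decision.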

Harry Kemsley: Let me just come back then to this point of quality, Ingvild, that we're discussing. Have you got any examples of where AI has managed to perhaps resolve the target or avoid a targeting process that was going to cause unnecessary or any undue damage or unnecessary effects? Have you seen any evidence of that at all in your work?

Ingvild Bode: I have to say no, because first of all it has to be said that, for example, we're talking about target engagement, right? So many of the weapon systems that we see in use by militaries can operate according to different modes. That means they give the human in the loop more or less of a role to play. So sometimes the human has to authorize each single target. In some cases, for example, the human might be given a time-restricted veto, right? So we know that there are these different modes, but we don't know in which mode the systems are used. And as an observer, based on the public information available, we cannot make that distinction, right? I think in terms of the scale of use, if we look at, for example, the current use of AI-generated targets in Gaza, I know this is a very tricky subject, but it's something that we know a bit more about because there has been reporting about it, right? So we know that the vast majority of the targets that are attacked by Israel in the Gaza Strip are AI-generated. So at least this is what the reports tell us, and also what individual sources claim. I mean, you guys work a lot on OSINT, so there's always this question, which sources can we trust? Of course we can try to triangulate sources as much as possible, but I think in general, as somebody who doesn't have privileged access and is relying on OSINT entirely, there's always a limit to how far you can verify that. So we have that, and at the same time we have quite a large scale of destruction of civilian targets in Gaza and also a large death toll of civilians, quite frankly. And that puts me in a certain position of disquiet vis-à-vis this claim. So there are also voices saying that, "Okay, there's a temptation maybe to use AI weapons in populated areas." Because there is this assumption that maybe they can be more precise, so they might actually increase the use of these kinds of weapon systems in populated areas in some contexts, which is what we don't want.
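
The different modes of human control Ingvild mentions (authorizing each target, a time-restricted veto, or a latent fully autonomous mode) can be sketched in a few lines. Everything below, including the mode names, is a hypothetical illustration rather than a description of any particular system.

```python
# Illustrative only: toy human-control modes for an autonomous weapon system,
# as described in the conversation. Names and behaviour are assumptions.
from enum import Enum, auto


class ControlMode(Enum):
    AUTHORISE_EACH = auto()   # human must positively approve every target
    TIMED_VETO = auto()       # engagement proceeds unless vetoed within a window
    AUTONOMOUS = auto()       # latent capability: no human in the loop


def may_engage(mode: ControlMode,
               operator_approved: bool = False,
               operator_vetoed_in_time: bool = False) -> bool:
    """Return whether this (toy) policy lets an engagement proceed."""
    if mode is ControlMode.AUTHORISE_EACH:
        return operator_approved
    if mode is ControlMode.TIMED_VETO:
        return not operator_vetoed_in_time
    return True  # AUTONOMOUS: nothing gates the engagement


if __name__ == "__main__":
    print(may_engage(ControlMode.AUTHORISE_EACH))                            # False: no approval given
    print(may_engage(ControlMode.TIMED_VETO, operator_vetoed_in_time=True))  # False: vetoed in time
    print(may_engage(ControlMode.AUTONOMOUS))                                # True: no gate at all
```

Ingvild's observability point is visible even in the toy: from the outside, the same system could be running in any of these modes, and public reporting rarely says which.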

Harry Kemsley: Let's just touch a bit further on that, then, in terms of the limitations and challenges that are implied by that case study. The application of AI, as you say, presumes or assumes that there will be greater accuracy, faster response times. But is there actually a limitation and challenge that we need to address in terms of AI and weapon systems, Ingvild?

Ingvild Bode: Yes, I think there are many practical technological challenges that relate to AI systems in general. So they're not specific to AI systems in the military domain, but ones that we also encounter in using other AI systems. One of these is, for example, that no AI system can ever be tested for every single use case. So in the testing period, you can never be sure that you have tested it for everything the system might encounter when used. That means there will always be failures and malfunctions that the system has not been tested for. And I think in some cases some of these weaknesses of the system can come out in the testing and evaluation process. If that is the case, of course it's pivotal that these weaknesses become known to whoever is using the system.

Harry Kemsley: Yes.

Ingvild Bode: I think then there's the whole training data issue. So I'm sure you're familiar with the garbage in, garbage out model: the systems are only ever as good as the training data that they receive. If the training data is of poor quality or inappropriate for the context of use, then that will also hinder the performance of the system. Then there's also what we see in more and more discussion, around various types of AI in weapon systems interacting with each other. So there's that as a potential to consider, even if you just think about one side using AI in weapon systems, right? If you think about all of these different weapon systems that might be used on the battlefield, integrating AI and then interacting with each other, they have to be coordinated and interoperable. So this again is a potential source of unanticipated outcomes.

Harry Kemsley: I'm going to step around the debate about what incentive the corporate world has to sort these problems out when in fact they can probably sell them on the basis of claims of all kinds of incredible capabilities of AI. And I'm sure they will tell me that they are thinking about the integration of AI systems along with other weapon systems and so on to make the battlefield entirely seamless. I say that with a smile on my face because I've heard that claim many, many times. Sean, let me bring the conversation back to you though, in terms of dealing with the unexpected, the untrained. I suppose when you think about your experience, no matter how many times you've been in a classroom to be taught about targeting, I doubt you ever saw a scenario in the real world that was exactly in accordance with the doctrine. It was almost always off script.

Sean Corbett: Yeah, absolutely. And I like the way that Ingvild put it; this is where the cognitive piece comes in, if you like. You can have as much training data as you like, you can go through scenarios, but when you're talking about potentially lives at risk, there is that ethical piece, if you like. But it's more than ethical, it's actually understanding the second and third order consequences of what you're doing. There was one instance, some time ago now, where we were trying to degrade the communications of a certain nation and I wanted to leave some of those communications up because we were getting some really, really good feed from them. But the powers that be at that time decided, "No, no, no, it is far better to be seen to be blowing stuff up and stopping people communicating." And so I was overruled, we did that anyway, and lo and behold, we lost so much of our intelligence for the next day. Funny old thing. So in those environments you've got to have a... and I don't know how advanced the AI is getting in terms of the cognitive piece, to say, "Okay, well what are the second and third order impacts, rather than just having an impact on that specific target?" And then that leads to all sorts of discussions about how you prioritize, how you discriminate between different targets, particularly if you've got multiple targets and only so much effect you can have on them. And then you do start getting into some quite difficult cognitive and ethical issues. So it is a really complex area, and I don't think, and Ingvild may well correct me here, that the AI is in a state yet where it can do true autonomy and necessarily come out with a better outcome than we would with the human in the loop. And of course someone's got to be accountable as well.

Harry Kemsley: Yeah, I'm going to bring the conversation round to this ethical part, in terms of the human control in the loop of the weapon system, the targeting process. Before I do, though, let me just go one more time around this conversation about the efficacy of AI in weapon systems, because I think what I've heard, and I just want to try and summarize the point and ask you to confirm or deny, Ingvild, is that AI is still good at the things that it can do better than humans in terms of quantity of data, analysis, presenting options, finding targets perhaps. Or even potentially, although not proven by experience, it could help us avoid unnecessary action. But it seems to me there is still a lot that has to have a human in the loop, a lot of aspects of the overall weapon system targeting process that have to have human engagement. Is that a fair summary?

Ingvild Bode: Yes. I mean, I think there are both legal and ethical arguments for the human in the loop. So legally, many legal experts argue that you cannot really use AI technologies in weapons systems in compliance with, for example, international humanitarian law without a human in the loop. Because applying this type of law, as you will know, really depends on the particular context. So it's very context dependent. It requires this kind of case-by-case deliberative human judgment, such as the distinction between who's a combatant and who's a civilian. I mean, I think there's a 200-page manual that the ICRC published at some point on how to distinguish that. But it seems to me, also in conversations I had with former military personnel, that it's very much a judgment decision that you would have to make. So I think this is really where I see human judgment being pivotal. And then there's the broader ethical question. So one is the legal point, and the other is the broader ethical point of to what extent we want to delegate away decision making on the use of force to AI technologies. And I think that's more fundamental. So there are lots of issues here around, for example, moral agency, right? Do we lose moral agency when we do that? Because there are ethicists arguing, I think very forcefully, that machines or systems cannot exercise moral agency. And, for example, although I don't want to say the ideal typical soldier always does this, human soldiers have the capacity to flexibly and adaptably engage in decision making, and also to exercise potential restraint and exercise these judgments. The idea with an AI technology, of course, is that it would behave in the same way under the same circumstances all the time. So there's no room for this kind of context dependent movement. So I think that's why it's quite notable that many states, and I've been following these debates about how to potentially regulate or govern the use of AI in the military domain, agree on this principle of safeguarding human agency, control or judgment in warfare in some shape or form. I mean, there are disagreements about what precisely the quality of that should be, but I think the principle is very much supported by many voices.

Harry Kemsley: Sean, I'll come to you in just a second. I want to just go one step further on this moral agency versus military efficacy, because I think it's fair to say, and I'll give a quick war story to the point you just made a second ago, Ingvild, from my own experience. But Sean, after I've done that, I'll come back to you in terms of this moral agency versus military efficacy. How worried are we that we may end up fighting other organizations that are less concerned about moral agency, and does that give them a military advantage? The example I was going to use is a first-hand experience I had in another part of the world on a patrol. A young man with a couple of his friends, they were probably teenagers, ran towards the patrol group that I was with carrying what was a metal toy gun that looked astonishingly similar to what would otherwise have been a weapon in the hands of an enemy combatant. And the soldiers close to me obviously noticed this young man running towards them, but there was a definite and very clear decision made very quickly that this young man was being an idiot running towards the soldiers with a toy gun, and they didn't shoot him. Actually, if you read the rules of engagement, that would have been considered a hostile act. Now whether the decision to shoot or not was the right one is another conversation entirely. The fact is they made the judgment in that moment not to do so. I think an AI system with radar detecting metal and shapes could well have decided that that was a hostile act and reacted differently, to your point earlier. So judgment, yeah, fascinating. Sean, pivoting across to you, the impossible question. So moral agency versus military advantage in a thousand words or less, go.

Sean Corbett: I'm going to kind of go around that from the example you made, and then we'll pivot to it, because I was going to play devil's advocate slightly and say that actually there is a counter argument. I have seen an event where, without going into the detail, on the gun camera it looked like there was a military vehicle. It really looked like it was a military vehicle, and it was acted on, and it turned out it was a tractor. Now you could argue that AI has now developed enough that it would have discriminated the tractor and gone, "Don't take this out." So there is an argument, and this is where the AI comes in, and it does get to the crux of the matter in a second. There's an argument that says that actually, if you get the right algorithms and they're properly trained and all the rest of it, they can assimilate so much more data so much more quickly than the human that it's going to be more right. Now what I don't know, and this gets to your question, is what is enough? How do you work out how autonomous you should be? Because my view, and you've heard me say this 20 times, is that at the end of the day, war is a bad business and you've got to have somebody that's taking responsibility for it. Whether that is a genuine, "Okay, we've done the checks and balances, we've got it wrong this time, bad things happen in war, that's its nature." Of course it is. But you have to accept that, because somebody has to both have the authority and also take that responsibility. Now that should be collective responsibility. If somebody has done things for the best reasons, they've been through the checks and balances, they've ticked all the boxes and then they've made that cognitive decision, "No, I think this is right," and it turns out not to be, then that's something that you have to accept. If you are looking at straight-through AI, and we might get onto this at the end because it's almost my strap line actually, where everything is totally autonomous, then how can you go back and ask, well, what happened there? Why did it go wrong? Or indeed, did it go wrong, or has something happened in terms of the effect that we wanted that we haven't understood because we haven't been able to do the battle damage assessment, and this is quite a regular thing, that says, actually we had this impact? And it may be much later that we realize, "Oh, that's why the AI did that." Now we're getting into quite a difficult world there.

Harry Kemsley: Yeah. And Ingvild, how do we bridge this gulf then? We've got this belief that AI can help, that there are things that it could or should be able to do better than the human. The example Sean just gave there with the tractor is an example of that, potentially. And on the flip side there's the moral agency part, and how do we ensure it? Where do we go with this? How do we get to a place where we can, in quotes, "safely use AI"? What are the necessary steps you've got to go through?

Ingvild Bode: I mean, I think there are various potential steps. One is, for example, to consider, "Okay, what type of target profiles might we think AI could be better at identifying than others?" So, for example, there are people saying that we should potentially not use AI for anti-personnel target profiles, right? Because we know from our experience with using AI technologies that they're not very good at making these kinds of judgment decisions between civilian and combatant. But if you're talking about, for example, military objectives by nature, such as tanks, then I think the reasoning could be much more straightforward. And this is where we see the performance of AI technology being quite high, in object recognition. If you recognize the shape of a particular air defense system, for example, it's quite clear that this could only be a military object, right? And then I've seen the conversation moving quite a lot. I think initially there was this idea that AI would completely replace humans, right? Also this fear, often connected to this idea of killer robots or humanoid killing machines, which, surprisingly, is still a sticky image somehow. And I think now the terminology that many militaries seem to be using is this idea of human machine teaming. So there has to be a part for both, in a sense. And then it becomes, I think, a question of how to organize this situation of human machine interaction that will become a much more frequent part of how soldiers do their jobs. So then I think it comes down to looking really at how humans interact with AI technologies. What are the potential pitfalls of that? So, for example, automation bias, something that we also know from our own interaction even with just day-to-day technologies, where you tend to trust the output provided by AI technology more than maybe your own critical deliberative reasoning skills. I think a lot of this conversation will also be about how we ensure that human agency is not completely degraded by this, but that we retain this human agency in interacting with AI technologies.
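
As a toy illustration of the target-profile idea Ingvild raises, the sketch below only lets detections of certain "objects by nature" be proposed to an operator, and only above a confidence threshold, while anti-personnel profiles always go to human judgment. The class list, threshold and labels are invented for the example.

```python
# Illustrative only: a hypothetical target-profile gate. Only certain object
# classes, recognised with high confidence, may even be proposed to a human
# operator; anti-personnel profiles are excluded from automation entirely.
AUTO_PROPOSABLE_CLASSES = {"tank", "air_defence_system"}  # "military objectives by nature"
CONFIDENCE_THRESHOLD = 0.95


def disposition(object_class: str, confidence: float) -> str:
    """Decide how a detection may be handled under this toy policy."""
    if object_class == "person":
        return "human judgment required"  # never automate anti-personnel profiles
    if object_class in AUTO_PROPOSABLE_CLASSES and confidence >= CONFIDENCE_THRESHOLD:
        return "propose to operator"      # still a proposal, not an engagement
    return "defer to human review"


if __name__ == "__main__":
    for detection in [("tank", 0.97), ("tank", 0.70), ("person", 0.99), ("truck", 0.96)]:
        print(detection, "->", disposition(*detection))
```

Even in this toy form, the output is only ever a recommendation, which is where the automation-bias worry comes in: the operator still has to treat "propose to operator" as a prompt for judgment rather than an answer.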

Harry Kemsley: I think that idea of human machine teaming, Sean, we've talked about in the past, haven't we, with regard to just the raw intelligence process of getting the machine to do things that are dull and very, very time-consuming. Scanning billions of websites and blogs and so on to find relevant information, for example; that's something we've certainly talked about in the past. Ingvild, I'm going to come back to you as time now starts to evaporate. Sadly, as always, we run out of time well before we run out of things to talk about. What do you see as being the future trends with AI in weapon systems? We've talked about this human machine teaming. You've mentioned the idea that some things are clearly much more suitable for AI systems than others, but where do you see the trends going with AI in weapon systems?

Ingvild Bode: Yeah, I mean, I think the trend really goes towards this broader understanding of the targeting process and moving away from only thinking about the use of AI technologies at the tail end of the process, to everything in the big area of decision support. So I think even just looking at some of the ways that states have been talking about this publicly, the focus is not anymore on AI in weapon systems but AI in the military domain. And that can mean a whole number of things, logistics for example, but I'm just focusing on decision support. I think this is where we've seen a lot of the development on the technology side. So the spread of large language models such as ChatGPT, and how they're being used to filter data, select information, aggregate information, identify patterns, and then provide these actionable outputs that militaries are looking for. And I think if we look at the news reporting about AI in the military domain, this is where most of it is. So, for example, in the context of the Russian invasion of Ukraine, you had a lot of reports about the tech company Palantir's provision of AI-based software, right? I mean, I think the CEO has even claimed that his company's software is responsible for most of the targeting done in Ukraine. Of course we can't verify this, but we can see that other companies have taken notice. So just last week there were news reports about OpenAI deleting language prohibiting the use of its technology for military purposes from its usage policy. So I think we will see a lot of movement in terms of other tech companies trying to get into that particular area of providing AI systems for decision support, and especially also, I think, quite small companies that are very specialized in that. I think that's quite interesting when considering potential regulation or governance from a state side, because we are moving away, I mean, we are already away, from a terrain where it's mostly established defense contractors developing these technologies. We were already far away from that in the context of AI in weapon systems, in a sense, because innovation happens among these big tech companies who are also all involved in defense contracts. But now, to add to the mix, we have all of these little defense AI startups that also seek to get access or seek contracts.

Harry Kemsley: And let's hope they're all incentivized not just by the dollar signs in front of them, but also by doing the right thing. We'll come back to that in another podcast. So Ingvild, what are some of the main trends you've seen in the use of AI in weapon systems through your studies?

Ingvild Bode: I mean, I think it's worth noting that you see AI in weapon systems across all the different domains, but in some domains it's more prominent than in others. So I think especially in the air domain we saw a lot of recent uses. I think many readers, sorry, many listeners will be familiar with loitering munitions, also called one-way attack drones; there are different terms for that particular type of technology. But these are fairly small systems that are integrating forms of autonomy and AI into target detection and also attack. And they can loiter over a particular geographical area trying to identify targets, and then once they identify the targets, they launch themselves into them and destroy on impact. And I think it's quite an interesting system, in a sense, because especially in Ukraine we've seen quite a lot of these systems in use on both sides, for example on the Ukrainian side supplied by the US and also by other allies, Australia for example. But Ukraine also has developed its own homegrown systems in this area. And I think it's quite an interesting system, or set of systems, to examine because it gives us an insight into how this has developed. Because initially these systems were only used to attack radars, so a very particular type of target, and now they can recognize various types of military objects, but also personnel. So they are also used against a wider range of target profiles. And actually the role of human control at the specific point of using these systems is also a bit uncertain. Because on the one hand, most of the manufacturers of these systems very clearly state there is a human in the loop, but the systems also seem to have this latent capability of functioning completely autonomously. And some Ukrainian military commanders have also claimed that they've used such systems autonomously.
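
The loiter-detect-attack cycle Ingvild describes can be summarized as a very small state machine. The sketch below is purely illustrative: detections are faked, the recognizer is a placeholder, and the "engage" step just prints, but it shows where the human-control question from earlier sits in the cycle.

```python
# Illustrative only: a toy loiter -> detect -> identify -> engage cycle for a
# loitering munition, as described above. All data and functions are fake;
# require_human_ok stands in for whatever control mode is actually configured.
import random

TARGET_CLASSES = {"radar", "tank"}   # hypothetical recognisable target profiles


def detect() -> str | None:
    """Placeholder sensor sweep: occasionally 'sees' something."""
    return random.choice([None, None, "radar", "tank", "truck"])


def require_human_ok(object_class: str) -> bool:
    """Stand-in for the human-in-the-loop check; here it just asks on stdin."""
    return input(f"Engage {object_class}? [y/N] ").strip().lower() == "y"


def loiter(max_sweeps: int = 10) -> None:
    for sweep in range(max_sweeps):
        seen = detect()
        if seen is None or seen not in TARGET_CLASSES:
            continue                      # nothing recognisable: keep loitering
        if require_human_ok(seen):
            print(f"Sweep {sweep}: engaging {seen}")
        else:
            print(f"Sweep {sweep}: {seen} held by operator")
        return
    print("No target found; loiter time expired")


if __name__ == "__main__":
    loiter()
```

The uncertainty Ingvild points to is exactly whether something like `require_human_ok` is really consulted in every configuration of the fielded systems, or whether a latent autonomous mode bypasses it.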

Harry Kemsley: So Ingvild, in a moment, I'm going to come back to you to ask the question I always ask at the end of podcasts about what's the one thing you'd like the audience to take away. But before I do that, Sean, I'm just going to throw a small nuclear device at you in terms of the question that is impossible to answer. How do you feel about the idea of AI getting involved in your decision support? You've been a decision maker, you've had support better or worse, and here I'm standing in front of you with a bunch of boxes marked AI and I'm going to provide you the answers. How are you feeling about that?

Sean Corbett: I guess the answer on that is it depends-

Harry Kemsley: Good answer.

Sean Corbett: ...by experience. Because I would be very comfortable if I am confident that the algorithms in there are correct and if I understand the environment which I'm in, but I'd still want to run a check and balance over it. You asked what the trends were? Well, the trends are, again, just to back up what Ingvild said, but looking at the tactical level, one of the future advantages, in fact we're seeing it now actually, is more autonomy on the battlefield. If you've got a drone that can automatically identify, that's a T72, T72 equals bad guy equals drop something on it. Now there are issues with that just in terms of loiter time and all the rest of it, but that's a great scenario and could be used. But what happens if your own organization may have T72s as well? So this is where the human has to stay in the loop, because they're like, yeah, these are Russian T72s, but we've just captured three of those and these are in the wrong place. Now how do you get the confidence in the AI to say, "Yeah, don't worry, it will discriminate that and decide not to"? It's easier if you've got Western versus Russian equipment and all the rest of it, but even then it gets slightly uncomfortable. Now, my lack of comfort really is because I would use AI if it helps me speed up the process, if it delivers the effects I want in the time I want, because so much of the targeting I've done has been, "Yep, we hit the target. Yeah, it blew it up or did whatever, but did we have the overall effect, back to my effects-based targeting?" No, it didn't at all, for lots of different reasons. So if it will help that, fine, but my discomfort is that if we get too autonomous, when does it stop, that we just go, "Right, we let the killer robots just do war"? Now, does that make war more likely or less likely? Because you take that peril out of the equation, and that's where you touched on it right at the start: we have a very ethical approach to warfare. Other adversaries don't necessarily do that. So if we're putting checks and balances into the algorithms and letting the AI do this but not do that, and the bad guys aren't, then we will lose because they will have an advantage straight away, and then we can adapt. If we are at a war of existential threat to our nations or NATO, whatever, then I suspect that calculus would change. But if we've already got the autonomous systems with the algorithms that we've put in there, and they're going, "No, I'm not doing that," "But if you don't do that, we're all going to die," type thing. I'm exaggerating for effect, but you can tie yourselves in knots pretty quickly.
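
Sean's captured-T72 example is essentially a deconfliction check layered on top of recognition: the vehicle type alone is not enough, and the detection has to be compared against known friendly or captured equipment. The toy sketch below illustrates that idea; all positions, the radius and the policy are invented for the example.

```python
# Illustrative only: a hypothetical deconfliction step. Recognising a T-72 is
# not enough; detections near known friendly/captured vehicles go to a human.
from math import hypot

# Invented positions of captured or friendly-held T-72s (x, y in metres)
KNOWN_FRIENDLY_T72_POSITIONS = [(1200.0, 300.0), (1250.0, 340.0)]
DECONFLICTION_RADIUS_M = 500.0


def needs_human_check(detection_xy: tuple[float, float], object_class: str) -> bool:
    """Flag a detection for human review under this toy policy."""
    if object_class != "T-72":
        return True  # anything else is outside the toy policy: always ask a human
    dx, dy = detection_xy
    return any(hypot(dx - fx, dy - fy) <= DECONFLICTION_RADIUS_M
               for fx, fy in KNOWN_FRIENDLY_T72_POSITIONS)


if __name__ == "__main__":
    print(needs_human_check((1190.0, 320.0), "T-72"))   # near captured vehicles -> True
    print(needs_human_check((5000.0, 5000.0), "T-72"))  # clear of them -> False
```

The point of the example is Sean's: recognition can be automated, but the contextual "these are in the wrong place" judgment still needs someone, or something, holding a much wider picture.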

Harry Kemsley: Well, I do remember reading about a study in the US Air Force where they had top gun pilots in simulators fighting against AI adversaries. And eventually the AI was winning because it had no fear of death; it was flying its F16 directly at the human pilot in their F16, who was clearly, at the last minute, bailing out of the head-to-head, which the AI wasn't, because it knew the human would. But anyway, we get into the realms of speculation very quickly. Ingvild, what do you want the audience to take away as your one final thought? Sean, I'm going to come to you in just a second.

Ingvild Bode: Yes. I think I would like to close by saying that, first of all, AI in weapon systems is already here. It's not a science fiction thing anymore. We see its use in various conflicts, and that use is increasing. And I think the second part is that what we also see is not a replacement of humans by AI technologies; AI is not replacing the human but rather interacting with humans, and this is really something we should pay greater attention to. And I think also that we really see an area where there are no specific rules or principles to govern anything that militaries do at the moment. So I think what we also need are clear safeguards and principles about use contexts, about target profiles, about durations of use. All these kinds of concerns, I think, have to be put out there, also in terms of how we ensure that rather than completely degrading human agency or human skills in warfare, these are safeguarded and maintained.

Harry Kemsley: Thank you very much. Sean?

Sean Corbett: Pretty much what Ingvild said, actually: we've got to have as much autonomy as we possibly can to both speed up the process and deliver the effect that we want, but we have to have those checks and balances in place to make sure that this doesn't run away and get completely autonomous, which means, as we just said, human machine teaming. And we're not there yet. There's a long way still to go, I think, before we are very comfortable in terms of how we apply AI to any military situation.

Harry Kemsley: Yeah. I think I'm going to sort of straddle both, actually, Ingvild and Sean. I'm quite taken by this point about moral agency and the need for us to understand what that really means against deliberate, standardized and stable things like process. If it's a tank, and you're trying to identify a tank in a battlefield so that it's the target you're looking for, that's one thing. If it's a group of people behaving in a certain way in a certain part of a town you're trying to find, that's a different situation. And I think we need to be very clear about that. To the point that you made, Ingvild, moral agency and the use of AI, that's probably the most important takeaway for me, one that I'll want to ponder on. But as ever, time has evaporated on us. A very, very interesting conversation. And I know that I say this at the end of every episode, especially when we have guests, but there is so much more I'd like to discuss around what we've just been going through. Ingvild, thank you so much for bringing your expertise and experience to this conversation. It has been fascinating. Sean, as ever, thank you for your counterpoint and the occasional rough diamond comment, though you haven't thrown many of those in recently; we'll have to see if that's because the weather's been keeping you quiet. But in all seriousness, if any of the listeners have any questions on this or any other podcast that we do, please let us know and we'll happily pick them up. We have engaged with one or two listeners that have come forward with some ideas for things for us to discuss, and I'm happy to do so. And if anything you've heard today generates questions you'd like to discuss, let us know and we'll see if we can pick it up. Let me finish where I started. Ingvild, thank you so much for taking the time to speak with us today. A really, really interesting conversation, and I'm sure one that we will need to revisit as AI becomes a bigger and bigger part of what we're doing. Thank you so much.

Ingvild Bode: Thank you.

Speaker 1: Thanks for joining us this week on the World of Intelligence. Make sure to visit our website, janes.com/podcast, where you can subscribe to the show on Apple Podcasts, Spotify or Google Podcasts, so you'll never miss an episode.


Today's Host

Harry Kemsley
President of Government & National Security, Janes

Today's Guests

Dr Ingvild Bode
Associate Professor at the Centre for War Studies, University of Southern Denmark

Sean Corbett
AVM (ret’d) Sean Corbett CB MBE MA, RAF