Getting Risk Right and Wrong Part 2

Beth Sizeland

00:00-29:07

Transcription
The main ideas from this talk:
- Risk analysis involves looking at people, information, and systems; systems can either inhibit or enable the understanding of risk.
- Disconnections between people, institutions, and information can lead to difficulties in solving problems.
- It is important to be aware of your role within the broader system and to connect with other professionals who can benefit from your perspective and information.
- Technical compatibility between IT systems is crucial.
- The scale at which work is done affects the ease of operating systems and connecting with others.
- Professionalism includes training, awareness of broader systems, compliance, and an inclusive, anti-blame culture.
- Oversight and accountability help maintain standards in environments where problems arise.
- Governance, policy, and decision-making processes also shape how risk is understood and addressed.
- Articulating risk in a digestible and usable manner is important.

Okay, so we've looked at people, we've looked at information, and the third component of this analysis is systems. Systems can either mostly inhibit or mostly enable the understanding of risk, and this is one of the things that comes up time and time again in inquiries and reviews. The way systems connect is one of those areas that is often found wanting when things go wrong. That can mean disconnected people or disconnected institutions, but very often it is in the boundaries between different entities, jobs and functions that information slips through the gaps. This is a very human challenge in some ways. How well aware are we all, for example, of where we sit within the broader system? What are our responsibilities to reach out and connect with other professionals who may benefit from our perspective and our information? But it can also be a very technical problem.
When two IT systems are connected to each other, are they actually compatible? Gaps in access to people, perspectives or information can cause real difficulties, and closing them can really help you solve problems. The other aspect of this is the scale we're operating at when we do this kind of work. Connecting people and systems is much easier at a relatively small scale, on a human and personal level, than it is across an enterprise where millions of people are working, or where a number of separate procurement and technical systems have to operate in connection with one another. Training and leadership engagement really help here: setting expectations of individuals and teams so that they connect with the wider system rather than operating in individual silos. That connection is really important. Then there is professionalism, and there are a bunch of things you would expect to see in a professional analytical system. There is a structured, institutional approach to improving the quality of the product for analysts, researchers and decision makers. The sorts of things we see here are training programmes to develop analytical skills, awareness of broader systems, and compliance. Professionalism also includes the really important notion of culture: an inclusive, anti-blame environment in which people can learn and share. There is very much to be achieved there. The US, for example, has deliberately maintained multiple analytical bodies in order to improve the quality and diversity of views across the system. And high-quality people are really important to professionalism.
It doesn't matter how well people understand the analytical environment; they still have to do the hard business of running the basic corporate tasks well in order to achieve professional results. Another component, I would say, is oversight and accountability. That means thinking hard about who holds you to account. Whether a body answers to the public through judicial review, or through oversight bodies in Parliament or the Senate, all of that helps you maintain standards in environments where the outside world cannot generally look inside to see what you are doing, and it helps you solve some of those problems. And as a final element there is policy and process, which I think gets a little less attention, but I find it comes up in inquiries more frequently than not. There is a whole bunch of stuff that could be put into this bucket; I'll give you a couple of examples. Governance: the governance of a topic shapes how well risk is understood and addressed, and we've talked about that in relation to the policy of risk, because how you authorise things is really important. On China, for example, in a government system you could chair your meetings on the risk either through a foreign-policy-led entity or through a domestic security body. Those two approaches will get you different people in the room, different inputs, different outcomes. So that is a design question in governance: what do you want to get out of it, and which elements do you think will be optimal? Policy and process also covers things like security policy and protective measures. These are very useful for protecting information, but, as we discussed earlier, they can go rather far.
If you go too far, your analysts will be shut out of the information they need. So, generally, what you want is a policy that protects the things that genuinely need protecting, while enabling and encouraging analysts to have a varied set of information in front of them. I will leave that with you, but there is a question of proportionality there that is worth thinking about. Okay, let's move on. So, you've gone through the process of developing your assessment of risk. Once you've made your assessment, the way in which it is presented or articulated is also a really important factor in determining whether good decisions are made. Policymakers and decision makers need a lot of support, whether in person or through the format you're using. It has to be digestible and usable, and that sounds really obvious, but it's actually quite a complex process. How far do you go towards simplifying what is a very, very complex picture? How much complexity do you strip out? How do you represent that complexity versus reducing something to a two-page document? And how reliable are things like lexicons of likelihood? You might say something is highly likely to happen; I might read that as likely, and someone else might read it as not very likely. Lexicons of likelihood are useful, but it is difficult to predict how individuals will interpret them. The use of visuals and data can be really compelling and important; during the pandemic we had very specific examples of data being used to drive policy. But, again, it has to be curated and explained, and often you have to interpret some of that data so that the individual reader is not simply left to draw their own conclusions. Everything has to be put in its context.
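The problem with lexicons of likelihood can be made concrete. The sketch below uses an illustrative yardstick of verbal terms mapped to probability bands; the specific terms and band boundaries here are assumptions for illustration, loosely modelled on published intelligence-assessment yardsticks, not official values:

```python
# Illustrative probability yardstick: verbal likelihood terms mapped to
# numeric probability bands. The terms and boundaries below are
# assumptions for illustration only.
YARDSTICK = {
    "remote chance":         (0.00, 0.05),
    "highly unlikely":       (0.10, 0.20),
    "unlikely":              (0.25, 0.35),
    "realistic possibility": (0.40, 0.50),
    "likely":                (0.55, 0.75),
    "highly likely":         (0.80, 0.90),
    "almost certain":        (0.95, 1.00),
}

def term_for(p):
    """Return the yardstick term whose band contains probability p,
    or None if p falls into a gap between bands (one known weakness
    of banded yardsticks)."""
    for term, (lo, hi) in YARDSTICK.items():
        if lo <= p <= hi:
            return term
    return None
```

For example, `term_for(0.60)` yields `"likely"`, while `term_for(0.22)` yields `None`: the assessed probability falls between two bands, which is exactly the kind of ambiguity that leaves the writer and the reader with different impressions.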
The other thing to bear in mind is that we talk a lot about articulating risk as a product, because there are whole jobs built around that drafting and publishing role, and that is still really important. But increasingly, products have to go alongside discussion and engagement where that is possible, and it isn't always possible: many people, including politicians, don't have the bandwidth for it. On really key issues, though, the ability to read, interact, digest, discuss and challenge, that back and forth, is a really special part of how decision makers absorb and use the information you put in front of them. And as technology comes into this space, we're going to find different ways of doing that, different ways of enabling our decision makers to assess what we know. Okay, so, a key question as we reflect on all of this: why are there so many surprises? Well, you can see from what we have just walked through, across people, information and systems, that there is a lot that can go wrong. And it is rarely literally one thing that goes wrong, because all of these things are interconnected, and there are indirect consequences from one to another. It is very easy to list the ways things can fail, and it is impossible to completely eliminate the risk of something failing. But we can seek to understand, and we can seek to address, some of these things as they happen. Sometimes you actually get to the point of a really effective assessment, but it hasn't been shared with the right people; it hasn't been digested, understood and listened to. It can sit within the system but never be used, for a whole variety of reasons.
The inquiries into 9/11 give a very good example of people who really understood the threat and were articulate about it, but whose warnings went nowhere in the system because of their position within the organisation, their status. We have seen something very similar since the 7th of October, and it is interesting and instructive to look at who knew what, and why, within the system. The other thing is that we are a largely optimistic bunch as humans: we don't want things to go wrong, and we are not entirely rational. We are surprised because who, apart from the most determinedly pessimistic, would want to imagine something so awful happening? And then the variables in front of you are just unmanageably large. So much is not logical; it is random and unpredictable, and it is very hard to properly and correctly assess everything that is going to happen in the future. Randomness is a major factor. That said, surprises often turn out not to be surprising once we have hindsight: with the evidence in front of us we can retrofit the analysis, and that is why commissions and inquiries are helpful and why we can learn lessons from them. Okay, so I'm going to take a really, really short case study here, Syria, to show some of the weaknesses we have been talking about. The failures of assessment around this series of conflicts have had really dire consequences. The humanitarian toll: hundreds of thousands of people dead, millions of people displaced, cities in complete and total ruin, and we haven't yet seen the full cost of the conflict. We're seeing the fallout even now with the fall of the Assad regime. And, of course, we shouldn't forget the rise of ISIS, the spread of extremism, and the external attacks that happened in the mid-2010s. So, a short summary of what was missed here.
As I talk, think about that framework of people, information and systems. Okay? So, the first thing that went wrong was a real lack of nuance in Western capitals in thinking about the Arab Spring as it started to evolve in different countries with different contexts. Tunisia and Egypt are very, very different to Syria, in geopolitical context and in the nature of the regime holding power. We treated the Arab Spring as if it was a single thing, when in actual fact it was an extremely varied and complicated set of events. We didn't have enough information, or at least the right kind of information, to understand what was going on. Intelligence resources and intelligence requirements for that region had been cut, or were really focused on other issues: AQ in Afghanistan, AQAP in Yemen. Even from the terrorism side, the regional focus was quite different, and on top of that some big traditional intelligence capabilities and resources, both human and technical, were focused elsewhere as well. Normally you would say that in a situation like this you should be open to all sorts of intelligence sources, because you necessarily have to prioritise. But at the time, open source intelligence, and the amount you can get from it, was not taken as seriously as it should have been as an input. It wasn't valued highly enough, when there was a huge amount of really rich and valuable information out there. Things like the social media feeds from the ground as these events were unfolding had a lot more value for our analysis than we realised. I think it's also interesting that we were asking the wrong question in Syria, or rather we should have been asking more than one question.
Syria, and the events around it, were treated and thought about as a foreign policy issue and challenge. But, in actual fact, many of the impacts for countries like the UK, and across Europe and the US, ended up being domestic security and counter-terrorism issues. There were some very direct attacks, and there were other forms of harm too: people travelling to the conflict and returning radicalised and dangerous. And on top of that, it wasn't just a counter-terrorism or a military challenge; it was also a huge migration issue. We saw mass migration causing significant political tensions in Europe. We also didn't think as expansively as we should have done about Syria's partners, Iran and Russia: over time, how would they play into and expand this risk to the West? That wasn't handled very well. It went from a relatively isolated civil war up to a kind of proxy confrontation between the US, Russia and Iran that hadn't really been anticipated. So we weren't asking all of those questions. There was a bit of wishful thinking as well. We underestimated the brutality and the determination of the Assad regime, which was prepared to lay waste to its own cities and its own civilian population, including with chemical weapons. Assuming that wouldn't happen, or that everybody would abandon Assad, was the wrong call. We were also hopeful that we were coming out of the really intense phase of the terrorist threat: al-Qaeda core had been contained and diminished in many ways. So we were wondering how to wind down our posture when, in actual fact, another iteration of the threat was just around the corner. And then there were the systems questions.
Was all the information being shared between the counter-terrorism community and the regional and foreign policy communities? Were the right people in the room? Was there a strategic vision? Was there any focus on the potential domestic consequences? So there was a question there about how connected, and how willing to connect, the different parts of the system were. Okay. Right. So, lastly, to pull the whole framework together, I'm going to put a task to you. Take any surprising event: I've put some up here, but use your imagination and pick something you're interested in. And if you'd like to work alongside someone, have a chat in pairs, that would be great. Keep it short; I don't want you doing tons of work here, just bring three thoughts. So, again, clarity and brevity. First, frame the event: tell me, concisely, what happened. Second, look at some of the weaknesses you can either diagnose from what you can see in open source, or can expect might have happened, based on the framework. Have a think about how the assessment was digested and used, and use the framework to unpack it. And third, imagine you're talking to an inquiry, and come up with one or two practical recommendations that might help them not end up in that place again. That's your homework, and you've got the rest of the week to do it.
Actually, somebody has asked for an extension to Sunday, so I'm happy for you to take the weekend, or into next week if that's easier for all of you; or if you want to hand it in on Friday, that's also fine. Okay. Right. So, we looked previously at factors affecting decision-making, and those apply here too. You might produce a great assessment, but decision-making does not happen in our lovely, high-quality analytical ivory tower. There is a lot of other stuff outside that plays in, and these are legitimately alternative concerns that sit alongside the pure analysis of risk. The other thing is that officials advise and politicians decide, and that is a healthy model: we are all playing different roles within a democracy. Although it may feel counterintuitive not to act on a risk, politicians have a whole set of different equities that they are balancing, and they are ultimately the ones taking responsibility; they are the ones voted out if it goes wrong. Then you also have to look at how much of the risk you can actually influence, and at what cost. Sometimes the honest answer is that it costs so much to reduce the risk that maybe we should just assume it is going to happen and prepare for it. And, really interestingly, with things like earthquakes, natural hazards or wildfires, instead of just rebuilding like-for-like with your insurance money, when you repair and rebuild can you build in resilience, doing something different to make the damage less likely next time? Preparedness is one of the levers here, and we looked last week at the kinds of levers you can use to reduce the various factors of risk. A final point: it's much easier to point at the failures that get published than at the successes, because well-managed risk doesn't manifest itself.
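The cost-versus-risk trade-off described above can be sketched as a simple expected-loss comparison. All of the figures and names below are invented for illustration; the point is only the shape of the reasoning, not real numbers:

```python
# Toy expected-loss comparison for a mitigation decision.
# All figures are invented for illustration only.
def expected_cost(p_event, loss_if_event, mitigation_cost=0.0, risk_reduction=0.0):
    """Expected total cost: mitigation spend plus residual expected loss.
    risk_reduction is the fraction by which mitigation cuts p_event."""
    residual_p = p_event * (1.0 - risk_reduction)
    return mitigation_cost + residual_p * loss_if_event

# Option A: accept the risk (say, a 2% annual chance of a 10m loss).
accept = expected_cost(p_event=0.02, loss_if_event=10_000_000)

# Option B: spend 150k on resilience measures that halve the likelihood.
mitigate = expected_cost(p_event=0.02, loss_if_event=10_000_000,
                         mitigation_cost=150_000, risk_reduction=0.5)
```

With these invented numbers, accepting the risk costs 200k in expectation while mitigating costs 250k, so on a pure expected-value basis you would tolerate the risk; this is exactly the "maybe we should just assume this is going to happen" calculation, though building resilience may still pay off over longer horizons or for losses that money cannot repair.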
You can have one bad day and the whole security environment changes; but there are lots of examples of risks that were well managed and never materialised, and those rarely get talked about. The Thames Barrier in London is one. Projections made in the 1950s suggested London would be flooded if the Thames wasn't regulated, so the barrier was built. It is now up for renewal and will need looking at again for the next 20 or 30 years, but essentially that is a natural hazard, over a really long period of time, well managed. Marauding terrorist attacks are another good one. Even before we saw the Paris attacks, the UK had understood the need for really rapid armed response teams able to react within minutes in major cities. It was really hard to get those teams in place, train them up, and have them deploy alongside the other emergency services in a crisis situation, but that hard work was done. It doesn't stop attacks from happening, but it does contain the consequences. And then the millennium bug, which you will all have heard of. This is a really interesting one. The fear was that computers would reset their internal clocks at the turn of the millennium, with widespread disruption. What actually ended up happening was that so much work was done on it that the problem largely disappeared, and many people concluded it had been overblown. But there would probably have been a lot more disruption had people not prepared for it, taken it seriously, and deployed their corporate IT teams to manage it. It's hard to say how destructive it would have been, but certainly some risk was taken out by that foresight.
And then there is counter-proliferation, with the JCPOA. With its collapse, Iran is now reckoned to be not that far off a nuclear weapon, and it has been rebuilding its nuclear stockpiles. So there is always a balance here, but we do have examples where counter-proliferation has been at least partially successful. Right, we've finished more or less on time; I shouldn't sound too pleased about that. Takeaways. It is very easy to throw stones; this is an incredibly difficult business to be in, and if you can't stand being wrong, you shouldn't be in it. Understanding what is happening now is hard enough; predicting how it will evolve, predicting the future, is extremely difficult and fraught with error. It is about judgement, and it is about nuance. When things go wrong, there is very rarely a single cause: it is a complex interaction of people, systems and information, all contributing either to an effective machine or to one with problems. And when you're taking decisions about risk, context is everything, as we discussed. Where responsibility lies is important too: you might get frustrated as an official, but politicians are the ones ultimately accountable for what happens. Okay. So, that's it for today. If you need to reach out, please do; if you want to have a chat, I am travelling for much of this month, but I am very happy to speak, and even if I have to drop off a call, do give me a shout if I can be helpful. I am shaping up a programme for March which is going to be really good; we've got a couple of special guests coming in, and I think you're going to enjoy it. I will let you know as soon as I know what the timings will be for that week. Okay.
Thanks very much, everyone; keep an eye on your emails, and see you in March.