The CISO discusses the impact of Gen-AI on cybersecurity. It is being leveraged by attackers for deepfakes and synthetic identities, posing a challenge for security. Phishing attacks are becoming more convincing, and attackers can easily create prebuilt GPTs for phishing emails. Security practitioners should consider governance and compliance when adopting Gen-AI tools. Unused identities pose a risk in cloud environments, and organizations should prioritize managing and monitoring identities. CISOs should also think like chief privacy and risk officers to protect against leaking company secrets and potential attacks.

"From the CISO perspective, the landscape is changing and it's changing quickly. And what we're faced with now is how do we adopt Gen-AI tools, because it's really not an option. You really have to look at doing it because that's what the attackers are leveraging. And we, as the security community, need to get a little better at collaborating with stakeholders and figuring out how we get to a yes."

Hey everyone, I'm Brad Busse, Chief Information Security Officer here at E360. Thank you for joining me for the State of Enterprise IT Security Edition. This is the show that makes IT security approachable and actionable for technology leaders. I'm excited, as this is our very first episode, where we discuss three timely topics. The first one: Gen-AI and how it's turning the security landscape on its head. Two, the risk of unused identities. And three, the increase in ransomware attacks in 2023. Now, we did record the show at the end of 2023. And I'm sure you're going to hear me talk a little bit about the holidays. And you're probably looking at your calendar saying, wait a second, it's 2024. But honestly, everything you're going to hear is still timely. So with that, let's get started.

I wanted to spend some time talking today about three different topics. So I spend a lot of time talking with clients as well as reading clips in the news.
And I came across a couple of things that I think you would find interesting and hopefully useful. So Gen-AI, I think everyone has heard a little bit about it. Or if you're like me, you've heard a lot about it. So how is this really impacting CISOs? Because I keep hearing this term that it's turning the cybersecurity landscape and the CISO role on its head. So there was an article that I was looking at. And I did a little extra reading because I was interested in, you know, where they got some of their facts from. And I figured I would share some of that with you.

So for those of you that don't know, Gen-AI is a type of artificial intelligence. And what it's good for is creating new types of content. And that is things like images, text, audio. But the thing that you need to remember is all of this is based on some kind of existing data. So we're giving it an input. And that's why when I read generative AI, I'm not thinking artificial intelligence. I'm thinking augmented intelligence. So it still needs some form of input. And in order to get what you're looking for, you have to give it more data. And sometimes you have to give it a lot of data. And that poses a cybersecurity challenge.

So when it comes to the way that I would say some of the attackers are leveraging this type of technology, they're doing things like deepfakes. So they're creating, in essence, for me, it's my face, it's my voice, and it's very difficult to tell who's real and who isn't. And then there's synthetic identities. There are people that exist that aren't actually people. They are AI-generated voices and faces. And what I'm constantly faced with as a CISO is things like automated phishing campaigns, where there is an AI that is being fed information from someone nefarious. Could be someone overseas, onshore, you honestly never know. But maybe English isn't their first or, in some cases, second language.
But they're able to feed these new Gen-AIs and large language models information, and it will craft that phishing email for you. So I'm going to talk a little bit more in the show about a few of the ways that these organizations are leveraging some of the technology out there, like ChatGPT, where they're creating their own GPTs. And I'll tell you a little bit about what that means.

So from the CISO perspective, the landscape is changing, and it's changing quickly. And what we're faced with now is how do we adopt Gen-AI tools, because it's really not an option. You really have to look at doing it, because that's what the attackers are leveraging. And we, as the security community, need to get a little better at collaborating with stakeholders and figuring out how we get to a yes. And I talk with my team and clients about this often, where in the CISO role, often people look at us as the ones that say no. And what I'm trying to get to is a yes. And I talk a lot about the statement "yes, and." Yes, and we're going to get there. It may not be the way that you are proposing. However, we can solve that problem.

So I think really the future of Gen-AI is going to continue to evolve. It's going to impact cybersecurity both positively and negatively. And I think it's creating opportunities and risks for the CISO role and security roles in general for organizations. So things I want you to take away from this: phishing attacks are getting more convincing. And they're getting a lot easier for attackers to create. And there are basically exchanges now on the dark and deep web where they can go and just get these prebuilt GPTs. And they may be built only to do phishing emails. And they start off with basic phishing. Then they move into spear phishing. And then it turns into a persistent threat scenario. So these are things that, you know, we need to leverage tooling that's out there.
It's not all doom and gloom, because a lot of our favorite tools have been leveraging machine learning on the back end. It just isn't as buzzworthy, so there hasn't been a lot of talk about it. But I think you're going to start seeing more from some of the vendors out there. Now, I always try to keep this show pretty vendor neutral. If you ever hear me talk about specific technologies, it's going to be rare. But I may end up using some of the products as just a talking point.

So the thing that I want you to take away from this is: where, as a security practitioner, should you start? And if I'm looking at this as I would look at any security program, really it comes down to governance and compliance. And that simply means what is allowed and what's not allowed. So if I'm looking at this from my employees' side, what is it okay for them to do? Should they be able to use ChatGPT? Should they be using Google Bard, now with Gemini as a piece of it? Should they be using Microsoft Copilot and that whole Bing Chat component? That's going to be up to you to decide. Maybe you want all of them. That is really just the Gen-AI side of it. Then it's going to come into what's allowed from the machine learning, or the overarching AI, inside of the organization. Is Google Vertex going to be allowed? Is it going to be Copilot? And as security practitioners, we have to think about this and we have to ask: what is the overall risk to the organization? So we have to do a little more research and understand what is possible. And I know when it comes to Microsoft Copilot, there are a lot of individuals that are looking at it and thinking, this is great. However, it has access to all of my Microsoft solutions, which means data, as well as the identity side of things. So you can imagine what bad things could happen should an attack be designed specifically to leverage something like Copilot.
Because I often say one of the main things that organizations struggle with is simple things like: who has access to what? What should they have access to? Where did they get the access? What are they doing with the access, and should they still have it? So one of the things that I've found is that when you start peeling back that scenario, if I'm giving Copilot the ability to help me, it could actually end up hurting, because maybe my unstructured data isn't very well partitioned, meaning I don't have the right groups and roles in place to start. And I see this all the time with a lot of my clients, where someone was just given a role because it was copied from somebody else. And that could be something that's been there for 5, 10, 15 years. You never know. And what you get is just this creep of access. And the next thing you know, people have access to things they shouldn't, because just because you have access to something doesn't mean you necessarily should. So there's a lot we could talk about in that particular area. I don't think we'll go into it today.

The thing that I want you to think about is how can we innovate quicker and leverage the AI, the machine learning, really that whole Gen-AI wave that's happening right now, and how do we implement and integrate it into our protect surface? So those are things that we'll touch on, I think, in some episodes as we go through. But one thing I want you to look at is CISOs really need to start thinking more like chief privacy and risk officers as well. So if your organization doesn't have one, start thinking that way. Because really what it comes down to is, if I have to feed this Gen-AI wave with information for it to do something, I could be leaking company secrets and not know it. I could be helping the language model learn more about my organization so then someone else could use that data and design a way better attack against me. And it may sound harmless, but it could definitely happen.
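One lightweight control in the spirit of that governance program is screening outbound prompts for sensitive patterns before they ever reach an external Gen-AI tool. This is purely a hypothetical sketch, not a product recommendation; the pattern names and regular expressions are illustrative, and a real deployment would pull its policy from your governance program and your DLP tooling.

```python
import re

# Illustrative patterns only; a real program would load these from policy.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_host": re.compile(r"\b[\w.-]+\.corp\.example\.com\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings) for a prompt bound for an external Gen-AI tool."""
    findings = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]
    return (not findings, findings)

# Blocked, because the prompt references a (hypothetical) internal hostname.
allowed, findings = screen_prompt("Summarize the outage on db01.corp.example.com")
```

The point isn't that three regexes solve data leakage; it's that "what are employees allowed to send to these tools" becomes an enforceable check rather than a policy document nobody reads.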
And there's a good use case here if you do a little bit of research and look at things like FraudGPT or HackGPT or WormGPT. These are hacker-level tools that have been designed, and they're basically using similar, if not the same, large language models as ChatGPT and others. So just be cautious, but definitely start with a governance program, and I think that would be a good place really for anybody.

Changing gears, let's look at something else that I was reading the other day. And, you know, everybody always talks about the elephant in the room. And this one was kind of funny from a title perspective, because it said: the elephant in the cloud, the risks of unused identities. So I'm sure a lot of you have heard that identity is the new perimeter, and really that's a true statement. And I think many organizations that I talk to struggle with things like unused identities, or identities that maybe have fallen off of awareness, ones that are default to a platform or to a specific cloud. And a lot of the time, you know, I feel like there's an inherent fear of changing something that comes with a system. Or maybe you're trying to solve a problem with inadequate tooling, because perhaps you can't afford it, or you've been told, hey, you can manage all of these things with, you know, just the native tool set. But what you might be lacking is things like auditing, automation, custom controls. And I find a lot of people just haven't been trained. It's assumed that it's easy and that you know what to do. But that's not always the case.

So when I'm looking at this, I'm thinking of an identity as a couple different things. It could be a human user. It could be a machine identity. This is something like maybe a service account, something built directly into an application. I think the human user is probably the easier one to identify, but not always. And I think when I've talked to clients, you know, the inadequate tooling comes back to the things I was saying before.
I've got an identity platform. Maybe it's given to me by Google, maybe Microsoft; it could be AWS or others. And perhaps I'm trying to manage that, but it's not as effective as if I were to look elsewhere. Maybe I'm in a multi-cloud scenario. I think a lot of us are these days. And when I start looking at how many of these I'm able to leverage, the list gets longer and longer. I know for a fact there's a lot of fear that something could break. And there are a lot of inherent identities and roles and permissions and things that are just set up as part of an initial identity and access management implementation. And if you look at how a lot of the big platforms talk about what you should do, many of them say, hey, don't use any of the default stuff. And me, having been in a lot of these different environments, I'm like, then why is it there? And can I go ahead and disable that stuff? And it's funny, because in some cases that's a yes. In other cases it's a no, that's going to break things. So to me, as a security practitioner, I'm a little concerned about that kind of stuff. So just being able to find a way to mitigate it, I think, is important.

But if I'm thinking kind of beyond all of this architecture and technology, I think auditing becomes very important when we have this risk of unused identities. So I want to know if something hasn't been used in quite a bit of time and all of a sudden it pops up. And I said this before: just because you should be able to access something doesn't mean it's something that needs to happen. Or just because you can do it doesn't mean you should do it. So having auditing, having automation, and I talk a lot about this when it comes to lifecycle management. And that is things like provisioning a user account, reprovisioning access. And I think one of the most important is deprovisioning.
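That kind of last-use auditing can be mocked up very simply: flag any identity that hasn't authenticated within some window, and flag it louder if a long-dormant one suddenly becomes active. A hypothetical sketch, with made-up names and an illustrative 90-day threshold; a real implementation would read sign-in logs from your identity platform rather than an in-memory list.

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)  # illustrative threshold, tune to your policy

def audit_identities(identities, now=None):
    """identities: list of dicts with 'name', 'last_login' (datetime or None),
    and 'active_today' (bool). Returns (stale, suspicious) name lists."""
    now = now or datetime.utcnow()
    stale, suspicious = [], []
    for ident in identities:
        last = ident["last_login"]
        dormant = last is None or (now - last) > STALE_AFTER
        if dormant:
            stale.append(ident["name"])
            # A dormant identity that suddenly shows activity deserves a closer look.
            if ident.get("active_today"):
                suspicious.append(ident["name"])
    return stale, suspicious
```

Even a toy audit like this surfaces the two cases discussed above: the default account nobody ever used, and the forgotten service account that "pops up" after months of silence.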
So when someone leaves an organization, we need to make sure that is all triggered, it's automated, and we can say definitively that that identity has been deprovisioned and all access has been removed. And I will say the most challenging thing when it comes to that deprovisioning process is all of the ancillary applications. Because not all applications leverage a common identity. There are still ones out there that use a username and password. And just because I deprovision the person from my environment doesn't mean it automatically reaches out and takes away their access to Salesforce. It could be a long list of applications. And I think that's something useful when you're thinking about, you know, how do I stay out of trouble when it comes to identity? Really, it comes down to adopting that comprehensive, automated, and educated approach to lifecycle management.

One of the other things that I was looking at this week: as we're getting close to the holiday season, there always is kind of a wrap-up of the year. How did the attacks of the year impact us? And I found a couple of articles that were really focused around the big hacks that have, in a lot of cases, surged in 2023. And I'll name just a couple. We've had Clorox recently. We saw some things come out about Boeing. MGM was a little bit earlier than that. And it's interesting, because CrowdStrike is saying that there's been a 51% increase in ransomware attacks since 2022. But I think we all have to take a look at 2022 in general and think, well, why is that? Because experts are saying 2022 was actually a lull in the overall attacks when it comes to the United States, Canada, the UK, a lot of the industrialized nations. And I think if you were to look at what happened in 2022, that was when the whole Russia invading Ukraine scenario played out. So I think if I'm putting on my critical thinking hat, I pretty much know where all of the resources were going in 2022.
And I think in 2023, what we're seeing is these hacker groups are back needing to make money, because there's been some things that have been done. I'm not going to get into a political or a geopolitical conversation. That could be a different show. We'll see. But essentially, there was a lot of distraction happening, and now they're out to make money again. So if I'm seeing what's happening, there's been a lot of ransomware. And it's so funny, because I'm sure you're sitting there thinking: ransomware, wasn't that so 2018? It was, but it still hasn't necessarily gotten any better. I mean, there was a large Australian port that was paralyzed from ransomware. Couldn't ship, couldn't really do much of anything. Havoc was being wreaked on Las Vegas casinos by ransomware. You heard me mention Clorox. There was a shortage of goods caused by ransomware for Clorox, so there were fewer sanitary wipes and other things. And then there's been some disruption in treasury markets.

So I was looking at this, and I saw that there was a 33% increase over 2022 in just the first three quarters of 2023, because we're not done yet; we're still in the fourth quarter. So the targets continue to be the U.S., the U.K., Canada. India is a growing market for these attacks, Pacific Islands, Africa. So what does this all have in common? That's where a lot of the money is. And what is ransomware after? It's not always looking at destroying. And I would say most of the time it's designed to make money. So they want the victim to pay the ransom.

And I'm sure you're wondering, like, why hasn't something been done about this? And you have to look at how the tactics of these ransomware groups really unfold. It's honestly guerrilla warfare. They come together in these small surgical strike-type cells. They're active for a while. They make money. And then they dissolve back into the ether. And then they'll reassemble later as a different group.
And it's not always the same people, as we've discovered. But they typically act like a franchise of a much larger organization. And they're getting maybe some of that initial funding or tooling or other things from the dark slash deep web. And there's this whole healthy exchange. And I'm saying this with a smile because it's not healthy for those that are victims. But if you are an attacker, your job is getting easier and easier. And some of that we already talked about when it comes to, you know, leveraging the whole GPT and the Gen-AI tools that are out there.

So when I look at kind of the year in review, the hacker groups that are most active have been LockBit and Scattered Spider. And Scattered Spider is interesting, because this particular group is very focused on social engineering. So they're calling help desks. They're calling the, you know, kind of the IT service, NOC, SOC, claiming to be a user: I got locked out. They have just enough information where they can get a password reset or a token reassigned or something to gain access to a system. And then what do those attackers do? They move laterally. So that is what happened to Clorox. That's what happened to MGM. We can dissect the anatomy of that attack, I think, on a different show.

But really what happens is the ransom gets paid, and that's what we don't want to happen. But in some cases, these organizations don't have any other choice, because they didn't have a good, robust backup strategy. And there's something that I've been talking with clients a lot about recently, which is building a resilient infrastructure. Resilient cloud, resilient DevOps, resilient applications overall. And what we're trying to get to is: you should be able to suffer a complete loss and be back up and running in a predetermined amount of time. And it shouldn't be months. It shouldn't be weeks. It shouldn't even be days. It should be a number of hours. So there's a lot we could unpack there, too.
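One way to make "back up and running in hours" concrete is to treat recovery objectives as numbers you check continuously, not aspirations. A hypothetical sketch, with made-up system names and targets: compare the age of each system's newest verified backup against its recovery point objective, and alert on anything out of tolerance.

```python
from datetime import datetime, timedelta

# Illustrative recovery point objectives (RPO): the maximum tolerable data loss,
# measured here as the age of the newest verified backup. Names are hypothetical.
RPO_TARGETS = {
    "billing-db": timedelta(hours=4),
    "file-share": timedelta(hours=24),
}

def rpo_violations(last_verified_backup, now):
    """Return systems whose newest verified backup is older than its RPO target."""
    return [
        system
        for system, target in RPO_TARGETS.items()
        if now - last_verified_backup[system] > target
    ]
```

The same idea extends to recovery time objectives: time your restore drills and fail the check when a drill runs longer than the promised number of hours.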
But what I want you to take from this is these hacker groups are typically operating in nations where law enforcement is very, very difficult. And they're preying on our desperation to continue to conduct business. And that is why this is such a large problem, because there's so much money to be made. When you're looking at organizations like MGM, the figure hasn't been disclosed, but the idea was that it's tens of millions of dollars. And can you imagine if this hacker group is just three people and it's all paid in Bitcoin? I mean, that's set-for-life kind of money. And not in a good way for us, but something that we should be thinking about.

So don't always think of this as necessarily just the monetary side of things. There's also the threat of leaking sensitive information. I was just at a conference a couple months ago, and we talked about a third type of ransomware that's starting to appear, especially in health care scenarios, where they don't steal your data. They don't lock you out of it. What they do is they actually change key parts of the data, and you're not sure what they changed. So you can imagine how this would impact a health care organization. Let's say that they're changing patient data. They're changing maybe how much potassium you're supposed to give a patient, or what dose of medication. And can you imagine if they would then come at you with the ransom and say: we changed a bunch of stuff, and I'm not going to tell you what we changed. In essence, if you don't pay this ransom, people can and will die because of the information that's been changed. Kind of scary.

So these are things that I think are driving this whole unhealthy attack surface. And there's a lot that can be done about it. I think what we need to do as a security community is dissect some of these attacks. Look at what happened in 2021 with the Colonial Pipeline. Look at Clorox, Shell, MGM, ICBC, Boeing.
I mean, I can keep naming names, and I feel that's part of the problem. If I can continue to name these names, why have we not learned our lesson? And as a security practitioner, I can obviously, hindsight is 20/20, look back and say, here's how we could have prevented that. So I think we have a duty as security practitioners to speak with our boards and talk about the very real challenges that could take place. And that's kind of, I think, where the problem lies. Most of the time, I think security is looked at as insurance. Well, let's just get that cyber insurance. Well, let's just do the minimum so we can pass that audit, or be compliant with SOC 2 or ISO 27001 or NIST, but I can keep naming frameworks. And I think we're missing the point. I think that ultimate point is for us to create that resilient organization and to properly mitigate risk. And I feel like we just need to do better.

So I will give you a little silver lining about this whole ransomware topic. I was looking at this organization called the Ransomware Task Force. They're a nonprofit, and they have 48 actions that a public or private organization can take to help mitigate attacks. I think the information is great. Some of it you may know. Some of it you may not know. So I would urge you to search for that and take a look at it, because I feel like this problem isn't going anywhere. It's going to continue to be in the headlines. And I would much rather us all take a look at this and be armed with a little more information. And it's hard. I mean, some of us have shifted to work from home, and some of us are at home or, as I like to say, we're anywhere. But we're maybe anywhere only a couple of days a week. But that in itself has made our attack surface as organizations that much bigger.
So that's why I'm urging we need to bring security closer to our endpoints, closer to the application, closer to the data, and we need to move away from this, and I'm going to say, old way of thinking, which is the castle-and-moat perimeter type approach. The buzzword of last year was zero trust networking. The buzzword this year is AI. But I think if I'm going to leave you with anything, it's that we need to stay focused on a true zero trust approach and start to treat our internal networks just like the Internet and protect our devices, users, and data a lot better. So thanks again for joining me today, and we'll see you next time on the State of Enterprise IT Security Edition. Thank you.