Episode 268 Deep Dive: Matt Preswick | Democratising Cloud Security – Will Security Become the Enabler to AI Usage?

Jul 12 2024 | 00:41:32

Show Notes

In today’s episode, we’re joined by Matt Preswick, Principal Solutions Engineer from Wiz, in the company’s first podcast appearance, to talk about cloud security and the intersection of AI with security in organizations. Matt emphasizes the critical need for evaluating security risks and compliance states within cloud infrastructure, addressing the potential for false positives in identifying security vulnerabilities. He also sheds light on the challenges posed by cloud-native threats, urging organizations to integrate security into early-stage application and infrastructure design cycles for efficient prevention of incidents. Matt’s insights underscore the importance of collaboration between security and AI teams, aligning initiatives with organizational goals and customer needs.

Matt Preswick is the Principal Solutions Engineer from Wiz in APJ, with experience in network, email and cloud security at leading security vendors in EMEA and APAC. Before joining Wiz, he led operations at a Sydney-based network analytics startup before returning to cloud security with Wiz.

Matt is passionate about developing scalable and actionable security practices within APJ's largest organisations.

Episode Transcript

[00:00:00] Speaker A: Everyone has to be security conscious. You know, we're all in that same team. And the way to facilitate this is for security and application teams to be cohesive. And more importantly, like on the same team, the security team are not there to make your life difficult. So it's about coming together and having that shared source of truth. And then more importantly, security then becomes that internal expertise engine, a consulting function within an organization to say, hey, we've got all these best practices for you to do. We don't need to hand hold and we don't need to wave the stick around, but we're here if there's something that you really need to escalate. [00:00:39] Speaker B: This is KBKast. This is a primary target for ransomware campaigns. [00:00:43] Speaker A: Security and testing and performance. We can actually automatically take that data and use it. [00:00:54] Speaker C: Joining me today is Matt Preswick, Principal Solutions Engineer from Wiz. Wiz is the company that everyone has been talking about, including myself. A few interesting things about this company: their approach to cloud security drove Wiz to $100 million ARR in 18 months. Today, at 3.5 years old, Wiz has $350 million in revenue, a $12 billion valuation and a recent round of funding reaching $1 billion, making it the world's fastest growing cybersecurity unicorn. So today we're discussing democratizing cloud security: will security become the enabler to AI usage? So, Matt, thanks for joining and welcome. [00:01:30] Speaker A: Thanks, Karissa. Good to be here. [00:01:31] Speaker C: So you guys are the company that people are talking about, which is why I want to get you on the show. So let's maybe start with your thoughts then, Matt, on democratizing cloud security. Like, what do you sort of mean by this? [00:01:46] Speaker A: Yeah, it's an interesting one. You know, forgiving any potential marketing or industry cliches, democratizing cloud security is a really important change in dynamic when it comes to cloud security. So the way I kind of talk through it is, ultimately, the way cloud has been adopted broadly is in some way, shape or form democratized. Developers have the ability to spin up their own machine, start to play around, use open source technologies. It's much easier for a developer or application team to just start querying, playing around. So my view of democratizing cloud security, or security in general, is kind of following that similar philosophy. If developers and engineers are self-servicing and creating their own applications, we should also have that self-service nature of security. They shouldn't need a security practitioner to be waving the stick at them and saying, hey guys, you've got to patch this or fix this. They should be able to get that information to say, hey, you might have done this a little bit insecurely, here's how to fix it. And there shouldn't be any of that kind of conduit of the security team. And then on top of that, obviously the security team then becomes a broader strategic governance over the top to help escalate when those developers and engineers might not know the specific risk that they're introducing. And they can be that kind of subject matter expert in that part.
So broadly speaking, Karissa, the idea of democratizing security or cloud security is the ability for those that own the applications or infrastructure to be able to self-remediate, self-patch, and kind of self-contain risks as they occur in their cloud environment. Yeah. [00:03:28] Speaker C: So Matt, you made an interesting point around, you know, self-remediation. Do you think it's sort of better? Because at the end of the day, like, you know, no one, to your point, wants to be like, you know, waving the stick, saying hey, like you made a mistake or something like that. Like, I've been in teams before. Sometimes it's a little bit better to mark your own homework than someone else marking it for you, would you say? [00:03:47] Speaker A: Obviously everyone's kind of somewhat aware of or experienced the potential cultural friction that happens with developers and infrastructure and security. That's no secret. And I think there's an element of lacking context on both sides of those teams. So I think the ability — no developer wants to maliciously introduce bad code or bad configuration, you know, there's just a potential ignorance around not knowing that they've done it, not knowing the practices that they need to be following. So the ability to self-remediate removes that tension and friction that's introduced when you've got someone telling you what to do and telling you kind of how to do things better. So I view self-remediation and self-service, particularly for, you know, the base level, the simple stuff, the classic mistakes everyone makes — it just turns into a culturally more synergized operation. And then more importantly from a business value perspective, you've got much faster velocity. You've got developers that have the ability to remediate in a fast time, focus on building securely and therefore be able to ship faster. And then when you've got them spending time on security, they're having the most tangible reduction in risk in terms of when they're spending time outside of developing and optimizing their application. That's one of the other broader benefits of going down that self-remediation path: essentially, you've got security and the security mindset integrated in terms of development. [00:05:11] Speaker C: So I want to ask a basic question, because, I mean, I speak to a lot of people on the show, but just generally in the market: what do you think people sort of get wrong about cloud security? [00:05:22] Speaker A: Yeah, it's a great question. I think there's a few elements to it. One, obviously cloud has introduced a lot of powerful mechanisms for organizations to move faster, develop faster, be agile in the way they operate. And it's abstracted a lot of on-prem philosophies that I don't want to say are redundant, but certainly less relevant. And so the fundamental thing that I see is a lot of people transferring the on-prem security mindset, or philosophies, to cloud. That's one element. And then more importantly, looking at cloud security as a one-dimensional problem. So in other words, cloud security is how I've configured my cloud services only. Like, that's just fundamentally not true. And then vice versa, you know, cloud vulnerabilities are just CVEs on my machine. It's seldom that you see a cloud risk or a cloud incident that involves a single dimension of risk.
In other words, you know, it's very rare that one CVE on a machine is the only element that's led to the potential breach. And obviously Australia's had their fair share of cloud related breaches in the past couple of years, and not one of them was a singular misconfiguration or a singular CVE, but a combination of them. And I think that change in philosophy for organizations to understand — it's like, having one firewall port open is not going to be the thing that brings you down, it's going to be that plus a misconfiguration, plus a CVE, plus an identity that's highly privileged. All those in combination is the actual risk in the cloud. So I think that's one fundamental misinterpretation or misunderstanding that I see when people are thinking about cloud security. [00:07:03] Speaker C: So just going back to your point, you said there are a lot of security philosophies in terms of on-prem. What are they? [00:07:10] Speaker A: I used to work in kind of more network security, which was applicable for both on-prem and cloud. But one of the things that they would do is, you know, you've got your perimeter, you've got your kind of moat around you, and regardless of how you've configured your VMs on the inside — you know, you haven't patched them appropriately or you might have some host misconfigurations and things like that — at least you had the assurance that from a perimeter perspective, you've got everything locked down. You've got a firewall where you could be so specific about what could be inbound and outbound. You had that reassurance that yes, we might not have the cleanest operation inside of the castle, but we know our walls are pretty locked down. So I think that's a philosophy that I see many organizations going to the cloud thinking they can maintain. Like, another one is internal reconnaissance. On-prem, when you've got a threat actor, moving laterally is typically a longer exercise than what we see in the cloud. You know, they've actually got to work out and investigate, okay, what can I jump from here to here? I see this as an internal IP. In the cloud, internal reconnaissance is easy because all of the application, or like the API endpoints for a cloud service, they're publicly documented. So what we see is, as soon as they've broken that first part, they run a script that says, okay, I want to see all these permissions that I know AWS or the other clouds provide, I just want to see which ones I get a success back on. So it's a much faster time from initial access to compromise versus what we see on-prem. So in other words, the detection and response focus that we see on-prem, where you say, okay, let's wait till something malicious happens and respond to it, has shifted a little bit in cloud. We have to go to a preventative standpoint, and a mitigation of blast radius, because once they break in, it's so fast from there to when they can say mission accomplished. [00:08:59] Speaker C: Okay, so there's a few things in there which are interesting that I want to get into a little bit more. So going back to the on-prem sort of side of things, what do you think rattles your traditional on-prem hardcore fans of that sort of model? Like, because, yes, I am sort of seeing that your traditional people are moving more to a cloud mindset, etcetera. But there's still some people out there on social saying, like, absolutely not. Like, you know, on-prem forever.
Like, what are your sort of thoughts on that? [00:09:25] Speaker A: Yeah, I think the main component there is that, like, you know, fundamentally you're moving your environment into a public sphere, right? You don't have the lock and key in the basement of the building with a data center that you can, you know, have physically guarded. You can pull the cable out of the particular port physically if something's really been compromised. You've got that completely abstracted. I think those types of general architectural changes are something that many engineers, particularly security practitioners, will just be like — you know, there's an element of control that you have on-prem, comparably to what you have in cloud, just from that physical standpoint. And there's the objective view of that, and there's obviously an emotional view of, like, I literally can see my machine, and if it's about to be compromised and there's malware that's propagating, I can physically turn the thing off. So I think that's one fundamental element that irks people about moving to cloud. And obviously there's huge benefits with going into the cloud. I think the other broad one is the new paradigm or new domains that the cloud has brought, which is obviously identity. We don't just have network as a perimeter anymore, which you do on-prem. Identity is that second layer of perimeter when it comes to the cloud. And obviously it's a whole new domain, it's a whole new landscape. It's essentially a new supply chain risk, I suppose. So, you know, there's an upskilling — there's a knowledge area that needs to be upskilled for many coming from on-prem. So I'm sure that those elements would be reluctance points for organizations. [00:11:01] Speaker C: You made a great statement around having the control, which I get, right. But then as I sort of zoom out of your statement, there is, like, how much we're working today. Like, when I started working like 15 years ago, there was no one working from home, there were no laptops, there was none of that. Like, you had a desktop at a desk, you came in, you went. At that time, you had control over your people from, like, a security point of view, right? But now it's like, you know, you may not even see an employee that works there for like ten years because they're in some remote place and you just never see them anymore. So I feel like the control by default has sort of already been lost a little bit, because the people aren't coming into the office like they did back in the day. So isn't it just sort of a natural progression that we're moving this way and losing the control, if you want to call it like that? [00:11:46] Speaker A: Yeah, yeah. I think with any of these evolutions, Karissa, there's always going to be that kind of reluctance to do it. But the train's left the station here. There's kind of no going back. The obvious benefits of moving to cloud are there. The operational kind of efficiency and development velocity that you can gain — it's too good to kind of hold back. So I agree, and from what I see in the market in Australia and New Zealand, for example, is most organizations are either fairly heavy in the cloud, accelerating very fast in their migration, or certainly having pretty core strategic initiatives to be moving to the cloud, primarily for those reasons.
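Earlier, Matt described attackers scripting their way through the publicly documented cloud APIs to see which calls succeed. That question has a defensive mirror image: asking AWS's own IAM policy simulator what a given principal is allowed to do. A minimal sketch with boto3, assuming AWS credentials are already configured; the role ARN and the action list are purely illustrative, not anything from the episode:

```python
import boto3

# Ask IAM which of a handful of sensitive actions a principal could perform.
# This is the defender's version of the "try every API and see what succeeds"
# reconnaissance described in the conversation.
iam = boto3.client("iam")

response = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::123456789012:role/app-role",  # hypothetical role
    ActionNames=[
        "s3:GetObject",
        "ec2:RunInstances",
        "iam:PassRole",
        "sts:AssumeRole",
    ],
)

for result in response["EvaluationResults"]:
    # EvalDecision is "allowed", "implicitDeny" or "explicitDeny"
    print(result["EvalActionName"], "->", result["EvalDecision"])
```

Run across every identity in an account, the same handful of questions gives a quick picture of which principals could be abused for the lateral movement discussed next.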
[00:12:27] Speaker C: So going back to your comment before around it being faster to do internal reconnaissance. Would you say most people sort of understand that? Or is that again something that perhaps people get wrong about it? Like you mentioned before, like you write a script — hey, here's everything. Like, that's a lot quicker process than perhaps an on-prem sort of approach. [00:12:44] Speaker A: Yeah. I think people still fundamentally misunderstand a lot of the cloud native type threats that we're seeing. A lot of the fundamentals are the same, of course, you know, break in, get to the sensitive data. You know, the high level stages of an attack, whether you follow like the classic NIST stages of an attack, they're still all there. But the methods or the how of what threat actors are doing has fundamentally changed. They're leveraging the cloud native endpoints and services, they know how to navigate them efficiently. Cloud keys and secrets — these are the mechanisms that we're seeing threat actors use. And then you start to layer in AI, and I'm not talking about AI driven threat actors that are kind of enumerating things, but just knowing how to potentially compromise AI models and do isolation breakouts and things like that. Threat actors are aware of these. And so once again, fundamentally, the stages and objectives are more or less similar, but the mechanisms have changed quite a lot and therefore our security strategies, from a preventative standpoint, need to shift a little bit as well. So I think in terms of the market, obviously, there's a lot of good knowledge around it. I think people became much more cognizant of it because of the amount of cloud native breaches that we're seeing. When you kind of think of it from the attacker's perspective, what's more likely if you're doing the ROI of, like, where are we going to spend our time trying to compromise? Doing an on-prem environment and spending, you know, months and months trying to sneak in and move laterally and kind of compromise, or do we just go to the public domain and just start doing absolute brute force across all these IPs that we know are part of cloud services, and then we know what to do once we get in? In my view, and not to oversimplify it, you can see why that's a much easier target for them versus legacy on-prem environments. [00:14:38] Speaker C: Okay, so you mentioned before cloud native threats, what are they? Can you expand on that a little bit more? So people are sort of a little bit clearer on what you mean by that? [00:14:45] Speaker A: Yeah, cloud native threats in my view are essentially threats, or attack paths, that are targeted typically at cloud environments. So these involve not just classic network and application compromises, but using the cloud domains, particularly, for a compromise. So I'll give you an example. I think it was quite similar to the Capital One breach. You've got a potentially exposed API. They compromise that machine, they use a cloud access key — they compromise that key and they understand who owns that key, that user. And you're obviously familiar with the concept of identity and IAM within the cloud. This is kind of like the new domain in terms of where connections happen between services and resources. So what would happen is they'll identify this key, they'll work out who owns this key. And the key — just for those that aren't aware, these are the particular mechanisms, like an SSH key.
If I want to connect over SSH into one of my VMs in AWS, for example, I'll have a key that I'll use to access that. And then I might not have any high permissions organically, but within the cloud there's really nice mechanisms, like in AWS the idea of assume role: I'm going to impersonate another role to give me admin permissions, and then I'll use those admin permissions to quickly create another machine or create another service or whatever it may be. So what threat actors are doing is essentially, okay, I've identified this key, I've worked out the owner of that key, I'm going to see what that owner of that key can do. They might not organically have any high permissions. I'm going to see where I can jump. And that's what I mentioned with that fast internal reconnaissance, because all of those API endpoints that you can use in the cloud, in AWS or GCP or Azure, you've got that all publicly documented, and then they can quickly go, okay, this user can do this. Okay, what does this role do, and this role? Hey, this role actually is able to create an EC2. Oh, this one's able to delete something. This one can access a bucket, for example. So it's those types of domains, when I say cloud native breaches — essentially using the cloud services and mechanisms to move laterally and compromise an environment. [00:16:59] Speaker C: Okay, I want to sort of switch gears slightly and talk about your thoughts on the usage of cloud and AI. And I know AI, you know, that's a term where people's eyes are starting to glaze over, but cloud and AI are tremendous enablers that allow teams to quickly transform everything from development to operations. So talk me through this. What does this look like? Now, I know that I feel like I've been talking a lot about AI on the show, but again, like, everyone has a different view, right? So I'm keen to hear yours. [00:17:27] Speaker A: Yeah, yeah, absolutely. So, you know, once again, hopefully forgive any buzzwords as I go through. AI is a buzzword, but it's a buzzword with utility, you know, unlike potentially other buzzwords. So look, people have different opinions. I talk to industry peers around this, and some kind of feel like it's, oh, you know, it's like cloud again — the developers and engineers are just going to start playing with it and then ask questions later. You know, the classic ask forgiveness, not permission. I actually have a little bit of an alternative opinion. Obviously every organization's different, but I think because of the kind of rate and velocity of threats and risk we're seeing — and once again, particularly in Australia, but globally, of course — everyone's a little bit more aware that, hey, all cool technologies come with inherent risk. It's like containers and Kubernetes. People went for it first and they're like, oh, okay, there's actually some potential risks that we need to be aware of in terms of setting it up. And then they'll ask the question after the fact. What I'm seeing with AI a little bit is everyone knows the power that it's got, everyone knows the potential it's got. You've got boards, you've got CEOs saying, I want to use this — except, hey, can we make sure we don't screw this up and introduce risk to the environment? So why I see security being an enabler here is because, for example, the people I work with in my capacity at Wiz are typically cloud security teams, cloud infrastructure teams. But we have a lot of AI security and data teams as well.
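The assume-role mechanism Matt walks through above is a single API call. A rough boto3 sketch, with a hypothetical role ARN; in practice the call only succeeds if the target role's trust policy allows the caller:

```python
import boto3

sts = boto3.client("sts")

# Exchange the current identity's credentials for temporary credentials
# belonging to another (possibly more privileged) role.
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/admin-role",  # hypothetical target role
    RoleSessionName="example-session",
)
creds = assumed["Credentials"]

# Any client built from these temporary credentials now acts as that role.
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(len(ec2.describe_instances()["Reservations"]), "reservations visible as the assumed role")
```

The same primitive that makes cross-account automation convenient is what lets a stolen key escalate, which is why mapping who can assume what is worth the effort.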
I actually had a data science team from an enterprise organization in New Zealand reach out to me on behalf of their security team saying, hey, we really want to start using things like Bedrock, OpenAI, Vertex AI from the respective clouds, even other non-cloud-provider ones like Replicate or Hugging Face and things like that. But we're not sure how to secure it properly. We know how to use the data behind it. We don't have the skill set in here. Can you help us, and help them understand what best practice looks like for AI? And so we engage with the AI, the cloud team — sorry, the security team — who have essentially started to say, okay, we really need to upskill in this, because once we've got the framework in place, the kind of, let's say, paved pathway for our organization to use AI, then security becomes not just the kind of risk reduction engine within the organization. They actually become a top line contributor because of their ability to adopt AI faster in a secure way. They're the ones that — you know, I read an interesting article the other day about the top four blockers of AI adoption. There's legal parameters, you know, legal considerations to take into account, there's privacy considerations, and then among the top four is security. And if you don't have the knowledge to do that security of those AI services and the data behind them, then you're going to be much more reluctant in adopting it. So that's where I see it: if I'm leading a security team right now, I would be going up to my board and C-suite saying, hey, I've got the parameters in place, we can start to adopt these for the broader business because I'm comfortable with our posture around this. So that's where I see it as an enabler. [00:20:39] Speaker C: A follow up question I would have is, do you think people sort of have the knowledge of AI? [00:20:42] Speaker A: Broadly speaking — and I'm speaking for myself there as well — it's such an evolving space. You know, if anyone said to me that they fully understand everything to do with AI, I would be a little dubious. What I would say is people understand the fundamentals, they understand the outcomes it can present to organizations. And most importantly, from a security perspective, what I like hearing is people that are aware of the potential risks: where is the data going and how is the infrastructure being used in our environment. That's an opinion, of course, but it is an evolving space. [00:21:16] Speaker C: Well, people view AI as like a double-edged sword. It's like, yes, we need it to be faster, more velocity, you know, reduce costs, et cetera. But it's like, oh, there's all these risks and legal and privacy concerns. How do we sort of find the equilibrium, where we need to move forward as a society and, you know, get up to speed with AI and understand it, but also be mindful then of the risk? How do we do that effectively? [00:21:39] Speaker A: Yeah, it's a good question. Like, I'm an optimist at heart in the sense that I think the outcomes of AI — you know, there's some doomsday type rhetoric around and things like that. I don't share that opinion so much. But more or less, I think the way to navigate that is to do things in a strategic — not completely, you know, you're never going to have zero risk whenever you're testing anything — but in a reasonable and risk-friendly way.
In other words, let's not just bring an AI service into our production and start using it as our big data querying set because we just want to see what it can do. Make sure you're going through some of the classic development philosophies of, you know, proper sandbox testing, then non-production testing, and then start to go into production, and make sure you've got the appropriate disclosures to customers: hey, this is an opt-in service to start with, we're going to start to use these particular AI services potentially. But certainly you don't want to get bogged down in analysis paralysis of, but what about this scenario? What about this scenario? You want to keep moving forward, as you mentioned. So I think there's always going to be that healthy balance. I think once again this is where you need high levels of collaboration between security and the data and AI teams, or whoever's driving the AI initiatives, to say, look guys, we really want to enable these services, we really see these outcomes. Let's not do the reactive standpoint of, like, hey, we've built something and then security can see it in whatever system they're using — hey, you've just exposed this and this, you've opened up this data, or whatever. Security should be early on in the design discussion. They should not be after the fact in terms of when these applications are being built. [00:23:18] Speaker C: So what would sort of happen if companies were like, no, not really keen on AI? Maybe they're conservative in their approach, they think there's massive risk and they just don't adopt it. What do you think sort of happens to those companies? [00:23:27] Speaker A: I mean, it's obviously going to depend on business to business. There's probably some business operations where AI is nice for periphery services; it's not going to help our core business. That's fair enough, they don't need to adopt it. I think it's to each their own to some extent. I wouldn't say by any means — it's like cloud and other emerging technologies over the last decade or so, just because you haven't adopted them doesn't mean you're going to fall behind. You could be a classic brick and mortar business that it just didn't make sense for. I wouldn't say any company that doesn't take up AI is going to lose out and be defunct in five years' time. And to that point, going back to the hype cycle around it, there's potentially going to be a lot of toil in terms of wasted design operations because everyone was so hyped up that it's going to solve all our problems. There's going to be fundamental business decisions around: this actually is not going to make a process more efficient, we're not going to get the customer experience that we thought we would, the AI is actually being detrimental to our customer experience, and we maybe jumped at this and didn't do the appropriate testing ahead of time. But once again, it's really hard to say. I think it's just an organization to organization decision. And I'd be focusing on not, like, how cool is this tech, but what's the actual outcome that in theory this would deliver, before kind of diving into a broad AI initiative. [00:24:49] Speaker C: Do you think like that though? Like, what's the outcome? Like, what do we get from this? I mean, it's a great point, right? That's why I sort of, you know, I love running this show. Like, what do we get from all this stuff?
Do you think people just get lost in the technology, the capability, rather than, well, if we adopt this, what do we sort of get from it? Do you think sometimes, as technologists, people like yourself who live and breathe it — perhaps, maybe I wouldn't say get lost, but perhaps, you know, things are a little bit tainted, perhaps, in your viewpoints? [00:25:14] Speaker A: Yeah, I think we're always at risk of being in the bubble of our domain. And, you know, whether you're an engineer, whether you're an executive, I think you've got very different views and you're obviously the product of your environment. So, like, engineer-led organizations might say, hey, this is the coolest new tech, let's start playing around with it, and they spin up some project that doesn't deliver anything, without even speaking to a customer — for example, what's the actual customer outcome here? Are their lives going to be changed enough for it to justify the bottom line, kind of thing? That being said, experimentation, R&D is still super important for every organization. And I understand and appreciate that you're not going to have the defined outcome when you kick off a project. But look, I think it's a mixed bag. I primarily work with organizations in Australia and New Zealand — some have got a team dedicated to experimenting, literally spun up to look at emerging technologies, and that's great if you're an organization of that scale to do so. But I think if you're a more lean organization, you've got to be collaborative. Break out of your silo — in terms of the data and AI team, hey, I'm going to talk to security and see if there's anything that they may benefit from in this initiative that we're driving, and then spreading around and saying, do our customers actually need this? Is this going to make a tangible difference to top line and, of course, bottom line as well? So once again, it's an organization to organization question. I think there's a healthy balance that needs to be had. I'm an outcome-driven person, typically, when you start to spin cycles on projects of that magnitude, you know what I mean? [00:26:56] Speaker C: So going back to the knowledge side of things now, I've got a lot of people that come and talk to me and they ask me a lot about AI and security, et cetera — executives. So how can people who perhaps think, well, we need to get a little bit more knowledgeable on this, start to look into it in a way that makes sense? I mean, a calculated risk, right? Not just diving headfirst and seeing what happens. How would you sort of approach people to address that within their company? [00:27:22] Speaker A: Yeah, one of the things — like, I hope I'm not coming across as, you know, a fear monger in terms of AI. The beauty of AI, in a lot of ways — fundamentally, kind of going back to my earlier point around cloud, for example — is the fundamentals haven't changed. I had an executive, like a CTO, say to me the other day, effectively: my view of AI is it's a good opportunity for organizations, and particularly security leaders, to say to the non-security parts of the business, if we want to as an organization adopt AI more, it's a good opportunity for them to say, hey, well, let's eat our vegetables a little bit here.
Let's get the foundational setup of our cloud environment in place, because once again, as I mentioned, we're not reinventing the security risks here. I look at AI as just another platform, an extra platform on top, where effectively you've got to make sure you've got the underlying infrastructure. So you have to have visibility — you have to have visibility of the services that you're using as well as what compute they're running on. And more importantly, you've got to have technologies in place that can say you've misconfigured this, this is public, this has a high privilege attached to it, it's got sensitive data. It's nothing new that AI is introducing. It's the fundamentals. These are the kind of levers that security leaders can use to help with that AI adoption. [00:28:40] Speaker C: Okay, so I want to zoom out a little bit more. And as we know, you live in Australia. Now, Australia has had its fair share of breaches over recent years. So maybe talk me through how you've seen the threat landscape change. I'm keen, I'm keen to get into this. [00:28:57] Speaker A: Yeah, I mean, Australia, you know, there's a few themes, I would say, that are kind of coinciding. One, Australia is a fairly early adopter and more and more organizations have significant workloads in the cloud. And with that, there's obviously a broader attack surface when it comes to cloud related workloads. And as I mentioned earlier in the call, cloud threats — the growth of attacks there is becoming more or less exponential over the past few years because of that rate of return that attackers can get. It's one simple misconfiguration from developers that have accidentally forgotten to close down an API or to delete or abstract sensitive data. These types of misconfigurations have made Australia a much easier target. In terms of the threat landscape, I think Australia — we're obviously a fairly advanced economy. We've got significant organizations here. It's a very attractive target from both nation states as well as just classic hacker groups, like we've seen with a lot of those significant breaches. So I think just as a general target from a monetary perspective, Australia and New Zealand are obviously quite high on the list, along with the likes of Europe and the US. Of course, in terms of the types of threats, once again, we've seen much more organized attacks and very intelligent and cloud-aware attack mechanisms as well. So I think that the landscape, broadly speaking, is we've got an attacker base that are very aware of how to compromise cloud environments and are very aware of the types of organizations that Australia has — industries, I should say — and the potential types of impacts that they could have. [00:30:38] Speaker C: So what does it sort of mean for the companies now? Like, as you know, we're trying to constantly get our heads above the water, do better than the cybercriminals, and, you know, it's difficult, right? So now you're saying they're more cloud-aware, which means that companies need to be thinking even faster. Now, again, with everything you've just explained, things are running with a lot more velocity now than they ever have before. So what does this sort of mean now moving forward, as we've talked about — more companies are thinking about cloud first, et cetera, adoption of it, security within the cloud, etcetera. So what are your thoughts then on how companies can, you know, ultimately not feel that they're the victim of, you know, being breached?
[00:31:20] Speaker A: Yeah, and I think it comes down to a few things you need to introduce. Like, one, Australia is a large country, but we've got a huge skills shortage when it comes to cyber, and that's cyber generally. Then you start to think about cloud security expertise — there's a big shortage. I think the government alone has something in the order of — I can't remember what the statistic was, it's in the thousands in terms of the skills deficit when it comes to cyber in Australia broadly. I think it was a report from last year or late the year before, post a lot of those significant breaches. So we're not flooded with resources and expertise locally — and this is broadly, not just Australia, just the world generally. So you start to think about, in the emerging technologies, how many experts are there in Kubernetes security, and then AI security? So what that means for me, generally speaking, is we need to have pragmatic security — in other words, really focused on risk and not just alerts in isolation. You know, we've got to be pragmatic about what is the likelihood of this being compromised and what is the impact. When you have organizations shifting their mentality from here's all these alerts, anything that is of a, you know, significant severity, that's a high-level priority — they often are one-dimensional. And it kind of goes back to the top of the call when you asked me what am I seeing in terms of cloud security and the misconceptions: people think cloud risks are one-dimensional. You've got to be bringing in multiple different points of telemetry to say, okay, this is actually not just a public machine, it's a public machine with sensitive data. So you've got your likelihood and your impact. So pragmatic security means we're never going to have a hundred percent patched environment. We're never going to have an environment with zero CVEs or zero misconfigurations. There's always going to be inherent risk. So it's about prioritizing both our security team's time as well as the application and infrastructure owners' time to say, these are the things you need to focus on first, because these will have the most pragmatic reduction in risk. And then the other layer on top of that, because of the impossible outcome of having zero risk in the environment, it goes to that preventative measure that I mentioned earlier. Preventing the impact of a compromise is imperative — for organizations to say, look, we're always going to have something that might be externally exposed, let's make sure that it can't go anywhere, having the appropriate isolation and segmentation. So it's about almost preparing as an organization: I don't want to have my last line of defense as my only line of defense to respond to an attack after the fact. Let's be proactive and preventative. And then once you get to that nice benchmark of configuration, then I really see a nice theme of — and once again, forgive the buzzword — shifting left and introducing the preventative controls in a pipeline, or in guardrails in the developer's lifecycle. Because there's a huge order of magnitude value in doing that earlier rather than after the fact. So I suppose the broader answer there is pragmatic outcomes and accepting that you will have risks, but focusing on where the highest ones are. [00:34:35] Speaker C: Okay, there's a couple of things in there which are interesting.
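One way to picture the likelihood-times-impact prioritization Matt outlines is as a toxic-combination filter rather than a severity sort. A toy illustration in Python, with entirely made-up findings and field names:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    resource: str
    internet_exposed: bool   # likelihood: can it actually be reached?
    critical_cve: bool       # likelihood: is there a way in?
    admin_identity: bool     # impact: where could it go next?
    sensitive_data: bool     # impact: what does it hold?

def is_toxic(f: Finding) -> bool:
    # A single dimension is usually just noise; the combination is the attack path.
    return f.internet_exposed and (f.critical_cve or f.admin_identity) and f.sensitive_data

findings = [
    Finding("vm-frontend", True, True, False, False),
    Finding("vm-payments", True, True, True, True),
    Finding("db-internal", False, False, True, True),
]

for f in findings:
    if is_toxic(f):
        # only vm-payments combines exposure, a way in, and blast radius
        print("fix first:", f.resource)
```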
So why do you think people think it's this one-dimensional? Where does that sort of come from, that thinking? [00:34:42] Speaker A: I think it's just the initial generation. So, like, when cloud first really started growing in the market, the new layer that was introduced was of course cloud services — so your PaaS and IaaS services. And naturally the first thing that the cloud providers, as well as third party providers, introduced was, okay, here I'm going to introduce configuration suggestions for you. Simple things like, hey, you've just created a VM in the cloud provider, you've got the disk unencrypted, you should encrypt that. Similarly, like, hey, you've just created an S3 bucket — did you know you had that public? You should not have that public. Now often these are inherent little risks, but often what they are is just a compliance state. It's like the classic example of a VM with a public IP address in the cloud: you'd be forgiven to think, hey, this machine is public. That's actually not necessarily true. So it's not so much a false positive, but it's a false risk, just a compliance view in my opinion, because the VM might have a public IP but it might not actually be behind an Internet gateway. So there's no actual exposure. And then you see the broader impact: if you send that to a developer saying, hey, you've got a public machine, they'll say, no I don't, it's not behind anything. And then that's where the friction starts to be introduced a little bit with those teams. Whereas if you go down the other path of, regardless of whether it's got a public IP, hey, I can prove that this is reachable through the entire infrastructure path — there's multiple dimensions to that, so therefore it's a real risk, as opposed to that. So I think it's just a natural evolution. To go back to your original question, why do people think it's one-dimensional? It's because that was the tooling that was first introduced when it came to cloud — how to configure things correctly — as opposed to looking at the other elements, such as, what about the identity that's attached to it, what about the data that it can read, the blast radius exposure. So I think it's just a maturity of the market and changing from that mindset. [00:36:42] Speaker C: I'll touch quickly on the preventative side of things. You're right. Like, you know, with the whole cyber security strategy, et cetera, the government's coming out and saying it, companies are saying it. The part that gets me though, Matt, is when a company's been breached, it's like, oh, we did everything we could. Well, obviously you didn't, because there's a breach. So are people taking it seriously though? Because anyone can say it — I work in media, right, it's always going to be, we've got to get the best image out there. I can say that because I am asking organizations for statements post their breaches, okay. And some of the pushback I've had — the part that really gets to me is, oh, we're going to share about, you know, our failures? I've asked, and people don't want to share. So I don't know where this whole theory is about sharing and letting other people know — that's false. People don't want to talk to me when I've asked them the question. [00:37:28] Speaker A: I'm seeing a shift, hopefully, in the industry about sharing those best practices. I think people are — you know, there's always going to be that natural stress and anxiety that comes after a significant breach.
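The "public IP but no internet gateway" distinction Matt raised above is straightforward to check for yourself. A simplified boto3 sketch, with a hypothetical EC2 instance ID; it ignores security groups, NACLs and the VPC's main route table, so treat it as an illustration of the idea rather than a complete exposure check:

```python
import boto3

ec2 = boto3.client("ec2")

def looks_internet_reachable(instance_id: str) -> bool:
    """A public IP alone is a compliance finding; real exposure also needs a
    route from the instance's subnet to an internet gateway."""
    reservations = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"]
    instance = reservations[0]["Instances"][0]
    if not instance.get("PublicIpAddress"):
        return False

    # Route tables explicitly associated with the instance's subnet.
    # (Subnets with no explicit association fall back to the VPC's main
    # route table, which this sketch does not look up.)
    route_tables = ec2.describe_route_tables(
        Filters=[{"Name": "association.subnet-id", "Values": [instance["SubnetId"]]}]
    )["RouteTables"]

    for table in route_tables:
        for route in table.get("Routes", []):
            if route.get("DestinationCidrBlock") == "0.0.0.0/0" and \
               route.get("GatewayId", "").startswith("igw-"):
                return True
    return False

print(looks_internet_reachable("i-0123456789abcdef0"))  # hypothetical instance ID
```

Combining a check like this with identity and data context is what turns a compliance alert into the multi-dimensional risk picture described earlier.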
And obviously, people are reluctant to admit potential fault — once again, whether it's a conscious error or whether it's an unconscious error that they've made in the fundamental setup of their infrastructure. I tend to agree with you. I think sharing between organizations — and many organizations have done this well — doing a really clear post mortem of what did we do wrong, and here are the things we learned from it, so other organizations don't repeat the same mistakes, I think would be great. But it's an evolving industry, and hopefully organizations become more comfortable in admitting potential errors and the broader industry is better for it. I will say there's many groups that I've joined, whether it's CISO groups or general security practitioner groups, and often in these discussions it really does become quite a healthy conversation of, like, hey, we really screwed this part up, I really recommend doing X, Y and Z, it really helped our operation. So I am seeing, particularly in a lot of the conversations I'm in, Karissa, that organizations are becoming more willing to share their best practices as well as their potential faults. [00:38:48] Speaker C: So, Matt, do you have any sort of closing comments or final thoughts you'd like to leave our audience with today? [00:38:54] Speaker A: I'm really optimistic about the industry. I think certainly across, you know, the clients that I work with in my capacity at Wiz, I'm seeing a really nice shift of organizations, particularly at that executive level, who really need to be the ones kind of driving these initiatives to bring these teams together. Security — you know, everyone's in security. Everyone has to be security conscious. You know, we're all in that same team. And to facilitate this, what I'm seeing to be a really healthy way to do it is for security and application teams to be cohesive and, more importantly, like on the same team — the security team are not there to make your life difficult. So it's about coming together and having that shared source of truth. And then more importantly, security then becomes that internal expertise engine, the consulting function within an organization to say, hey, we've got all these best practices for you to do. We don't need to hand hold and we don't need to wave the stick around, but we're here if there's something that you really need to escalate. So one of the things, I suppose, just to summarize: I think the shift, particularly with cloud and AI, is going to be a real driver for the type of culture within organizations where security is part of those early stage application and infrastructure design cycles, and therefore they're going to be able to build faster with security embedded, meaning that you've got a much more efficient way and you don't have to stop the whole show to patch an incident, but you can prevent it in an earlier way. Bit of a broad statement, but hopefully the takeaway is that, I suppose, it is possible. I've got many customers that have a really smooth self-service operation where more than the majority of the users of the tool, such as Wiz — the majority of our users aren't actually in security, Karissa. The majority of our users are actually developers, DevOps and infrastructure teams. So it is possible — that's the takeaway. [00:40:50] Speaker B: This is KBKast, the voice of cyber. [00:40:54] Speaker C: Thanks for tuning in. For more industry leading news and thought provoking articles, visit KBI Media to get access today.
[00:41:03] Speaker B: This episode is brought to you by MercSec, your smarter route to security talent. MercSec's executive search has helped enterprise organizations find the right people from around the world since 2012. Their on-demand talent acquisition team helps startups and mid-sized businesses scale faster and more efficiently. Find out [email protected] today.
