March 27, 2026

00:41:44

From Elastic{ON} Sydney 2026 – KB On The Go | Mandy Andress

KBKAST


Show Notes

Data is exploding. Environments are getting noisier, and the line between observability and security is basically gone. Search isn't just a feature anymore; it's infrastructure. It's how you see, how you detect, and ultimately how you defend. From AI-powered detection engineering to unified visibility across logs, metrics, traces, and security telemetry, we're officially in a world where if you can't search it in real time, you can't secure it.

This bonus episode features Mandy Andress, CISO at Elastic, live from Elastic{ON} Sydney 2026. As Australia navigates its unique, sector-led approach to AI regulation, Karissa Breen and Mandy Andress explore the challenges—and opportunities—facing CISOs on the front lines of the agentic AI revolution.

Mandy Andress is currently the CISO of Elastic and has had a long career focused on information risk and security. Prior to Elastic, Mandy led the information security function at MassMutual and established and built information security programs at TiVo, Evant, and Privada. She worked as a security consultant with Ernst & Young and Deloitte & Touche, focusing on energy, financial services, and Internet technology clients with global operations. She also founded an information security consulting company with clients ranging from Fortune 100 companies to start-up organizations.

She is a published author; her book Surviving Security has two editions and is used at multiple universities around the world as the textbook for foundational information security courses. Mandy has also tested and reviewed information security products for multiple publications and authored the weekly InfoWorld security column. She has been a sought-after expert in the field, speaking at signature security conferences such as Black Hat and Networld+Interop. In addition, she has taught a graduate-level Information Risk Management course at UMass Amherst in the College of Information and Computer Sciences.

Mandy has a JD from Western New England University, a Master's in Management Information Systems from Texas A&M University, and a B.B.A. in Accounting from Texas A&M University. Mandy is a CISSP, a CPA, and a member of the Texas Bar.


View Full Transcript

Episode Transcript

[00:00:10] Speaker A: What's up everyone? It's KB and I'm on the go at Elastic{ON} Sydney for 2026. Data is exploding, environments are getting noisier, and the line between observability and security, it's basically gone. Search isn't just a feature anymore, it's infrastructure. It's how you see, how you detect and ultimately how you defend. From AI-powered detection engineering to unified visibility across logs, metrics, traces and security telemetry, we're officially in a world where if you can't search it in real time, you can't secure it. And I'm here talking directly to the people building that backbone, like Mandy Andress, Chief Information Security Officer at Elastic. Stay with me as we dive further into the conversation with Mandy that you won't get on stage. This is KB on the go from Elastic{ON} Sydney. Let's get into it. [00:01:04] Speaker B: So Mandy, welcome back. Today I want to discuss with you what's happening with CISOs on the front line of Australia's agentic AI transition. I know you've got an upcoming conference, a lot going on, a lot to be presented, but I really want to start there. Let's paint the scene: Australia has opted for sector-led oversight instead of heavy AI legislation. So what comes to mind when I ask that question? [00:01:28] Speaker C: Well, first, thanks for having me back. Happy to be here and chat with you again. CISOs and AI and agentic AI, top of mind for all of us. And it's bringing a lot of discussion on what is that balance between moving fast, adopting AI across the organization, but doing so in a safe way, in an environment and with technology that is very immature. And so it's finding that right balance.
And from a CISO perspective, because technology solutions within the AI space are pretty immature, it's bringing us back to focus on the fundamentals of security and ensuring, for example with an agentic approach, that there are both guardrails on what the agent programmatically can do, but also a second level of guardrails on what access that agent has. So even if it wanted to try to do something different, it wouldn't have the ability to do so. And then of course the more challenging aspect is those agents that continue to learn and evolve and try to change their own permissions and hack other agents to get them to do what they want. It's a very quickly evolving space, one that is both exciting and sometimes scary, but it's a fun world to live in these days. [00:02:48] Speaker B: So would you say, obviously you're based in the US but you're looking at different regions, and you said before around the maturity, how does Australia sit against, perhaps, the US? And I know it's interesting because depending on who I ask, I get different answers. Some people say that it's actually more mature here than it is in the US. So I'm curious to understand what that means from your perspective. [00:03:09] Speaker C: From my perspective, reading through the AI programs that Australia has, compared with the US focus and compared with the EU AI Act, I see Australia really sitting in the middle of the paradigms. So the EU AI Act being very prescriptive, being very "prove to me first that everything is safe and secure before you can use it." The US is: yes, here are some general standards, but organizations, it's on you to find the balance and manage the risks.
And within Australia I find a good balance of, overall, here are some high-level guardrails, but not being overly prescriptive, still allowing that innovation and that speed from an organizational perspective, allowing organizations to make those decisions for themselves, and really framing that balance as a way to help Australia move forward quickly in the AI world. [00:04:07] Speaker B: It's so interesting that you say high-level guardrails, not super prescriptive. So how are companies in Australia really framing that then, do you think? There's a lot of gray area that's like, oh, okay, well let's just figure it out for ourselves, let's see what happens. I know people are saying it's still very early days, so it's not like they have a blueprint to look back on. Is it still maybe in more of that formative sort of place at the moment, because we don't have a lot of data to really go off as of yet? [00:04:35] Speaker C: It's still early days. It's definitely still very formative in the AI space. And the technology is moving forward and changing so rapidly that even if we have an approach in place today, it's likely not even applicable three months from now, if not possibly tomorrow, based on how things are evolving. And taken with that, certainly heavily regulated industries, so usually think financial services and others, regulations are not currently written in a way to really support AI. So those industries are also struggling to find: how do we maintain compliance with the laws and regulations we must follow while adapting and taking advantage of any new technology or benefits where applying AI within the organization would make sense? And a lot of that comes back to kind of the proof. So the visibility, the transparency, understanding, if you have generative AI, if you have agents acting autonomously, why did they do that? What's the decision tree, what's the logic that they followed?
And being able to have that transparency so you can go back and explicitly answer the question of who did what, or why was this happening, and do you need to make adjustments, do you need to retrain models and things like that. So having that overall visibility and transparency is core to moving forward successfully. [00:06:04] Speaker B: And then Mandy, given your role, just focusing dead-on on CISOs for a minute before we move on, what are you hearing as the general chatter or concerns or apprehension? I know it varies, but is there any sort of common trend that you're hearing here in Australia? [00:06:23] Speaker C: In general, there is a lot of both skepticism and concern. Skepticism in that we understand there are significant benefits that we can both see in our own roles and within the organization by applying AI technologies. But is the technology there today? Is it going to be able to do that? Is it going to bring more challenges than solutions right now? And what do we need to do about that? So that's one. I think fear in the respect that the organizations we work for want to rapidly implement and adopt AI, but how do we do that within our roles and allow the organization to do that safely? Key example: a lot of the solutions right now don't have what you would typically define as enterprise-level controls. So think identity and access controls. MCP servers. Connecting to MCP servers is a big focus to bring in more data and have agents access different types of data. Oftentimes the MCP implementations have zero identity and access controls. You connect with an identity and you have access to all of the data within that MCP server. That scares us from a security perspective, because you may not want that agent or that individual to have access to all of the data within that environment. And so it's looking at how do we balance that if the technology solution providers aren't going to include it?
Do we add layers of controls? Do we add different elements within our kind of AI stack that allow us to create that control structure around it? Among CISOs, I'm not seeing specific concerns that we shouldn't be using AI or shouldn't be looking to apply it. It's just how to do our jobs successfully while allowing our organizations to move forward with AI adoption. [00:08:14] Speaker B: So you just mentioned before layers of controls, and I know, again, we're all starting to figure this out. Would you say that trying to implement layers of controls then slows down the premise of AI? Because so many people are like, we're using AI so people become more effective and more efficient at their job. But then do you see that as counterintuitive? [00:08:33] Speaker C: It could be. And I think it's certainly counterintuitive if you look at every new technology implementation or application of AI in an organization as its own discrete effort needing its own control set; yes, you would be working, I think, much more slowly than the organization wants. For me, the way I look at it is I try to extrapolate, look at the bigger picture, and understand what are the key controls, irrespective of it being a specific AI technology or AI implementation, what are the key controls I need to have in place that allow my organization to leverage AI safely? And if we look at that, we get back to the standard paradigms that we've been talking about for years in security: least privilege, zero trust. So it's really forcing us to rethink how we implement those principles well in our organization now to help us support AI. [00:09:32] Speaker B: So I was recently talking to a CISO and they were sort of saying it's at the identity layer, that's the biggest risk at the moment. Would you agree with that?
Because, like, obviously I run this podcast, I'm speaking to people like you every week, and every time I speak to them, that comes up as a big problem. Where would you say, fundamentally, is the biggest issue that you're seeing, or what's top of mind, given your pedigree and your role? Is it at the identity level, or, as other people are trying to claim, is it other things like policies, et cetera, or lack of leadership? So I'm just curious to understand where you sit on that front. [00:10:08] Speaker C: For me, it's identity. Identity is the control plane of AI. It's the control plane of agentic AI. It is where threat actors are focusing, because we don't do identity well today. We have accounts that have been compromised, whether it's accounts for humans with passwords, or API keys and secrets if you're looking at microservices and more cloud and SaaS implementations. And we're just now saying, oh, we're going to add all of these agents and we're going to have this exponential increase in the number of identities that we have, but we're not going to change our approach and how we're managing them. We're creating a disaster for ourselves. And that's why I often say right now that it's going to get worse before it gets better, because we're going to start taking advantage of agents and agentic AI and deploying them through our organizations without making some of the necessary changes in how we manage identities. And so we'll have challenges. We'll have agents that are doing things we don't want or expect them to do because they have too many permissions. And as we learn what generative AI and what agents are able to do, it will help organizations reframe how to approach identity. So that's when I go back to: traditionally it's been, if you try to do least privilege and someone can't do their job, they can't do what they need to do,
so we just continue to open up access until they're successful. If we do that in an AI and agentic world, we could create some very significant challenges. So it's now going to get back to: we need to give the agent only what it needs. It has its own identity, and that identity should only have the access that it needs to do its job. And we will really need to be able to figure out what that access needs to be. It won't be good for organizations to say, oh, the agent needs to do this now, I just need to open up the broader permissions. Because the more permissions you give an agent, the more it figures out what it can do for itself, because agents are non-deterministic. The biggest challenge is, as humans, we aren't necessarily always able to anticipate what an agent might decide to do. If we're not providing strong guardrails, then we could have some consequences and some impacts that we don't anticipate that could be very, very serious for an organization. [00:12:27] Speaker B: So Mandy, when you say give the agent only what it needs, how do you figure that out? In my experience working at a bank, I remember getting calls from the identity team saying, hey, you've requested access to this system, why do you need it? Justify yourself. Or, you've got too much access, now we're going to decommission your access. Which obviously is a bit more manual, and this is going back a while ago, but how do people determine that now? Because I feel like even back then people couldn't determine it manually. So are we using that manual effort to apply it to the agents, when maybe it's not going to be 100% accurate and there are going to be gaps? [00:13:02] Speaker C: Welcome to the conundrum of the world of AI and security organizations. I say yes to all of that. It is, I would say, in a kind of go-forward approach.
So if it's an organization that's implementing new systems and AI, they'll have a better path, because they'll be able to define upfront what that access and what those roles should be. It'll be very challenging for organizations looking at legacy technology and legacy systems, where maybe it's systems that were developed many years ago, they don't have all the detailed documentation, the folks that built those systems are no longer in the organization, and they don't have an understanding of what exactly the access roles and permissions are granting. And so it's getting into a lot of testing. It's getting back into the visibility of tracing and reverse engineering some components. It's keeping a lot of human in the loop, because you don't necessarily have the detailed understanding. So it's minimizing the pure autonomy of agents to help balance that risk. It's definitely another pain point. [00:14:02] Speaker B: I hear what you're saying, because if I just focus on a bank that's got heaps of legacy tech, lots of systems, it's like there's no documentation, the guy that used to work on these left 20 years ago, we can't figure out who the system owner is. I don't even know how people are going to weave their way through this maze when companies now are getting so competitive, like, okay, we need to start leveraging agents because if we don't, our competitor is. And I'm seeing it more and more now that smaller companies are overtaking big players because of how competitive and fast they are. So even development teams are being pressed: hey, we need to do releases faster and we just need to get stuff out the door to become competitive. So how are people balancing this? I know it's not an easy question to answer, I'm just keen to hear what's on your mind.
[00:14:46] Speaker C: A lot of organizations are looking at that as human in the loop, ensuring that there's still a pause point, a place where there's someone that can look and make sure, yes, before we take this explicit action, it's okay. Sometimes that's not sufficient, in that processes need to move faster. So this is where I see some organizations creating agents that are managers of other agents, using an agent to be that kind of human-agent in the loop, looking at what is this agent doing and is it doing what it should be doing, and having a broader ecosystem of agents that are working together, creating that infrastructure and creating that visibility and decision tree. [00:15:31] Speaker B: So can I ask you more of a rudimentary question, just based on behavior? I was in a meeting yesterday and I was finding, like, this is not as good for a particular thing I was talking to the person about. I'm like, why do you think that is? And they said, AI, KB, it's all about AI. People are getting lazier. People can't even read a book now; they've got to do the summarized version. So going back to what you were saying, do you think that even if there is a human in the loop, they're still going to be relying on some form of AI to give a general consensus of what's happening, and then try to make a decision off that? Perhaps I'm just looking at behavior, and now even people's brains are atrophied, they can't remember things like they used to. [00:16:09] Speaker C: And for me, it's not so much that we can't remember things. To me, it's that we are now operating at a scale and a speed beyond human capacity. So even if we wanted to remember and we wanted to do things manually, we wouldn't be successful. So bringing it back, as a CISO, looking at security: massive amounts of data coming our way, massive amounts of log events, security events, how do we make sense of that?
In both the pure volume of data that we need to analyze, but also a rapidly expanding threat landscape. One, we can't keep up with the threat landscape of what we need to be looking for, and secondly, we can't process as humans all of the data coming at us. So leveraging AI to help us make sense of the data is one path. The analogy I often make is to automobiles and driving. Growing up, my dad was doing all sorts of things on his car and it never really went into a body shop or automotive shop. He was able to do everything. Much to his dismay, I know absolutely nothing about my car. I know how to drive it, and I don't really know how it works. So if something goes wrong, I take it to an expert. And I think if we look at it from computers and evolution, you know, I used to build my own computers early in the PC era. I don't do that anymore. If I have a computer, it works, I know how to use it. And I think we'll see a similar transition with AI and a similar evolution. There will be those experts that understand the underlying workings, but those of us that use it will know how to use it in our jobs. And we'll see significant, I would say unknown, but significant transformations in the workforce and in the world over the next few decades. Should be pretty exciting. [00:17:57] Speaker B: And so then just to clarify, on using AI, perhaps from a behavioral point of view, not necessarily about remembering, that was just an example. But would you say that people are still going to get to the point where they're just getting their summary, they're not going to read it word for word, they're just going to plug it into some LLM and get a bit of a high-level understanding of what's happening? Because it's a lot more volume. There is that human in the loop, but they're like, hey, I've got all these tickets coming in that I need to action. I need to use a bit of AI to sort of speed up that process.
So do you think that will still be there, or do you think people will go through it quite meticulously, much to your point before, using the analogy with your dad and the car, and understanding the mechanics of how it works? [00:18:37] Speaker C: We will certainly be leveraging AI. And I actually think using AI in that way will be helpful. Since the advent of the Internet and social media, our attention spans continue to decrease; we skim things, we don't focus for large amounts of time. I think AI actually helps, in that it's able to give us a better summary. It's able to potentially pull out the key messages, instead of us just skimming it as humans and thinking we're pulling out the key messages. So using AI as a tool to help us get a better sense. And just from a productivity perspective, there's lots, whether it's personal workforce agents or things that are looking through your email making sure you're not missing any actions or any specific call-outs, because as your inbox grows significantly you might lose track of things, and just having something that helps: hey, you have this email that says you need to get back to them on this. I think it can be much more helpful than we think it can be. [00:19:28] Speaker B: So given what we just spoke about, would you say that effectively makes the CISO the country's de facto AI regulator? How does that look, would you say? [00:19:42] Speaker C: In most organizations, it's a combination: the role of the CISO, from information security, cybersecurity, infrastructure, control environments; legal, from a policy and regulatory perspective, and often privacy resides within the legal organization; and then the third component, IT in general, technology, of what is the technology stack that our organization wants to use, how do we want to leverage new technology into our existing systems and infrastructure?
For me, it's those three areas and those three roles working in close partnership, helping organizations navigate the kind of AI challenges of today. [00:20:23] Speaker B: So many people say to me, oh, the CISO has so many things to do, and now we're sort of adding to it. How do they feel about that? It's like, okay, we've got a new stream of things that you need to oversee and look after and somewhat be responsible for. How does that sit with them? [00:20:39] Speaker C: There are increasing and expanding mandates for CISOs. And that's where I always take a step back and try to look at the bigger picture. If I just take everything that's coming into the CISO organization, yes, it could be kind of unwieldy and overwhelming to look at each individual thing. But if I take a few steps back, look at it from a 50,000-foot view or so of just what are the key things that I need to be focused on as the CISO that will help address these major areas, what that generally comes down to is: one, fundamentals are key. Whether that's staying current with patching, whether that's understanding assets and overall inventory, it just goes back to the things that we've always said in security are important, but are hard and are boring. And so we always look to a new technology, a new solution that can help us solve this problem, but we're just adding kind of band-aids around things. And now I think with AI and the speed and scale and overall complexity, it's going to bring us back to, we're going to have to refocus on those fundamentals and figure out how to solve some of those challenging problems. I think asset inventory is a key one, needing to look at not just what's your physical inventory, but what's your virtual inventory as it relates to cloud, and what's your agent inventory. Agents are now assets. How do you know what agents you have in your organization?
What are they doing, why are they doing that, and having that whole space of things. So I think it's not so much individual things coming our way; it's how we take in those individual things and build them into the broader ecosystem of what we as CISOs are responsible for. [00:22:26] Speaker B: And then we mentioned patch management. People have been talking about doing patch management properly for like 20 years, and people still can't do it. And it's not as easy as it looks in the book, effectively, or now the summarized AI version of how to do it. So if people keep talking about the basics, it's the basics, but then we're adding on all these other complexities, like agents, what are they doing, how are they doing it, why are they doing it, all the things you just mentioned. You mentioned the operative word before: conundrum. Is this just blowing that conundrum out now? Because we can't even do the basic stuff, and now we've got all these other complex sort of tasks and things going on. What happens now, really, with patching specifically? [00:23:04] Speaker C: Part of the challenge has always been: what do I really need to patch? What does my broader environment look like? We've talked a lot about defense in depth over the many decades in security, having layers of controls where, if one is bypassed, theoretically you have one or more additional controls in place to prevent something catastrophic from happening. Oftentimes patching is the deepest control in an environment; you have multiple layers ahead of that. How do we understand what those are? A couple of things that AI is really able to help with: one, just from a pure application perspective, there's reachability analysis, just understanding the code and how an application operates to know, yes, it may be vulnerable to the specific application vulnerability, or at least it looks like it might be.
But is it really? Or is it the way this application's running, these 10 things have to happen before you could ever exploit this vulnerability, and so the risk is lower for that organization? Or, if you look at it more from an infrastructure perspective, you have to get through five controls before you could potentially exploit this specific missing patch. And AI is very good at that, much better than humans, in going through and trying to find what are those paths that you could exploit and take advantage of. And I think that'll give us as security practitioners a path perspective, more detailed attack paths, just to help us better understand where to prioritize and how to prioritize. Because organizations look at, oh, I have 100,000 missing patches. You're never going to just deploy a hundred thousand missing patches. It's how do I prioritize, how do I understand where they need to be implemented, how do I understand where there are potential interruptions or availability or kind of production environment issues, and where do I have other controls and compensating controls in place to help balance that? It goes back to what I talked about earlier: having highly complex environments, as humans not being able to put all the pieces together, and being able to leverage AI and AI technology to help us see that picture in more detail. [00:25:17] Speaker B: And so I just want to move on now. We've spoken a lot about the advancement of AI, good and bad; people have different versions. And we're talking about the conundrum, which is obviously accelerating AI adoption. And now you've heard people say, well, we're defending with AI because we're being attacked by AI. So how do you see this playing out now? Because now I feel like people are even saying in my interviews, oh, it's really good, but we really need to slow down.
I was at a conference in Canada like two weeks ago and they're like, oh, this is really good, but we really need to slow down, because no one actually really knows what's going on. So I'm curious to understand, because, as I mentioned before, companies still need to adopt it. We're seeing big organizations invest a lot of money into AI. People should be focused on more strategic tasks rather than the manual, monotonous, labor-intensive things that they're doing. So I totally understand. But how is this going to play out moving forward? Is it that the bad guys are going to take two steps forward and we take one, or will we be on par with them moving forward? I'm just really curious, because again, depending on who I ask, I get very different answers. [00:26:23] Speaker C: I think a couple of different things are going to play out in that space. One, we can't go back. As much as we may want to do things more slowly, between demands of end users, demands of investors, and demands of defending against what threat actors are doing, it's moving forward, and we need to figure out how to work within the speed that it's operating. And in doing that, again, I think in the short term as defenders, from a security perspective, it's going to get worse before it gets better. Threat actors are quickly learning how to expand their use of AI technology and how they leverage it in creating events and incidents. AI technology itself continues to improve, to where a year, year and a half ago it was better phishing messages and very, very targeted phishing campaigns; then it was self-propagating and self-morphing malware that could adapt quickly to the environment it's in. Now AI and agents are able to truly act as a threat actor, as a hacker, and get into environments and pivot and understand the context and do everything that a kind of red-team threat actor would do, but do it in potentially minutes versus hours, days, months.
And so, how we as security practitioners deal with that: one, it's going to lead to a paradigm shift in what security programs look like and how security programs operate. And that'll happen over time, probably, I would anticipate, within the next five to 10 years; we'll look back and what we know of the security program today will look very different. So that's one piece of it. The other piece is, as we learn how to better leverage AI technology within our organizations, the key thing that we're going to build for ourselves is context. What we miss today from a security perspective is that, as practitioners, we often don't understand the full contextual picture of our environment: what does that infrastructure look like, what are the employees doing, what is critical in our overall environment, and just having that holistic picture. And the combination of the general technologies we have available to us today and what AI brings is: we have massive amounts of data, and now we have tools and technology with AI that help us make sense of those massive amounts of data. What are the patterns, what are the behaviors that we're seeing, what are the trends, what questions should I be asking? Oftentimes we don't even know what questions we should be asking, because we haven't necessarily anticipated all the types of behaviors and activities that will go on. So what I do see in the future is that the benefits will shift to defenders, because we will have a full contextual understanding of our environments. And so we'll be able to very rapidly respond, we'll be able to very rapidly adapt, and we'll be comfortable doing that more autonomously in the future. And so when a threat actor tries to do something, yes, there are a lot of comments that it'll be agent versus agent, threat actor and defender. And yes, we will get to that.
It will be agents trying to attack an organization, and agents changing controls or morphing to prevent that attack; we will get there. But by having the full contextual understanding as defenders, we will be able to react at machine speed, which is what we're not able to do today. We still have to have humans in the loop, we still have to bring them in, because we just don't have the ability to pull that full context together. And threat actors, that's what they pull forward. Taking full advantage of open source, public information, they're able to get into an environment, pull in all sorts of data, and use their tools to build that context for an organization. And that's the shift that I see: as defenders, we will finally be able to have that holistic contextual picture of our environment. [00:30:21] Speaker B: Okay, so this is really interesting. So when you say the benefits will shift to defenders, and I know you don't have a crystal ball, but how long do you think until we're at that stage in the industry? Would you be comfortable saying? Because everyone's like, well, no, at the moment we're behind and, you know, this is what's happening. But you're sort of saying that this will happen in due course. Is due course 10 years? Five years? I know it's hard. I just really want to try to paint some timeline. [00:30:49] Speaker C: For me, it's more like 10 years to have a strong contextual understanding. The way I talk about it is, 10 years from now, I want to look back at today as the dark ages of security, to where, with the visibility and the approaches that we'll have in place in the future, we look back and wonder how we were ever able to do our jobs in any fashion, because we didn't have all of the capabilities that we'll have then. So that's my ideal state. [00:31:17] Speaker B: So then would you say, like, retrospect's always a good thing, right?
Because it's like, oh, how did anyone do their jobs when we had typewriters and we had to write by hand? Do you think we will get to that point? So if I run an interview with you in 10 years, you can look back and say, yeah, those were the dark ages. It's just that, you know, no one really knows at this point. Yes, we can predict things, and we've got reports and there are analysts saying certain things, but no one really knows at the end of the day. And then do you think that it's going to become a moot point, because you're saying machines can just defend super quickly? Do you think that will just exhaust cyber criminals, to be like, well, there's no point doing this because it's being defended super quickly? I know that sounds sort of dumb, but I'm just curious: why do something if you're not going to get a result anyway? Or will it open up other issues? [00:32:02] Speaker C: I think the latter. It'll open up other issues. Social engineering, human behavior. There will always be issues or limits or vulnerabilities in technology that the threat actors will take advantage of. Threat actors are very creative. If there's a way to make money or achieve their objectives, they're going to figure out how to do it. If you look at the general history of security, from the Internet perspective it started with the network. We solidified the network a bit, so then they moved to the endpoint. We solidified the endpoint. Then they went back to social engineering. Threat actors just go to wherever the weakest point is, and we're continuing to move that. That's not going to change. What those weakest points are will evolve and be things probably completely different than what we anticipate today. [00:32:47] Speaker B: So it's going back full circle: we used to do it with humans, then we leveraged technology, and now we're at the point where, if we're attacking and you're defending, we've canceled each other out.
Now we'll have to find a new avenue, like social engineering. So it's going to get to that point, potentially. I mean, I know you don't have all the answers. I'm just trying to map it out in my mind, because we have so much emphasis on technology. I'm even hearing, as an example, that kids today, that generation, are spending a lot less time on social media. It feels like the pendulum is swinging fully the other way. So I want to move on now, and I want to zoom out to the world and paint a picture: Europe is saying, prove your AI is safe before deployment. The US is saying, here are the standards, industry, figure it out. And I'm paraphrasing, and I know you said before Australia sort of sits in the middle. Which one do you think is, not better, but the more optimal position? Do you think saying, here are the standards, figure it out, is better? That's obviously a lot more leeway. But then Europe is sort of saying, justify yourself before we do something. I'm just curious to see what approach other countries can potentially take as a result. [00:34:01] Speaker C: If I look at the bigger picture, the key thing that's driving forward momentum, to me, is speed. How fast can you adopt technology? How fast can we evolve technology? How fast can I be ahead of my competitors? So everything's speed, and organizations are trying to move forward very quickly while technology is evolving very quickly. If you look at the EU approach, where you need to prove that your technology is safe and secure before you implement it, to me, what's the line? Because you can prove what was created yesterday was safe and secure, but a week later there's been an evolution, and now it's no longer. So where does the line need to be drawn as technology evolves very quickly? But [00:34:46] Speaker B: then people are spending more time proving rather than doing the thing.
[00:34:49] Speaker C: Exactly. So the balance is, in security we often talk of a risk-based approach. And the key for me in a risk-based approach is who's defining what risk is acceptable. Sometimes it's an organization, sometimes it's individuals, sometimes it's a regulatory body, sometimes it's a government. So for me the question isn't, well, what does this country have? It's, what risks are we trying to address and how should we best address those risks? In finding that balance, it'll be different in different countries, different in different industries, and different in different organizations, in the level of risk they're willing to accept, the risk appetite that we talk about in security. So I think we'll find some norms over time. Right now this is all new, and we're all trying different things to see what works for each of us. [00:35:41] Speaker B: Do you think as well that Australia being in the middle is unusual? And I say that because, traditionally speaking, people have always said Australia is a very reserved market, even when US vendors come in here. Historically, over the last 15 to 20 years, it's been really hard to sell here, because it's not like the US, which is a bigger market, and there's a bit of a different mindset culturally. So do you find it unusual that Australia is in the middle? I thought perhaps they'd be erring more on the side of caution than sitting in the middle. Or would you say that's changing now? Because I also read a report, Mandy, that said Australian organisations are adopting AI the fastest. And that just blew my mind, because I thought maybe things really are changing culturally here from what was happening before. [00:36:26] Speaker C: Yeah, from my perspective, I've been involved with the Australian market and organizations in Australia for a handful of years, and I agree: when I first started engaging, I did feel that Australian organizations were a few years behind from a security controls perspective.
Just in how security was or was not integrated within their organization. I have felt that shifting, and I definitely see and feel today that Australian organizations are near the leading edge of both adopting AI and thinking about how to adopt AI. And I see that more broadly. I mentioned social engineering before; I think Australia is at the forefront in its social media approach, as an overall societal measure. I think that's been a key step the country has made in looking at new technologies: being at the forefront of defining a national AI plan, giving support to organizations, and putting a lot of investment into being ready, creating the skill sets and the infrastructure to support that. So I have seen and felt that shift you described as well. [00:37:32] Speaker B: And so, you said the US, Europe, and Australia each have their own approach to how they're going to do things. Do you think there'll be one country, though, perhaps, that does something like the North Star and the rest will follow? So, for example, with the whole social media ban here in Australia, other countries are going to implement it. Now, whether that's right or wrong is up for people to decide, but governments in those countries think it's a good idea, so they're going to follow. Will we see the same sort of pattern happen with this, or do you still think it's highly individual? Because there's a lot more complexity to these things than just turning social media off. [00:38:07] Speaker C: I think ultimately it's highly complex and there'll be different approaches. I do think that different countries, different organizations, different industries will try different approaches, and some will expand, some will be more niche and apply in specific areas. Over time, we'll move to some norms and kind of the law of averages.
Right now, it's so new, people will be trying all sorts of different things. Some will work, some won't; some will expand, some won't. So I don't specifically see a certain country becoming the norm, at least at this point. I think it'll grow into an amalgamation of the different things that different countries, different industries, and different organizations are trying. [00:38:48] Speaker B: And then just quickly, a couple more questions before we wrap up. When you say some things will work and some won't, do you think, as a result, something quite crazy could happen, like being caught in a crossfire? Maybe it's a risk that someone couldn't foresee. Again, we don't have all of the answers; we try to map them out, we try to look at the attack vectors. But things do pop up, things go wrong. Do you think that will happen, but then we can correct it, perhaps, to say, well, that didn't work out, now we have to make a better move? [00:39:14] Speaker C: I do anticipate there will be some fairly significant events that happen, because we can't necessarily anticipate the full breadth of implementation or the potential ramifications. That's where, going back to the beginning of our conversation about guardrails and minimizing the potential impact, it's about looking at how we're deploying AI technology specifically to help minimize the impact when something does happen. I do think the biggest challenge is that we can't necessarily anticipate today what tomorrow's challenges will be. So we need to be very adaptable, and we need to focus on the resilience of an organization. I talk a lot about anti-fragility: the idea that under times of chaos and stress, your organization is actually able to get stronger. It's about really focusing on those core concepts and what they mean for your organization, so you're able to withstand and be more successful in the world that we live in today.
[00:40:19] Speaker B: And then lastly, Mandy, what do you think moving forward? I know we're at the earlier stages of 2026, and again, you're not Nostradamus, you don't have all the answers. It's just more, what do you think? How is the year going to unfold in your eyes? [00:40:31] Speaker C: 2026 is the year of agents: where can we apply them, and what can they do to help us? I think the initial focus is a lot on development, using agents to develop solutions and to code, and then a lot of focus on personal productivity, or personal agents, with folks starting to look at how they spend their time and what the things are that they spend time on that they don't want to be doing, or that they could have an agent do on their behalf. And I think a lot of experimentation, a lot of trial and error, a lot of folks starting to understand what this AI technology could mean to them and trying a lot of things out. As we get to the end of 2026, I think we'll find some common themes, some areas where it's now fairly standard practice to use AI to handle something, or AI as an input or a piece of a process or an approach. [00:41:36] Speaker A: And there you have it. This is KB on the go. Stay tuned for more.
