[00:00:00] Speaker A: What really distinguishes the secure creators from the prone enterprises, if you like, is how they're able to really demonstrate cyber's impact on innovation and value creation. I think the most successful CISOs that we see here in Australia and globally are very adept at weaving that narrative of how cybersecurity is able to add value back into the organization.
[00:00:30] Speaker B: This is KBKast.
[00:00:31] Speaker A: Are they completely sized as a primary target for ransomware campaigns, security and testing and performance risk and compliance?
[00:00:39] Speaker C: We can actually automatically take that data and use it.
Joining me today is John Hare, Associate Partner, Cybersecurity, from EY, and today we're discussing how cybersecurity teams can transform to accelerate value from AI. So John, thanks for joining, and welcome.
[00:00:57] Speaker A: Thanks, KB. I'm super excited to be with you. Thank you very much.
[00:01:00] Speaker C: Okay, so John, you've got quite an interesting background, which I'm sure you'll touch on throughout the interview today. But as we know, attention currency is a big thing, so I want to start with your work on Secure Creators. There was a report, which we can link in the show notes as well; perhaps give a little bit of an overview of what it is and what it means.
[00:01:23] Speaker A: Thanks, KB. So Secure Creator is a term that we coined off the back of our Global Cybersecurity Leadership Insights Study, which we conducted last year, when we spoke to 500 C-suite and cyber leaders globally in billion-dollar-plus companies. And we used statistical modeling to identify those organizations that are really the best in terms of achieving security outcomes for their organizations. So we call these Secure Creators, and they tend to get better outcomes in terms of some things that are very measurable, like mean time to detect, mean time to respond and the number of security incidents. But also, in a more qualitative sense, they get better results in terms of the integration of cybersecurity throughout the organization, for example. And particularly, something we're going to talk about today is cyber's impact on innovation and value creation for the organization. I think there are really three things that we see these organizations doing differently. So they've got very specific strategies for managing each of the different types of complex attack surfaces that we have nowadays: cloud, on-prem, third parties, for example. Secondly, they're very successful in integrating cybersecurity across the organization at the board and C-suite level, the workforce at large, and of course in the cyber team. But thirdly, and I think this is one thing we're really going to get into today, they're very quick to adopt emerging technologies such as AI and generative AI, and they utilize automation to orchestrate their cyber technology, streamline processes and do more with less.
[00:02:57] Speaker C: Okay, so there are a couple of interesting things in there. So when you said adopt emerging technologies, would you say in Australia, and I know this sounds really basic, but I think it's important from the conversations I've had, we're just slower to adopt emerging technologies? And I say that sort of in inverted commas. But what are your thoughts on that?
[00:03:16] Speaker A: Okay, so look, I think in Australia I'm very fortunate in my position, quite privileged, I suppose. I get to work with a lot of the leading banks and insurers in the Australian market, solving some of their hardest problems. That's the fun bit. And also helping them to create value for their business and find those opportunities to do so. So we are seeing really significant interest in AI, how it can be used for security, and how cyber can be used to help accelerate the innovation process as well. I think the leading CISOs in Australia are definitely on top of this. Globally, and Australia will be no exception, even the most leading organizations are quite advanced in adopting AI in security, and they are now beginning to focus more on how they can help the organization adopt AI securely and really speed up that innovation process and create that value.
[00:04:11] Speaker C: So what do you mean by advanced?
[00:04:13] Speaker A: So if you think about how the secure creators are using AI, they definitely are.
That's not a new thing, and it's definitely here and probably here to stay. So 62% of those secure creators are already using, or are in a late stage of adopting, AI and machine learning, and that's against 45% of those organizations that perhaps aren't getting quite as good results, what we call the prone enterprises. So it's not new, it's not just hype. EY analysis shows a really sharp rise in AI-related cyber research, patents and investment over the last nine years or so, and AI is now in the majority: 59% of all new cyber patents have got some form of AI in them. And since 2017, AI has been the number one technology explored in cyber research. And I think where organizations are using this AI technology is where there's a real imperative to do so. AI really lends itself to parsing very large, heterogeneous sets of data in real time in a way that humans simply can't. So, to your question, incident detection and response is a really good example of where we're seeing Australian organizations harness AI in security. There's a nice quote from one of our respondents to the most recent survey that we did on AI and security, from one of the big banks: they said that they are ingesting 10 billion data events every day, and you simply can't do that without ML and AI. And they're using ML and AI to automate, in fact, around 30% of the bank's security response. So we can see the most successful, advanced organizations in Australia are harnessing AI within their security teams to try to stay ahead of threats, and it's also helping them to do more with less. Obviously we've got a cyber skills shortage, so AI is really helping to solve for part of that shortage.
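The scale argument here, billions of events a day with around a third of responses automated, can be illustrated with a toy triage sketch. The features, thresholds and routing labels below are invented for illustration; a real pipeline would use trained ML models over far richer telemetry, not hand-set rules.

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str          # e.g. "auth", "dns", "endpoint" (illustrative)
    failed_logins: int   # illustrative feature
    rare_process: bool   # illustrative feature

def triage_score(e: Event) -> float:
    """Toy risk score; stands in for a trained ML model's output."""
    score = min(e.failed_logins, 20) / 20   # cap the brute-force signal at 1.0
    if e.rare_process:
        score += 0.5
    return score

def route(e: Event, auto_threshold: float = 1.2) -> str:
    """Auto-respond to high-confidence events; queue the rest for analysts."""
    return "auto-contain" if triage_score(e) >= auto_threshold else "analyst-queue"
```

The point of the sketch is the routing split: the higher the fraction of events that can be scored and contained automatically, the more analyst time is freed for work that actually needs human judgment.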
[00:06:19] Speaker C: So how would companies identify if they were a secure creator, for example?
[00:06:25] Speaker A: Yeah, well, that's a good question. So, as I said, there's thinking about mean time to detect, mean time to respond, number of security incidents, those very clear, measurable things, I suppose. But I think what really distinguishes the secure creators from the prone enterprises, if you like, is how they're able to really demonstrate cyber's impact on innovation and value creation. The most successful CISOs that we see here in Australia and globally are very adept at weaving that narrative of how cybersecurity is able to add value back into the organization. Now, I think that what we're seeing at the moment in terms of rapid advances in technology around AI and generative AI really presents a significant opportunity, perhaps a historic one, for security teams to become trusted partners and help the business in at least two ways. First, maximizing value creation, helping the organization to achieve the potential of the AI tools that they're looking to implement; and secondly, really helping the business to confidently deploy AI in a way that's secure. So the opportunity here, if you want to be a secure creator, I suppose, is to move the perception of the cyber team from how it's perceived in some organizations, as just a cost center, or worse still, a blocker, the department of "no", as it was put to me before, to really being an enabler of the technology transformation that the organization is going on. I'd say that's the trait of the most successful CISOs: they're very adept at creating a value creation narrative around the work of the security team.
[00:08:10] Speaker C: So John, say you're not a secure creator; what are you? What are people then?
[00:08:17] Speaker A: So the terminology that we used divides organizations into two. There are those secure creators, which really are achieving the best results, and those for which we coined the phrase prone enterprises. They perhaps have longer mean time to detect and mean time to respond, and perhaps aren't quite as adept at demonstrating the value that the cyber team can add to the business.
[00:08:40] Speaker C: Now, would you say, in your experience, the majority are secure creators, majority meaning the companies you're dealing with, et cetera, or prone enterprises?
[00:08:51] Speaker A: So globally, we divided the organizations that were part of that survey. We interviewed 500 CISOs and C-suite members from billion-dollar-plus companies, and the top 42% of those companies we put in that top banding, if you like, of secure creators. And then the remainder we termed the prone enterprises; that's the term we coined for those.
[00:09:23] Speaker C: So the majority is still prone enterprises from what you're saying?
[00:09:26] Speaker A: Yeah, that's right, yeah. Because really, the purpose of this study is to focus on, okay, how do you get the best security outcomes for an organization? What are those practices that really distinguish the companies that are getting the best security results, the best security outcomes, and are also the most adept at adding value back into their business through accelerating innovation?
[00:09:50] Speaker C: Okay, so you're obviously an associate partner at a big four. So what would be your strategy or advice for getting those people in that 58% into that 42 sort of percent mark? How do you encourage people and show them that this makes sense? What would be your sort of approach?
[00:10:08] Speaker A: So I think at the moment, what we're going through is this really significant change, I suppose, in technology around us, and there are really two things to focus on when we talk about AI. One is AI for cyber: think about how you can use AI in your teams to get better results. But also think about how you can add value to the business through securing AI as well. So we know there are a number of risks that organizations need to be thinking about, both from adversaries and from employees and the ways they might go about using AI insecurely. From the perspective of the bad guys, we're seeing attacks like prompt injection and data poisoning. We have researchers demonstrating how you can inject malicious voice commands, pretty much inaudible to the human ear, into AI-powered voice assistants. And then around employees, I think there's really significant risk there, particularly to data confidentiality, where employees are inadvertently breaching compliance or regulations when using AI, exposing sensitive data into AI models. In particular, there are privacy and data retention issues where organizations are trying to generate outputs and insights from customer data, and perhaps that data is being used in ways that go beyond the purpose for which it was originally collected and for which consent was given. So clearly those development environments need to have very strong controls and be well protected as well. And of course, that threat of shadow AI, ungoverned implementation of AI, is a real and present threat. So taking all those things into consideration, there are a number of pieces of advice that I'd probably give to organizations at the moment.
What we're seeing really good organizations doing is looking to automate some of the team's most manual tasks. So think about auditing those processes to really identify the manual tasks in which AI can be brought to bear. Secondly, follow the data. AI and ML investments are going to be most profitable where there's cyber data density, if you like: where there are large amounts of heterogeneous data that you're trying to get actions from, perhaps with professional judgment as part of that process. So in areas like security operations, AI and ML already have a foothold, and we're seeing emerging areas like continuous threat exposure management, and even what are likely to be some of the next cabs off the rank, where we're going to see significant improvement in processing capability through AI and ML. And I think it's really important that organizations, once they've figured out what their requirements are, stay up to date on the emerging applications of AI in security, so they can jump on those opportunities to really improve processes. But again, to your question, what do these organizations need to do to lift their game? I think that whole idea of cyber for AI is super important as well, and there are a number of things that security teams can be doing to help their business counterparts. For example, organizations should be thinking about establishing AI principles and guardrails to support experimentation with AI inside the organization, and there are all sorts of useful materials that you might use to go about doing that, such as NIST's AI Risk Management Framework or the EU AI Act. Secondly, I think it's really imperative that CISOs and their teams help the business to get use cases to market faster. And the aim here is to make sure that secure by design is absolutely the fastest route to market in your organization.
So that means developing pre-configured and pre-sanctioned architectures, integration patterns and tech stack components to support business use cases. And then I guess my other tip around cyber for AI would be to embed cyber professionals really early into that AI use case identification, procurement and governance process. That early-stage insertion allows you to integrate cyber into those innovations, commensurate with the sensitivity of the data and the business function.
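A minimal sketch of the pre-sanctioned patterns idea: an AI use case clears automatically only when it uses an already-sanctioned integration pattern and its data sits within that pattern's approved sensitivity tier. The pattern names and tier ladder below are assumptions for illustration, not any specific organization's catalogue.

```python
# Pattern name -> highest data-sensitivity tier it is pre-approved for.
# Both the names and the tier ladder are illustrative assumptions.
SANCTIONED_PATTERNS = {
    "internal-rag-vpc": "confidential",
    "public-llm-api": "public",
}

TIERS = ["public", "internal", "confidential", "restricted"]  # low -> high

def approve(pattern: str, data_tier: str) -> bool:
    """Secure-by-design gate: sanctioned pattern, sensitivity within its limit."""
    limit = SANCTIONED_PATTERNS.get(pattern)
    if limit is None:
        return False  # unsanctioned (shadow AI) pattern: send to security review
    return TIERS.index(data_tier) <= TIERS.index(limit)
```

The design point is that the sanctioned path is the fast path: anything on the list clears instantly, which is what makes secure by design the quickest route to market rather than a blocker.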
[00:14:38] Speaker C: Okay, so there are a couple of things in there which are really interesting, and I want to explore them a bit more, because I do believe this is important for people to understand in more fidelity. So going back to what you were saying, would you also say, in addition to all the strategies you listed out, it's just going to take time? And I really hate that as an answer. But again, that 58% of prone enterprises still outweighs the secure creators, so that shift is obviously going to take a little bit of time to occur.
How long do you think that'll take? And I know it's such a terrible question, but I'm just always curious to see the adoption rate from organizations.
[00:15:20] Speaker A: Yeah, okay, well, I think that's a really good point, Carissa. Putting timeframes on these things is always tricky given the rapid change in technology, but these things are happening very quickly around us. And I guess what I'm suggesting here is to think now about where you've got processes that are very manual but could perhaps be sped up in the future, and really understand those requirements. Then, as the technology improves and the vendors catch up with those various use cases, if you're ahead of the market and understand what you need, you can take really quick advantage of those new technologies as they emerge. But I guess it's quite interesting to think about some of the things we're going to see coming down the pike, if you like. So, as I mentioned before, cyber-dense data, I think, is where we should be looking. No matter how smart humans are, even armed with the best spreadsheets, because many of us in consulting like those, machines tend to be much better at parsing large amounts of heterogeneous data in real time. I think that gives you a clue as to where we're going to see more and more use of AI in security in the future. Continuous threat exposure management is one of those examples that I think we're going to see emerging in the coming months, with more and more organizations adopting it. The problem we're trying to solve here is that security teams need to think about a whole host of exposures that they need service owners and system owners to take care of: vulnerabilities, pen test findings, expired security certificates, end-of-life systems, et cetera. It's quite a long list, and there are a lot of things that security teams are asking the technology teams to do.
So the real challenge is: how do you offer some sort of prioritization in this big list of things that you're asking service owners to do? What's the next best action, if you like? At the moment a lot of that is being solved manually, or there's a large amount of professional judgment, if you like, in making those calculations. A much better model, and we're seeing some organizations move to this, is to use AI-driven tools that can take all of that heterogeneous data and combine it with even more sets of data, such as cyber threat intelligence and even the value of the data on the systems in question, and enable security teams to say with confidence what the next best action is. It's a really data-driven approach that doesn't rely on professional judgment, but really says: these are the top 10 actions, in order of priority, that I would like you as a system owner to take. And I think we're getting to a stage where it will be able to say, if you do the first three, the risk buy-down will be X dollars; if you do the first 10, the risk buy-down will be Y dollars. That's a really big advancement, when we go beyond professional judgment to the much more data-driven approach that AI enables. And automation, of course, is the big story around AI and the opportunity it presents for organizations to improve their security and the functioning of their security teams. The most effective CISOs, I'd say, are already looking for areas where AI-enabled automation is most suited to replacing manual processes. In a couple of conversations during the interview process for our most recent research, we had one CISO say he anticipated a world in the not-too-distant future where they're no longer writing playbooks in the SOC.
There's going to be an AI engine with the context to really understand what the next best step is for an analyst, and it would recommend that step, or better still, would actually perform it. Another example, I think, is threat hunting. We conducted an interview with another senior security figure in an organization who thinks of threat hunting as one of those things where increased automation in the SOC frees up humans to do more valuable tasks, and threat hunting would definitely be one of those. But at the moment that's a really quite manual process: it involves a lot of coding, developing scripts, running them across the environment. So the situation that particular person envisioned was automating large parts of that process, so you can identify malicious activity and respond much, much faster.
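The next-best-action idea John describes can be sketched as a small prioritisation routine: score each outstanding exposure by the expected dollar risk it buys down, then rank. The likelihoods, impacts and reduction factors below are invented for illustration; a real tool would derive them from threat intelligence and asset data.

```python
def risk_buy_down(likelihood: float, impact_dollars: float, reduction: float) -> float:
    """Expected annual loss avoided if the remediation action is taken."""
    return likelihood * impact_dollars * reduction

# (action, annual likelihood, impact in dollars, fraction of risk removed)
actions = [
    ("patch CVE on payments host", 0.30, 2_000_000, 0.8),
    ("rotate expired certificate", 0.05, 500_000, 0.9),
    ("decommission end-of-life server", 0.20, 1_000_000, 1.0),
]

# Rank by expected risk buy-down, highest first: the "next best action" list.
ranked = sorted(actions, key=lambda a: risk_buy_down(a[1], a[2], a[3]), reverse=True)
top = [name for name, *_ in ranked]
```

Summing the buy-down of the first N actions gives exactly the "do the first three and the risk buy-down will be X dollars" statement from the conversation, expressed over these toy figures.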
[00:19:52] Speaker C: So would you say, I mean, there are a lot of key drivers in that adoption, in becoming that secure creator, but would you say, from what I'm hearing and what you're explaining, it's just removing a lot of the manual process? Like, no one wants to do trivial, menial tasks repetitively, right? So would you say that if we can have a reduction of, arbitrarily, like 60% of manual tasks, it means people can be freed up to do more critical and strategic tasks? Would you say that's going to be what perhaps spurs people on?
[00:20:20] Speaker A: Yeah, absolutely. And I think that's the imperative as well, Karissa. If you think about the cybersecurity skills shortage that we've got, that really creates the imperative for greater automation, because we just don't have the people otherwise to do what we need to do. So that's really critical. And of course, what that does mean, and we just spoke about threat hunting, is that moving humans up the value chain is really, really important. Another good example of that to me would be third-party risk assessments, which tend to be pretty manual. Very often there are lots of long questionnaires that we give out, and perhaps interviews and things like that with third parties. But what we can do with AI models, particularly large language models, is automate that document-heavy process of reviewing all those documents. You can actually get those ingested into the AI model and then interrogate them, through a chatbot, perhaps. And we've done some experimentation with this at EY that produced some great results, where you can interrogate it and ask, for example, is there a software development security lifecycle process? And it will enunciate what that looks like. And then you can dig down to the next layer and ask, what is the mandatory training for software developers, perhaps? And it can come back and give you that next level of detail. I've seen this work beautifully, actually, where the AI model is able to, if you like, really fill out that long Excel spreadsheet of questions that would otherwise take a very long time for humans to fill out, and the answers come almost instantaneously with the AI model. And of course, that frees up humans to do more valuable tasks, whatever they may be.
So perhaps you can get into deeper conversations with the third party that you're dealing with. Perhaps you can do things like have security-team-to-security-team connections with those third parties and really achieve better results for both organizations.
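The questionnaire automation described above boils down to retrieval plus a question loop. In this sketch the retrieval is plain keyword overlap standing in for embeddings, and `ask_model` is a placeholder for whatever LLM interface is actually used; the vendor documents and questions are invented for illustration.

```python
def _words(text: str) -> set[str]:
    """Lowercased words with trailing punctuation stripped (toy tokeniser)."""
    return {w.lower().strip("?.,") for w in text.split()}

def retrieve(question: str, chunks: list[str]) -> list[str]:
    """Toy retrieval: chunks sharing at least two non-trivial words with the question."""
    terms = {w for w in _words(question) if len(w) > 4}
    return [c for c in chunks if len(terms & _words(c)) >= 2]

def ask_model(question: str, context: list[str]) -> str:
    """Placeholder: a real system would send question plus context to an LLM."""
    return context[0] if context else "No evidence found; escalate to human review."

vendor_docs = [
    "All developers complete mandatory secure coding training annually.",
    "Releases follow a secure development lifecycle with peer review.",
]
questionnaire = [
    "Is there a secure development lifecycle process?",
    "What mandatory training do developers receive?",
]

# Fill the questionnaire from the evidence, flagging gaps for a human.
answers = {q: ask_model(q, retrieve(q, vendor_docs)) for q in questionnaire}
```

The fallback branch matters as much as the happy path: questions with no supporting evidence go back to a human, which is what keeps the analyst "up the value chain" rather than out of the loop.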
[00:22:21] Speaker C: Would you say as well, with where you're sitting, people appear to be rattled still by AI because again, maybe mainstream media is positioning things in a way where it's doom and gloom and, you know, AI is going to replace their jobs? Like, would you say that there's a reservation from people perhaps because they're fearful that, well, if I do all these things, John, I may have to make 30% of my team redundant, you know, even though we know it's a tool and all of those things. But are you still seeing that with people in these types of roles that you're dealing with?
[00:22:50] Speaker A: So, no, I think when I'm speaking to security teams and CISOs at the moment, they are definitely seeing that opportunity for automation, and they're seeing the opportunity to do more with data to produce better results. We spoke about threat exposure management, for example, and how you can remove professional judgment from that and come up with really good, solid instructions on how to reduce risk within the organization for colleagues in the technology teams, for example. So look, in terms of AI for cyber, I think that's been really well received. Generally people see it as a huge net positive, because it's going to enable them to do more with less and be quicker at responding to and identifying threats. I think that's a thoroughly good thing. Of course, all security teams are thinking about what that actually means for them, and they're finding their way through, thinking about what real use cases they've got and what technologies are out there to help. And clearly that's going to be an ongoing journey which is going to take some time. There probably is a little bit more reservation and trepidation, I suppose, about making sure that organizations are securely adopting AI in their businesses. I think that does cause concern in many of the security teams, and we are seeing, amongst the secure creators, that they're perhaps a little bit slower with thinking about cyber for AI as opposed to AI for cyber, but they're definitely now getting on board that train. But there is a real acknowledgement of some of the really inherent risks in getting this stuff wrong, even if that's completely inadvertent on the part of employees.
But where there aren't clear guidelines, where secure by design is not the default, there is a real risk of exposing sensitive data through AI models. There is a real risk of privacy and data protection issues where perhaps data isn't as secure in the development environments as it should be, or where it's being used for purposes that are in breach of privacy regulation, for example. So I think we are going to see that the best-performing organizations, the best-performing CISOs and their security teams, are starting to think about cyber for AI and how they can help to secure the journey that their organizations are on and avoid some of those pitfalls.
[00:25:24] Speaker C: How about we speak a little bit more generally and let's look at all executives because I know that we focus heavily on CISOs for example. But perhaps let's understand more about AI being a threat and an opportunity and how do companies sort of combat this double edged sword and how do we get people to see that there is a benefit in the adoption to AI versus not adopting it? Maybe a little bit more general sort of understanding would be good to know.
[00:25:55] Speaker A: That's a great question. Amongst the leading organizations in Australia, we are definitely seeing a great deal of consideration by the execs and business teams on how they can harness AI in different ways to perform better services for their customers. Definitely part of that is around process improvement, much as we've talked about in cyber as well; there are lots of opportunities for doing that throughout the organization. But also, in terms of that data piece, it's about getting better, deeper, more helpful insights from the data that you have on your customers so that you can service them even better, really give them things better, faster and more appropriately. But of course, there's risk attached to those things, particularly when you think about getting insights from customer data; there are obviously privacy and security concerns and considerations around that. So I think what's really required here, going back to that idea of organizations as secure creators, is that dialogue and really close relationship with the security team. They are your allies in this, or they should be your allies, and you should challenge them to be as such. They should be there to help you to accelerate and be bold in taking steps in AI to improve your products and the way you treat customers, by giving you the confidence that you are doing things securely, in ways that aren't going to have bad results in terms of data breach, data leakage, et cetera. So it's really important that that close relationship with the security team is fostered. And if I just think about the microcosm of my own practice, how we are approaching AI and how we're helping organizations, particularly the security teams, with this, that's changing along these lines as well in terms of those partnerships.
And let me give you a specific example. Just a week or so ago, we met with a really key banking client to talk about their strategy for the next 12 months. And when we came to that meeting, we thought about, okay, who are the right people to bring? And it wasn't just myself from the security team; we also brought some from the data team and some from the AI team as well. Now, the interesting thing was, when we were talking to that organization, there wasn't a "this is supposed to be a cybersecurity conversation, why have you brought these other people from other teams along?" It was absolutely natural and appropriate to bring all those parts together, because the conversation is broader than perhaps that traditional notion of security. And that's what's going to distinguish the really successful organizations going forward. In that case, that particular security team and their leader will be successful because they're able to articulate the value of cybersecurity to the enterprise in this new sort of AI era. And I think that's definitely what's going to mark out the good security teams. But in terms of the broader executive team, KB, as you referred to there, and how they can overcome some of their fears and concerns, I think that is about creating that really close partnership with the security team, challenging them to do better, challenging them to give you the confidence to be bold as you step forward with your AI strategies.
[00:29:25] Speaker C: Do you think as well, you mentioned earlier in our interview around adding value back, do you think more general executives are perhaps going, oh well, how much is all of this going to cost? You know, I've just spent all this money on cybersecurity, for example. And I know that's changing, with the conversation around cybersecurity becoming an enabler and AI reducing costs, et cetera. But do you think there's that element too? For example, if you're a CFO, your focus is how much money the business is making and how much it's spending on certain things, right? So would you say, if I just give that example, people are very focused on how much this is going to cost, John?
[00:30:08] Speaker A: Absolutely. We can't escape the economic reality that we live in. Of course, cost is always going to be a huge concern, and arguably a growing one at the moment. And when we talk about securing AI and cost, there are a few things we need to be thinking about. Yes, there are opportunities for getting better outcomes and, hopefully in time, getting outcomes that require less human input. That's a good thing, because it can help reduce cost over time. But let's face it, the threats aren't going away; they're arguably getting worse and bigger as attack surfaces grow, et cetera. It does mean, though, that we can tackle some of those new problems as technology changes, and we can move humans up the value chain to do more around that. But the other thing that really interests me at the moment, in the conversations we're having with some of our mature clients, KB, is cyber risk quantification. So, trying to think about the measures that we're taking: how much security is enough? How much security spend is the appropriate amount? How do you right-size that budget? We spoke a little bit about that fairly tactical view of figuring out the next best action in terms of continuous threat exposure management. And if you can put a value on data, et cetera, inside your organization, that gives you the ability to start thinking about risk buy-down: by taking certain measures, what is the dollar value of the reduction in risk that you're achieving? I think you can also take a more strategic view of that in terms of risk quantification, and really start thinking about: what are the top three or four cybersecurity scenarios that we most credibly face as an organization, and what is our exposure to those in dollar terms?
I think once you understand that, it can be quite a powerful thing, because you can then model those scenarios against your current control state, and then against a control state that's been improved by the investments you're making over time. Looking at the difference between those two scenarios gives you a real view of your return on investment. We're increasingly seeing organizations, at least at the more mature end of the spectrum, trying to move towards this way of thinking. That's a journey that's going to take time. I think we're going to see more cybersecurity teams doing this internally, and over time they're going to start to include some of it in their reporting as well: demonstrating how the actions they're taking are helping to buy down risk, and how changes in the threat environment, changes in business practice, et cetera, are influencing the amount of value at risk the organization carries from a cybersecurity perspective. That's a really exciting part of the approach to cybersecurity that we're going to see evolving over the coming months and years.
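The risk buy-down framing described here can be sketched as a simple expected-loss calculation. The figures, scenario names, and likelihood reductions below are purely illustrative assumptions, not anything from the EY study; the point is just the shape of the arithmetic: exposure under the current control state, exposure under an improved one, and the difference compared against the cost of the controls.

```python
# Hypothetical sketch of cyber risk quantification: estimate dollar exposure
# for top scenarios under two control states; the reduction in expected
# annual loss is the "risk buy-down" weighed against the control investment.

scenarios = {
    # name: (annual likelihood, impact in dollars) under the CURRENT controls
    "ransomware": (0.20, 5_000_000),
    "third_party_breach": (0.15, 2_000_000),
    "cloud_misconfiguration": (0.10, 1_500_000),
}

# Assumed likelihoods after planned investments (illustrative only)
improved_likelihood = {
    "ransomware": 0.08,
    "third_party_breach": 0.09,
    "cloud_misconfiguration": 0.04,
}

def expected_annual_loss(pairs):
    """Sum of likelihood * impact across scenarios."""
    return sum(p * impact for p, impact in pairs)

current_eal = expected_annual_loss(scenarios.values())
improved_eal = expected_annual_loss(
    (improved_likelihood[name], impact)
    for name, (_, impact) in scenarios.items()
)

investment = 400_000  # assumed annualized cost of the new controls
risk_buy_down = current_eal - improved_eal

print(f"Current expected annual loss:  ${current_eal:,.0f}")
print(f"Improved expected annual loss: ${improved_eal:,.0f}")
print(f"Risk bought down:              ${risk_buy_down:,.0f}")
print(f"Net benefit vs investment:     ${risk_buy_down - investment:,.0f}")
```

Real cyber risk quantification methodologies (such as FAIR) use distributions rather than point estimates, but the ROI logic, comparing modelled exposure before and after an investment, is the same.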
[00:33:16] Speaker C: What about going forward? What are your thoughts, or what's your hypothesis? There's no right or wrong answer; it's just that you're at the coalface each day speaking to clients, and you're obviously quite well versed in the report we've discussed today. So do you have any thoughts on what we can expect going forward?
[00:33:35] Speaker A: Yeah, so look, let's think about this in two ways. First, AI for cyber. At the moment organizations are definitely starting out on this journey, but there's a whole lot more to come, and where we're going to see it is around greater automation, and then around really deriving value and benefit from those cyber-data-rich sources of information and getting valuable outcomes and insights from them. Detect and respond has definitely been the early adopter of AI; now I think we're going to see more in those cyber-data-dense activities around identity and access management, and also around exposure management. Then, with cyber for AI, I think in the good organizations we're going to see much more of a partnership with the business. You're going to see cyber professionals really embedded in the business decisions made around the way forward: use case identification, governance processes, the establishment of AI principles and guardrails, helping the business get those use cases to market really quickly, and making secure by design the fastest route to market in their organizations. Those that aren't able to do that are either going to adopt AI much more slowly in their businesses or do so in an insecure way and, unfortunately, suffer the consequences.
[00:35:16] Speaker C: And do you have any sort of closing comments or final thoughts you'd like to leave our audience with today?
[00:35:21] Speaker A: Perhaps some of the advice I often give. If security teams want to be in that Secure Creator bracket, they need to be thinking about both cyber for AI and AI for cyber. On the AI for cyber piece, what good looks like going forward is, number one, thinking about the most manual tasks you've got at the moment and whether they can be automated. Secondly, really following the data: the investments you make are going to be most profitable where there's that cyber data density, whether that's security operations or threat exposure management, and the next cab off the rank is likely to be identity and access management. And finally, staying on top of the emerging applications of AI in security so that you can match them to your requirements. Then, on the cyber for AI piece: embedding cyber professionals into AI use case identification, establishing principles and guardrails to really support that experimentation with AI, and helping the business get use cases to market while making secure by design the quickest way to market, to really reduce friction and make that the best way forward.
[00:36:46] Speaker B: This is KBCast, the voice of cyber.
[00:36:50] Speaker C: Thanks for tuning in for more industry leading news and thought provoking articles. Visit KBI Media to get access today.
[00:36:59] Speaker B: This episode is brought to you by MercSec, your smarter route to security talent. MercSec Executive Search has helped enterprise organizations find the right people from around the world since 2012. Their on-demand talent acquisition team helps startups and mid-sized businesses scale faster and more efficiently. Find out
[email protected] today.