October 04, 2024

00:47:53

KB On The Go: ISACA Beyond Tomorrow

Show Notes

In this bonus episode, KB is on the go at ISACA’s Beyond Tomorrow Conference in Melbourne. KB sits down with industry leaders, including Erik Prusch, CEO of ISACA, who discusses the organization’s expanding global influence and its pivotal role in career development for its 180,000 members. They also delve into the critical topics of AI’s transformative power across sectors, the intricacies of third-party risk management, and the importance of mastering basic cybersecurity practices. Erik is joined by fellow experts Jamie Norton, Chirag Joshi, Francine Hoo, Kate Raulings, Richard Magalad, Sam Mackenzie, and Wayne Rodrigues, who bring their own expertise and stories to the table, sharing the newest developments and challenges in cybersecurity and critical infrastructure.

Erik Prusch

Erik is an experienced CEO and board director for major tech companies. Prior to joining ISACA, he was most recently chief executive officer at Harland Clarke Holdings Corp., a provider of integrated payment solutions and integrated marketing services. He has also served as CEO for Outerwall, Lumension, NetMotion Wireless, Clearwire and Borland Software Corporation. Additionally, he has been a board member for RealNetworks, WASH, Calero Software and Keynote Systems. Previously in his career, Erik served as chief financial officer for a number of public companies, such as Identix and Borland, and for divisions of public companies, such as Gateway Computers and PepsiCo. He began his career at Deloitte & Touche (then Touche Ross). Erik holds a bachelor’s degree from Yale University and an MBA from NYU’s Stern School of Business.

Jamie Norton

Jamie Norton, CISA, CISM, CGEIT, CISSP, CIPM is a Partner at McGrathNicol, a specialist Advisory and Restructuring firm committed to helping businesses improve performance, manage risk, and achieve stability and growth. He also serves on the Advisory Board at Avertro, a cybersecurity startup enabling informed and defensible data-driven decisions about organisational cyber resilience and AI safety. He has over 25 years’ experience in managing security resilience for State and Federal Government agencies and commercial organisations. He is the former Chief Information Security Officer (CISO) at the Australian Taxation Office (ATO), one of Australia’s largest federal government agencies, where he led the security governance, risk, intelligence & operations, testing and forensics teams. He has chaired and supported several senior industry and interdepartmental committees on cyber strategy and resilience, and has been the senior Australian representative at international government forums on cybercrime. He has previously held leadership roles at NEC, Tenable, Check Point, and the World Health Organization.

Jamie has been involved with ISACA for nearly 20 years, serving on the local chapter board, as a conference organiser, and most recently with the CISM Certification Working Group. He holds degrees in accounting and information technology from the Australian National University and is an affiliate member of Chartered Accountants Australia and New Zealand. Jamie is a regular and accomplished industry speaker and media commentator on cyber security. He is based in Australia.

Chirag Joshi

Chirag Joshi, a multi-award winning cyber security executive, brings extensive experience in leading cyber security and risk management programs across various industries, including critical infrastructure sectors such as financial services and energy. His expertise in both IT and OT environments, coupled with his experience in managing cyber security through mergers and acquisitions, makes him uniquely qualified to address the challenges of the SOCI Act. As the author of bestselling books on cyber security and a recognised thought leader, Chirag offers valuable insights into practical implementation strategies and behavioural aspects of security awareness. His role as Founder and CISO at 7 Rules Cyber, combined with his experience in leading multi-million-dollar cyber transformation initiatives, positions him to provide actionable advice on navigating the complex landscape of critical infrastructure protection, supply chain resilience, and cyber risk management in the context of the SOCI Act and beyond.

Francine Hoo

Francine is a Director with KPMG’s Data team focusing on building trusted data practices. She has helped build, assure and audit multiple frameworks including governance, data management, data analytics practices, privacy, risk and compliance. Having started in audit, she leverages her combined experience to help build evidence-based, human-centric, ethical and trustworthy data practices. She has helped teams build AI assurance frameworks to ensure safer and more reliable deployment of AI-based outcomes. She passionately believes that humans are accountable for the right use of data, and therefore for the sufficient and appropriate risk management of data operations in all its forms, including AI and automation. The future of data-driven outcomes, including AI, depends on and is strengthened by a diversity of thinking, where humans collaborate with technology.

Kate Raulings

Kate didn’t start her career in cyber security; computers and internet-connected devices weren’t common at the time. She has a deep understanding of business imperatives developed over a decade’s experience in senior communications and innovation roles before focusing on IT strategy, governance and cyber security for the last 8 years. She regularly briefs senior executives, audit and risk committees and boards on privacy and cyber security matters and has supported numerous organisations through notifiable data breaches. Kate has a Master’s in Marketing and an MBA from the University of Melbourne as well as CISM certification. She has won local and global recognition for her success in digital communication and was a finalist in the Women in ICT Awards in 2022 and 2023. She is a member of the Australian Women in Security Network and an ISACA member. Kate is the CISO at EPA Victoria.

Richard Magalad

Richard is a 30-year veteran of the ICT industry, starting at the Commonwealth Bank, and from 2010 spent 10 years as IT director at a mining company with gold and diamond projects in Australia, Laos and Canada. His current projects include systems integration for two of the large telcos and several agencies in the Australian Federal Government. He is a hands-on technologist with a philosophy of never separating cyber security from information technology, just as he was trained in the highly secure arenas of banking and government. He has consulted and delivered cyber security training to governments and critical infrastructure enterprises in South East Asia on missions for the Department of Foreign Affairs and Trade and with RMIT University, where he now lectures in cyber security to professional students. He was an executive committee member and secretary at the Australian Computer Society (Victoria) until 2022 and is the current chairperson of the Cloud Branch of the Australian Information Security Association.

Sam Mackenzie

Sam Mackenzie is a Cybersecurity Committee member with the ACS Victoria Branch and brings 25 years of experience speaking about cybersecurity and technology in straightforward terms with business leaders. Having worked with global brands overseas and household names in Australia, he’s known for creating high-performance teams across the sectors of health, telecoms, energy and, more recently, local government. His approach is characterised by structured thinking, simplifying complexity and developing culture as a catalyst for change.

Wayne Rodrigues

Wayne Rodrigues is currently a Security Architect at Insignia Financial and an active member of the cybersecurity community.

Having been involved with ISACA Melbourne in the very early stages of his career, he has remained an active member and volunteer for the past 12 years. He is also part of various other initiatives such as the Purple Team Australia mentoring program and the EC-Council Career mentoring program. Being a keen advocate for continuous learning and growth, he loves mentoring others in the industry. Wayne believes these initiatives are an excellent opportunity to give back to the community and help mould the next generation of industry professionals.

Episode Transcript

[00:00:16] Speaker A: Welcome to KB On The Go. And today I'm on the go in Melbourne at ISACA's Beyond Tomorrow conference. I'll be reporting on the ground here at Collins Square Event Centre. Beyond Tomorrow focuses on technology, process and people with five keynote sessions, 24 presentations, panel sessions, fireside chats with over 30 thought leaders and subject matter experts, plus so much more. So for this bonus episode, I've lined up a few interviews from the members, including CEO of ISACA, Erik Prusch. So please stay tuned. Joining me now in person is Erik Prusch, chief executive officer of ISACA. So Erik, thanks for joining all the way from the US, and welcome.
[00:00:57] Speaker B: Thank you very much.
[00:00:58] Speaker A: So Erik, you've recently joined ISACA, just over a year ago, as the CEO, so maybe talk us through your vision.
[00:01:05] Speaker B: You know, I joined ISACA to lead an organization, you know, that's been around since 1969. 180,000 members across the globe, 225 chapters affiliated with ISACA in 188 countries. And the opportunity for me was a great one, which is I wanted to find an organization that not only had a reason for being, but did it better than anybody else. And one of the things that ISACA has really shown me is what the capability of 180,000 people all moving together in a direction to create opportunities for all of them can mean. How does it work? And ISACA is one of those rare instances where an organization is so passionate, so engaged not only about their professions, but the membership in ISACA, that it's astounding. There's nothing like it. And that's demonstrated by not only our returning members, we have a very high retention rate on members, but how long they stay members with us and how active they are in the chapters and in really leading ISACA forward. And the vision is really about those members. It's about bringing capability to all of those members, to allow them to pursue their careers, to pursue their journeys, and make certain that they are the experts at what they do, that they are the leaders and they are the knowledge centers within their enterprises and are able to deliver outsized impact into their organizations. That's what we do every day. That's what we wake up for, and that's what I get excited about.
[00:02:42] Speaker A: Do you sort of go on the ground a lot and speak to the members? So you're sort of hearing it at the coalface around, you know, what people are up to, what's happening, what's their viewpoint? Are you doing that a lot consistently?
[00:02:53] Speaker B: I think that's part of what we think about as our advantage, that 180,000 across the globe. I liken it to being crowdsourced. Right. We're crowdsourcing not only where this industry should go, they're also crowdsourcing what are the ways that we're conveying information and feeding them in terms of where they're going in their careers and how they're getting there. What that allows us to be is real time in what we do. So we can be fastest to create solutions, but we can also be the most instructive in terms of having the ability to not only certify or develop content or develop training. We're end to end in terms of those members, where others may just focus on certification or they may just focus on training, we're able to provide, I think, an end to end for those members. But 180,000 people are touch points into all of their enterprises and all of the geographies that we represent.
[00:03:53] Speaker A: The reason why I ask you that question is, like, you've just touched on 180,000 people. With a lot of people and a lot of CEOs and executives, they have a security detail. They're in their car. They're not going out and talking to the person that's on the front line or at the front desk. So I think it's important that people who are listening to this, who are ISACA members or are looking to become an ISACA member, understand that, because I think it's important. And I've watched a lot of that Undercover Boss show, and I think that sometimes when those people go undercover and actually see at the coalface what people are dealing with, it really changes their perspective. And I think that's an important thing because it is built on members. So I appreciate you sharing that. So I'm aware that AI is a focus for you, so maybe tell us more. And I know that it's being discussed a lot here today, and I've had a few sort of soundbite interviews with a few people touching on AI. But maybe I'd love to hear your thoughts. Wherever that takes you, please start.
[00:04:50] Speaker B: AI is a new frontier. Everything's changing as a result of it. How we do our jobs, what areas of opportunity there are, what areas of threat there are. Everything that we've touched is changing with AI, so much so that it's becoming a part of our common speak. ISACA has delivered six new courses in record time in order to make certain that our members understand AI. But we're uncovering what the needs are. We're uncovering what the level of understanding is with our experts and making certain that we're helping to guide our members going forward. I've said it before, I think one of the biggest challenges for us is that we don't even understand the problem. Cybersecurity, IT audit, risk, assurance, governance don't even understand yet the quantity of the problem, the proliferation of the problem, much less develop the solutions. And we are doing that every day, not only using our 180,000 touch points, but making certain that we're providing, as real time as possible, the understanding to help guide our folks to be able to deliver for their enterprises. And to me, that is something that's unique because of our network, and it's something that creates an opportunity for us to influence regulatory bodies, influence governments, influence enterprises along the way. And that's what we need, a collective force around that. But AI is a new frontier and we're excited about it. We think we've got a major role to play in it. These six courses are just scratching the beginning surface of it. But we're evolving to a point where we're speeding up the knowledge, we're getting it to market faster and faster than we've done in the past, which is also making certain that we're responsive to the needs of our members. And remember, our members aren't our customers, our members are our stakeholders. And when we think about what we're trying to equip them with, it's not to sell them a product, it is to make certain that they're informed, and the most informed of anybody within their enterprise. That's a much more aspirational goal.
[00:07:02] Speaker A: So a couple of things now. You said we, as in ISACA, play a major role. What does that look like?
[00:07:08] Speaker B: It's end to end understanding of how this is coming into the enterprise. We need to know where threats are. We need to understand the procedures that we're going to deploy.
We need to understand the risk that enterprises are taking on when they're introducing AI into their companies. It is understanding it end to end, as we've done before. And the only difference is, that landscape changed nine months ago significantly, and it wasn't that AI was created nine months ago, it came to such a significant role within enterprise nine months ago that all of a sudden we now are into that crunch phase of making certain that we understand it end to end, and it's still evolving. So we're trying to get out in front of something that is constantly evolving. The only way you're going to do that is with scale. You're going to do that with those touch points, and you're going to do it by being responsive to those stakeholders in the ways that they need to be informed. But we still have so far to go. When we ask our members how well do they understand it, it's not enough. When we ask enterprises how well do they understand it, it's not enough. We've got to go further, faster and in a much more significant and scalable way than we've ever done before.
[00:08:25] Speaker A: And how would you know if you understood something enough?
[00:08:28] Speaker B: So I don't know that there's ever enough. Right. But there gets to be degrees of confidence. Maybe the incidences go down through time as a percentage of total. Maybe our confidence of being able to anticipate problems gets better. But I don't know that there's ever enough. I don't think that there's ever been a time that we said, yeah, we're good from a cybersecurity standpoint, because we continue to have breaches, and it's not AI related. These are old school breaches. Whether it's updates or other problems that get thrust into our environments, the dependence on working with other institutions or other enterprises ends up impacting ours. Right. So, if you think about what we have from a digital trust perspective, it is to make certain everybody is working in a coordinated manner, that you understand all of your vulnerabilities, whether inside your firewall or outside of your firewall, suppliers, employees, customers, every aspect is a potential vulnerability. What we have to do, though, is we have to evolve faster than the problems are evolving in order to make headway on it, and we're not at that point yet. Meaning as an industry, as cybersecurity, IT, audit, risk, governance, we haven't evolved faster than AI is evolving currently.
[00:09:52] Speaker A: Do you think as well, from my understanding of interviewing people like yourself on my show, the parallel that I'm drawing is, like you said, it's a new frontier, it's a new sort of thing in terms of how familiar people are with the problem. Do you think this is going to take time? And I feel like that's such a cop out of an answer because everything of course takes time. But what I mean by that is, look at when the Internet came out in the nineties, right? People were saying it wasn't going to go anywhere. It took a bit of time for people to understand how it works, how can people leverage it, whether it's for good or bad. Do you think the same sort of approach is going to happen with AI?
[00:10:25] Speaker B: I think that AI has the ability to proliferate well beyond what the Internet started out as. And while there is definitely a parallel, which is there's always going to be bad guys. There's always going to be malfeasance. There's always going to be some challenge that is identified with everything that we do.
I think the fact of the matter is, what we've got is a different scale of problem than we had when the Internet started. In the nineties, when you had a bad website or you had a bad actor, it was pretty easy to identify, or it was easier to identify, and it was easier to discontinue because it wasn't everywhere all at the same time. I think with AI, it is in more aspects than the Internet is. It's not just in one medium. It's not just through one access point. There are now infinite numbers of access points. And I think that makes the scale of the problem greater. I think that means that our knowledge and capability has to be much greater than it's been in the past. We've got to retool and reskill along the way.
[00:11:35] Speaker A: So going back to your comment around, like, people don't quite understand, sort of, maybe the fidelity of the problem. Do you think anyone on this earth really does understand it, though?
[00:11:46] Speaker B: No, I don't. And even if people understand it more than others, the question is whether they're going to be using it for good purposes or whether they're going to be using it for bad purposes. I think what we've got to do is make certain that we're setting those standards, that we're having those discussions, we're making those determinations of how society wants to use AI. I think that is as important as anything else. I think there's a lot of money that's behind AI. There's not only the money around developing AI or incorporating AI, but there's also market capitalizations that are reflecting whether or not AI is within. And we're talking about trillions of dollars. We've seen Nvidia grow so dramatically at the possibility of AI, and that's just from a chipset standpoint. We haven't even thought about the downstream consequences. So you've got a lot of money that's moving in to try and exploit AI. And now the question is, how do we make certain that that is all being done consistent with what society wants to be done? Whether that's ethical, or whether that's ethics, I'll say, or whether that's understanding vulnerabilities, or whether that's understanding IP. I mean, you think about the problems. They're significant. None of them are easy. There's not one problem in this that's easy. Ethics is not easy. While an enterprise can determine the ethics that they want to deploy, the fact of the matter is, what matters equally as much is what the other competitors to that company are doing and how they're doing it. Then, on top of that, it's what does society or the local governments or federal governments of countries want to do on top of that? And what you see is lots of layers that we've got to permeate, or penetrate through, I'll say, in order to make certain we've got those common structures necessary in order for this to be productive and be utilized in the best possible way.
[00:13:46] Speaker A: I've spoken to other ISACA people, Mary Carmichael, Jenai Marinkovic, about regulation, what ISACA is doing in that space. But I want to hear maybe your perspective on how do we get to that point of regulation. But also, in my discussions with people, they're sort of just saying, well, the government needs to regulate it. Do you think people are perhaps just throwing that problem over the fence, be like, okay, well, you guys deal with it? And do you think anyone really knows how to deal with it effectively?
[00:14:16] Speaker B: Yeah, I think we're still learning. I think advocacy is certainly an important element of it.
But I can go back to some reactions that were really difficult, were not guided well. Like the adoption of SOX was a really good one in the early two thousands, where it was in response to a problem, it had little guidance to it, it had marginal effectiveness, and it had a very long tail that required evolution through time to get better. What was the purpose of it? How did the regulation come to be? And that was a very narrow problem that it was trying to solve, or relative to AI, a very narrow problem. Yes, I don't think it's good enough just to say the government will take care of it, much the same way as we have regulation today around anticompetitiveness. That doesn't seem to be working in terms of preventing problems from occurring. We've got to do better. We've got to engage in the conversations. We've got to get the adoption of a set of standards that we can all understand and benefit from, and self regulate as much as outside regulation is going to do. If we don't, then I think the problem is going to continue to proliferate unchecked, as it is, I think, today.
[00:15:35] Speaker A: So I like to live with the philosophy that the majority of people out there in the world are good people, and there are bad people. So to your earlier point around some people may use AI for good and for bad, following that sort of track a bit more, would you say that the majority of people will use AI for good, or do you think that maybe more people may use it for bad? If you had to hypothesize?
[00:15:59] Speaker B: Yeah, I wouldn't even want to speculate on it, because I believe the same thing. I think more people are good. I think the bad guys are in smaller numbers, and I think that when acting for good, it will stand the test of time. And if you're acting for bad, it will be short lived before somebody figures out that problem. I think we are paid to make certain that we enable organizations to grow. We enable organizations to grow the right way. And if you believe that people are good and they want to be a part of good, long-term economic value creation, and that the organizations behind it want the same things, I think we will get to the solutions in short order. I think the problems will be very clear over time, and there will be a long tail that we're having to overcome. Just like we had a long tail on the Internet, just like we had a long tail on cloud, just like we had a long tail on personal information and security, it still persists. It's just getting more infrequent. Just like breaches in cybersecurity are becoming less frequent, but they're still occurring, they're still coming along at a pace.
[00:17:13] Speaker A: So maybe just following the good side of AI, I know that people are adopting it in good ways. So is there anything, like any sort of insights or observations, you'd like to share about how businesses are adopting AI?
[00:17:26] Speaker B: Yeah, I mean, I think that what AI is doing is it's maximizing the contribution of individuals. I think we're able to steer to higher value rather than necessarily low end monotony. And that doesn't displace people. What it does is it effectively makes them more productive, possibly more creative, possibly more efficient. And to me that's good, that's continued evolution. We haven't seen the capacity of people yet, and therefore I have 100% confidence that people matched with AI is going to contribute a lot more than just people or just AI.
To me, it's fascinating to watch people evolving around what these possibilities are.
[00:18:17] Speaker A: Now, I know you have spoken at your fireside chat, maybe share a little bit more about what that was about.
[00:18:25] Speaker B: So what we're trying to do is make certain that, one, we're informing folks about what we're doing as ISACA while our members are engaged. I think there's always a question about what it is that we're doing in order to take it forward. So we want to make certain that we're having a great conversation with our members and that they know the direction that we're heading and why we're heading in those directions. We've certainly got some compelling growth initiatives scheduled for 2025. We're certainly going to continue to build on making certain that we're bringing more members in, that we're bringing newer people, not only academic institutions, but making certain that we've got a good on ramp into our membership. We also want to make certain that we're adapting to this AI environment and that we're thinking through our certifications, our training and our knowledge bases in a productive way. But we're building infrastructure too, that makes it easier for members to find what they need all along the journey. And given that we handle all of these domains, and we're somewhat unique in terms of the domains that we handle as an association, that allows for a lot of customization too. So we're thinking ahead on what may be required of our members, and always with a common backdrop that we want them to be the experts, we want them to be the go to agents within their enterprises to be able to drive the solutions to these problems that persist.
[00:19:59] Speaker A: Joining me now in person is Chirag Joshi, founder of 7 Rules Cyber. Chirag, thanks for joining me back on the show and welcome.
[00:20:07] Speaker C: Thanks, KB, always such a pleasure to see you.
[00:20:09] Speaker A: So, Chirag, I'm aware that you presented today, so please tell us, what did you present on?
[00:20:13] Speaker C: So my talk is going to be on the Security of Critical Infrastructure Act and the practical challenges and implementation that have come about with the SOCI Act, as we call it. And it's also beyond that, right? In terms of the lessons that organizations can glean when it comes to looking at their cybersecurity postures in a more holistic manner. So that's our talk. But beyond just cybersecurity, we're also going to cover a very important part of SOCI, that is, supply chain and physical hazards.
[00:20:41] Speaker A: Okay, so when it comes to critical infrastructure, do you think it's one of those things that isn't, like, front of mind? It sort of feels a little bit relegated in terms of, even from a media point of view, there's not a lot of content out there about critical infrastructure, and it is in fact critical because we rely on it every day. But it just seems like one of those things that maybe isn't as prominent as perhaps it should be. What are your thoughts on that?
[00:21:03] Speaker C: It's going to be an education journey. So the act itself requires the regulated entities. There are a certain number of entities, or industries, that are covered by the SOCI Act. Within that there is a specific cohort who are deemed as systems of national significance. And those are the, you know, so the requirements that apply change based on how critical you are to the entire economy, to our society and stability. Now, to your point about why we don't hear as much.
I think part of it is government's trying to change that. They're trying to engage more with industry and educate them. Now, there are some organizations who might be critical infrastructure but are going through a maturity journey. And there is an aspect of, what I call it is, even if you put SOCI aside for a second, go back to good cyber risk management practices. There are certain frameworks that are advocated as being good practices, which most of our listeners might be familiar with if they work in cybersecurity or an associated field. So there's this cybersecurity framework, essentially the Australian Energy Sector Cyber Security Framework, the AESCSF. So there are some which, that's why I almost put SOCI aside. Good cyber risk management, and then SOCI can be part of your overall hygiene, and then the requirements that come above and beyond that can be accommodated based on your business.
[00:22:21] Speaker A: Okay, do you think, I'm going to ask a real basic question, do you think people in general just get the SOCI Act? I mean, there's a lot. Have you read it? It's like 400 plus pages, it's a lot to wrap your head around, right?
[00:22:31] Speaker C: I've had the pleasure of enjoying that read, KB. But you're right, there is the aspect of there's a lot there. And I tell people not everything applies to everyone. Right. So I think there's an education aspect. At a very basic level, people need to get a handle on what their asset inventories are, which key controls they've implemented for things like incident response planning, incident response notifications, vulnerability management. So it goes back to not groundbreaking stuff. It is still good practices now that are required. So there are certain aspects of the SOCI Act which, initially when it was talked about a lot, which is government stepping in to assist when required, that created some concerns. But I think by and large the industry has found its feet with regards to working with the government on that front. So yeah, look, it's by design. It's not a very exciting topic. I just think if you put confusion aside, go back to proper cyber risk management, that'll solve most of your problems.
[00:23:31] Speaker A: Joining me now in person is Francine Hoo, Director from KPMG. So Francine, thanks for joining. Welcome.
[00:23:37] Speaker D: Thank you. Thanks for having me.
[00:23:39] Speaker A: So I know that you've recently had your session, you presented on AI, but tell us, what did you present on?
[00:23:46] Speaker D: So I did present on AI. We did talk a little bit about, you know, the war stories, what we think the concerns are for boards and people at the moment. But I think the most important part is how it actually interacts with how we protect people and planet, and just talking about my experiences and what I'm seeing and how we're helping people build that infrastructure out. So as they develop, what are the questions to ask? What are the measures that they can implement?
[00:24:12] Speaker A: So in terms of what you're seeing now, in terms of adoption, would you say people are still trying to get their head around, like, how to leverage it properly in organizations?
[00:24:22] Speaker D: Yes, I would say so. I'd say it depends on people's appetite for how much they're willing to fail, how much they can afford to fail. It's at different maturities. It's funny, because AI has been around for a long time. I think Gen AI has actually just made it more popular.
So now the real questions are being asked, you know, about what is this? What are we doing? Are we happy about it? And I think, culminating with just everyone's awareness of how your personal information matters in here, everything's come together, and I think that's why there's a lot more awareness of the fact that it's around: what do I want to do, and do I actually want to be on that journey?
[00:25:02] Speaker A: Do you think people are happy about AI or Gen AI now, as it really is emerging quite hard in the market?
[00:25:10] Speaker D: It's an interesting question. I've been around with the team actually hosting panel events around the country, and the best question that's been asked is, who uses the Internet daily? And hands up, who uses it hourly, and within the hour, and hands are still up. And then it's, who remembers when the Internet started? Like, my hand's up, that's showing my age. Yeah, totally. Ding, ding, ding. I couldn't even make that noise properly if I wanted to. But we are so willing to do that, and we are so willing to actually fly on planes with autopilot. And yet we're so nervous about all the development now. I think it's because it's more the fact that people don't understand it, and there is a whole lot of misunderstanding that it is Terminator. So I remember when I was asked the question, I wasn't at KPMG, or was it another place, and I bumped into a friend who was a bit of a tease, and we were doing this fire drill, and I'm like, are you right? And he's like, do you know what AI is? I thought, oh, my God, I knew this question was going to come to me. And I kept really quiet, because we were playing with blockchain at that time. And he goes, you're thinking of the Terminator, aren't you? I said, not gonna lie, yes, the image is in my head. And he goes, I'll make it very simple for you. And look, this is pre Gen AI, by the way. He goes, AI is that on steroids: is it a sushi, is it not a sushi? Is it a sushi, is it a hot dog? So if you think about humans and how we make our decisions, we're just really fast at distilling: if yes, if no, if can, if cannot. And we can do it all in parallel, you know, and the machine just keeps moving that way. And that blew my mind, because I went, that's what it is. And then I started thinking about how we orchestrate and engineer, you know. We'd orchestrated ways of showing people the value versus, you're going to waste a million bucks, by the way, investing in something you don't understand. And that's when it changed the landscape for me. So my experience with that was probably quite a few years ago, and I think a lot of people are going through that similar experience now.
[00:27:20] Speaker A: Where do you think people get this view of the whole Terminator thing from? Like, where did that, like, I get it, but, like, obviously, you know, you've got mainstream media out here that is sort of pushing that narrative. That's one thing. But when people do think about AI, they do often refer to pop culture, Terminator.
[00:27:37] Speaker D: I think it's because, if you actually look at the stats and the numbers, like, you know, Terminator, Jaws, they came out when those movements were around. Everyone is so scared of sharks, including me, for good reason. Terminator and, you know, robots and that whole narrative, early nineties, so we were all watching that. You know, we didn't have YouTube, we didn't have streaming devices. It was all about TV. It was all about movies. So it's etched in our brain. So we would think that that is the case.
And, I mean, it's not wrong. It's just, it never got distilled, what that was, you know, and there's a lot of us that love Star wars growing up as well, you know, with the droids and everything. So there is this, I think, misconception in a lot of people's minds that when you look at robots, things that organically move, they say, into AI. What they don't realize, and this has been a great education process for me, is that you're looking at a whole series of automation, integration and data, a good user experience, and then there's a piece of artificial intelligence that generates from that. And in some of the very basic stuff, it's more the automation and the UI than it is the AI. [00:28:47] Speaker A: Joining me now in person is Jamie Norton, Osaka board of directors, who has recently been appointed. So, Jamie, thanks for joining and welcome, welcome. [00:28:55] Speaker E: Thank you. Have we. [00:28:57] Speaker A: Okay, so you've recently been appointed to the board, so maybe, you know, share what has Osaka sort of mean to you? I know you've got like a long tenure with, you know, the chapters, et cetera, but, you know, perhaps share a little bit more about your story. [00:29:10] Speaker E: Yeah, so I've been involved with Osaka now for nearly 20 years. You know, started off early career, you know, focused on certifications and, you know, attending training and industry updates. And it's really, Osaka has really been with me, I guess, for most of my career, being involved with the community, being, you know, certifications and more recently involved on working groups and now the international board. So for me, it started out very much about certification and training and helping me get experienced. But as I've become sort of more aged and become more senior now, it's just as much about the community and having good people around me and being. [00:29:47] Speaker B: Able to help the industry and give back. [00:29:48] Speaker E: So it's really been a good journey for me. [00:29:51] Speaker A: So in terms of last two days, there's been multiple sessions, like AI has been one of them. Obviously it's a concern for people, but with your background being the free size of the ATO, like perhaps, what are some of your concerns then for the cybersecurity sector? [00:30:04] Speaker E: Yeah, I think as we've seen here in the last couple of days, that AI is definitely a big one and it's very much an emerging threat. We can already see the potential challenges facing us, both from a governance perspective, but also in terms of just malicious actors using AI and being able to harness that to just increase scale and also, you know, sophistication, I guess, of attacks on individuals. But I think also at the same time, we still have this double speed happening in cybersecurity where basics are still really tough for a lot of organizations. So just getting that basic hygiene and doing the key things we need to do is still very much a challenge whilst at the same time we're dealing with emerging threats like AI and sophisticated threats. So it's why I guess our industry is so challenging to stay ahead of us. It's sometimes hard to do the basics and keep focus on the emerging threats at the same time. [00:30:55] Speaker A: So then on that point, you are right about the basics and the patch management. I always gone about that in my interviews. But we're talking about companies are still trying to get those basics right, as you would know. 
So now we're talking about AI, which is like obviously spawning into this new sort of area and new terrain for people. So would you say that perhaps like we need to be able to crawl first now we're sort of like running in the Olympics by the AI sort of, you know, that's my analogy for AI in comparison. What's your view then on that? [00:31:23] Speaker B: Absolutely. [00:31:24] Speaker E: I think you can't run before you crawl. So getting those basics right at least gives you a fighting chance to then look at the next level of sophistication and how that might impact. Unless we have our basics done, we're going to struggle to even combat some of these more advanced threats. Even the simple threats are going to take us out. So definitely getting those done, patching, as you say, multifactor, some of these kind of things, essential aid essentially from, from government. That kind of governance is really necessary, I guess, first and then starting to look, that's really an enabler, I suppose, for targeting more sophisticated threats. [00:31:56] Speaker A: And then with my previous discussion with Eric Prush, the CEO of Osaka who flew here for this event, he was sort of saying like no one really understands, like AI, like fully. Would you agree with that statement? [00:32:08] Speaker E: I think so. I mean, we're seeing some people starting to emerge that have certainly been involved with it now for a few years, but it is a really new, yeah, a really new phenomenon, unless you're an academic with coming out of the university sector. Most of us have only really had the last couple of years where we've just been paying attention to this and perhaps only the last three to six months where it's become more and more prominent. So I think we're all in a pretty similar boat. We're all sort of grappling with what this means. We're at a bit of an evolutionary stage where we don't quite know where this is going to go. So we're just trying to get the governors right on it, make sure that we're as prepared as we can be, but also the governance right for AI in our own organisations too, because increasingly security solutions and as an industry we're adopting AI to help combat what we think's coming. So just trying to keep it in the pace of it at the moment I think is the key initiative for us. [00:32:56] Speaker A: And where would you say AI is going? [00:32:58] Speaker E: Without a doubt, I think from a threat actor perspective we're going to see AI, we're already starting to see AI being used in terms of scams and the cybercrime element. Without a doubt that'll become mixed in with the bit of nation state as well. So we're going to see sophistication there that's going to. And a scale too. So we've got to see the scale of these attacks just increasing and increasing. Scams are already just ubiquitous. So I think that's certainly going to happen on the defensive or the sort. [00:33:25] Speaker B: Of blue team side. We're going to see AI. [00:33:28] Speaker E: Most vendors have already got something AI in their product, I think that's going to increase. And then the amount of decision making I guess in terms of what AI will do for us on a daily basis will grow as well. And so we'll have to start to balance those two elements. [00:33:43] Speaker A: Joining me now in person is Kate Rawlings, ISACA member and speaker today at the conference. So Kate, thanks for joining and welcome. [00:33:49] Speaker F: Thank you so much. [00:33:50] Speaker A: Now Kate, tell us a little bit more about what you discussed today. 
[00:33:53] Speaker F: Well, my session was called Do the Basics or Beware. And it's looking at my time consulting and supporting organisations through incident response, and it's really the lessons that I've learned, the ten lessons that I've taken away from supporting those organisations. It's the basics, because there's some really practical, simple things in there, such as patching applications. Everyone goes, oh, but that's basic, everyone should just do it. But it's how to actually make sure you're doing it properly and doing it well and you've got full coverage. That's one of the lessons that we've talked about today.
[00:34:27] Speaker A: So Kate, would you say we often in cyber talk about the basics, but the basics don't appear basic. It appears difficult and hard and complex. What are your thoughts on that?
[00:34:38] Speaker F: Yeah, I would completely agree. I think the basics are there because everyone knows they should be doing them, but getting it 100% right, 100% of the time, is the challenge, and an attacker has only got to get it right once. When you're on the inside of an organisation, it's 100% of the time, 100% of the coverage, and that's where it becomes challenging and problematic and difficult. Some of the other lessons that I have learnt from incidents: technology alone won't save you. You know, it really is, as the conference title states, the combination of technology and process and people.
[00:35:17] Speaker A: Joining me now in person is Richard Magalad, managing director at ITR Australia. Richard, after three years, you've finally joined the interview. Although it is a small little snippet, here we are, I'll take it. So tell us, what did you present on today at the conference?
[00:35:34] Speaker G: Hi, KB. Yes, three years in the making. Good to be here and good to be at the ISACA conference. I just took part in a panel on how to manage your third party risk. And we went through some of the challenges that a lot of people in industry are experiencing right now, with the recent high profile incidents with the likes of Medibank and Okta and others as well. It just reminds us that the third party does form part of your attack surface, and therefore adequate management really needs to be put in place just to make sure that we can trust that suppliers are doing the right thing for us and that we are also doing the right thing for our suppliers.
[00:36:21] Speaker A: Yeah. Do you think, in terms of, like, the whole risk around, you know, third party suppliers and supply chain, would you say people are now worried because they need these suppliers to make their business run? So do you think there is this fear or this anxiety to be like, well, I kind of can't do my business without them, but also I'm now fearful if there's an outage or there's an incident that I'm now being impacted by it?
[00:36:43] Speaker G: Yeah, there definitely is. There's this realization, especially after the last couple of years, because prior to these high profile incidents, we have always been using suppliers, for decades now. And the thing is, we're always under the impression that our suppliers will always do the best for us because they have a motivation to look after us. They want to earn the income that they derive from looking after our business. But now that we're learning more and more, there's some element of doubt coming into all industries, and we're all now starting to look at: what are the suppliers doing? Are they doing enough?
And there's definitely a bit of paranoia that's coming in there. The other aspect of that as well is some of the suppliers are then responding and doing the right thing by making the business more resilient. But at the same token, unfortunately, there's a small number of suppliers who are then starting to duck for cover and data was that we need to weed out because they're the ones that could be the next high profile breach as well. [00:37:51] Speaker A: What do you mean by duck for cover? [00:37:53] Speaker G: So, in some of the instances, I've seen some supplier cyber questionnaires that's been published, that distributed by the organization, then I've been in one of those panels where I then look at the response from the supplier and just looking at their response, knowing what I know in the industry, I then start to doubt this doesn't sound true because of what I know from a particular supplier. So then what triggers is we then go back to them and say, okay, well, thank you for the response, but we want evidence now. And next thing you know, they stopped taking phone calls. And then in a small number of supplies, they've actually terminated the contract. And in some of them, then they wanted to re contract and they watered down the terms and conditions. So that's a red flag, because why would a supplier of mine that's been dealing with me for ten years all of a sudden have left me after I challenge them for evidence? Or why are they watering down the terms and conditions with a lot more exit calls? And so. And, look, it's a good thing, because we now know what we didn't know. If we can weed out some of the bad operators, then that's better for the industry as well. [00:39:09] Speaker A: So what was the result of that? Which one was it? [00:39:12] Speaker G: So, some of the suppliers, we had one where there were some software suppliers that were providing some sort of not a CRM product, but product that ties in with CRM. And there were some privacy questions in there that was asked within the questionnaire. And when it borne out, could not test that they were looking after the privacy properly. And after failed phone calls, they said, you know what, the seat route. So at that stage, if your supplier does not communicate with you, the customer, after, like, two weeks, a month, then, you know, it's time to change suppliers as well. So some have left, whereas others are watering down. And in those cases, there's still negotiations that's happening. It's like kind of negotiations where I think there was a me at kupa moment where the supplier goes, look, we weren't doing as good as we thought we were. We still would like to service you, but I. We'd like to change our terms and conditions, which means that they're doing less work for the business, but so in the end, by the supplier doing less work for the business, less risky work, that means they continue to be a good supplier, a good third party supplier for us, but it also then allows us in the organization to then go at another supplier that is better, that manages the risk much better as well. [00:40:30] Speaker A: And would you also say that people have just ghosted, you never heard of them ever again? [00:40:34] Speaker G: Yeah, look, it's one of those, one of those things in the industry. And I think it's not just a cyber security industry or the tech industry for that matter, it's the good old. I think I just got caught. So time for me to go running for cover. So, yes, some suppliers have done that and sadly some of them have. 
Even in a very small number of cases, it's the old close shop and then get reborn into another one. And so they've pretty much done that. So what happens is you'll have this business that was delivering goods and services to other companies, then they disappear. And then in the new one, the board of directors, or rather the managing directors and the shareholders, are the same. So different ABN, different name. In some cases they've even registered in different states. But then you do a background check and due diligence on the owners through ASIC, which, hello, it's public knowledge, and you see the same operators. So red flag for that one. And some of the other ones that we found, which is actually a good outcome, is some of the small providers have actually been bought out through this process. So a larger mothership has come along and bought the small operators and put them under their wing. And because the bigger company had good controls, good policies, good everything, then everyone's happy, because we're managing the risk better. Again, it's interesting how the focus is not so much, we don't expect the supplier to be the best, super secure business. We want the supplier to show us that they're managing the risk the best they can.
[00:42:18] Speaker A: So joining me now in person is Sam Mackenzie, Cybersecurity Committee member from the ACS. So, Sam, thanks for joining me back on the show. Welcome.
[00:42:25] Speaker H: Hi, good to be here.
[00:42:26] Speaker A: So I know that you have had a chat today and you've presented something, so please tell us, what did you present on today?
[00:42:32] Speaker H: I'm really interested in cyber security for critical infrastructure, and I had some guests join me on a panel talking about control rooms. And control rooms are a key part of how we manage and coordinate our critical infrastructure so that we can get good outcomes, better availability of those services. And those services might be the power grid, they might be the transport network, or other areas like airports and things obviously have control rooms as well.
[00:42:57] Speaker A: Yeah. And I know that you and I did a deep dive focusing on critical infrastructure. So what do you think people sort of miss when it comes to critical infrastructure? Is it more so just they think, well, I can just turn on the light and the power works, and I don't have to think about the mechanics of, you know, how it functions? Do you think it's that, and people aren't actually focusing on, well, what happens if the thing isn't working and we can't have access to power, for example?
[00:43:20] Speaker H: Yeah, I think, and even when I talk to people about it at social events, they don't really have an understanding of what it is. And I think the general population, you know, like, it's not really up to them to know. I think they're just expecting to rely on those services. Like you say, you turn on the tap, you want the water to come out, you turn on the light switch, you want the power to work. I actually grew up off grid for power and water. And so I have experienced that through my childhood, and it drives me to have a keen interest in making sure that people don't have to experience that. And so that's one of my passions around helping secure it, helping make it available, and supporting the public in not needing to worry about it. I think the challenge is that the threat landscape has changed over time, and the threats are actively targeting and attacking this infrastructure that we've got.
And we've walked into a situation where the technology use across these most essential of our services has expanded significantly. [00:44:18] Speaker A: So in terms of your presentation, what would be some of the key takeaways that you'd like? People, and I know that not everyone could be here in person today. That's why we wanted to put together this interview to discuss. Maybe there were some learnings that people can sort of walk away with. [00:44:29] Speaker H: I guess the key thing that I'm trying to drive and through the conversations that I'm bringing people together across discipline, across sector, is for better security, better cyber safe outcomes in regards to higher availability, harder threat targets for the bad actors to attack. And so I would encourage people to collaborate across discipline cross sector because I think that's where better outcomes occur. [00:44:55] Speaker A: Okay, so joining me now in person is Wayne Rodriguez, security architect from Insignia Financial. So, Wayne, thanks for joining and welcome. It's wonderful to finally meet you in person. [00:45:05] Speaker I: Likewise, cursor. [00:45:06] Speaker A: So tell us you've got a bit of a journey with IsAC and how you got involved. So talk us through, what does that look like? And, you know, how did you get involved with ISACA? [00:45:14] Speaker I: Yes, definitely. So my journey started about twelve years ago, back in 2012. Interestingly, my dad at that time was a heavily involved member of Isaccop. He was part of the professional development subcommittees and part of the various director positions in there. He really encouraged me to just join on as part of a student membership. So just attended some dose, got my head around it and it seemed really human and really interesting. And then when he moved on to his presidency at the Isaka Melbourne chapter, we just got further involved there. So it's been about twelve years now, and got myself heavily involved, following in his footsteps, helping out the various other directors over here as well. [00:45:57] Speaker A: And so when you say followed in his footsteps, do you mean in terms of the trajectory or in terms of just, you know, being involved with the chapter, for example? [00:46:06] Speaker I: Pretty much both. I think he was an inspiration for my career as well as volunteership and giving back to the community. Just following on in terms of his career, really loved the aspect of cybersecurity and securing businesses and processes really piqued my interest and hence go straight into cybersecurity. And in terms of volunteering and giving back to the community, he's always been a core part of the community, just always giving back, and I creating various openings and entries through his leadership. So just following on to that, just trying to give back wherever I can as well and helping out. [00:46:41] Speaker A: So you said before giving back to the community, so I wanted to focus on that a little bit more. So there's obviously got Osaka, you've got these other sort of independent sort of groups and memberships and all sorts of things. But what does giving back to the community look like for you? [00:46:56] Speaker I: I currently have about ten or so years of experience in the field. I definitely believe that I can help the new entrants into cybersecurity, I can help the current candidates and professionals in upskilling and moving on to the next stage of the career. So just giving them some of that guidance and insight into my personal journey. 
Any tips and tricks involved in that, and just sharing some professional advice as well along the way.
[00:47:23] Speaker A: So do you have any tips or tricks then you want to leave our audience with today?
[00:47:26] Speaker I: I think events like this definitely help. They not only spark your interest in certain areas, they also provide you an insight into aspects which you might not even be aware of. Not only that, you get to meet wonderful people, and that always helps.
[00:47:45] Speaker A: And there you have it. This is KB On The Go. Stay tuned for more.
