March 28, 2025

01:01:54

From Microsoft AI Tour 2024 – KB On The Go | Mick Dunne, Ben Lamont & Helen Schneider, and Leigh Williams

KBKAST


Show Notes

In this bonus episode, we sit down with Mick Dunne, Chief Security Advisor at Microsoft; Ben Lamont, Chief Data Officer, and Helen Schneider, Commander, ACCCE and Human Exploitation, both of the Australian Federal Police; and Leigh Williams, Chief Information Officer, Information and Technology Executive at Brisbane Catholic Education. Together they discuss the function of the Customer Security Officer team, how the AFP is using AI to protect Australia and its people, and the impact AI has on education.

Mick Dunne heads the new Customer Security Officer team across Asia, part of a global team of over 40 former CISOs, CTOs and deeply experienced SMEs. They are focused on providing trusted, deep expertise and advice to customers and Microsoft area leadership, and on feeding key insights back into strategic investments and the product roadmap. Prior to Microsoft, Mick was the CISO at AustralianSuper, bringing a long history as a security leader; AustralianSuper was also one of the first organisations to adopt Security Copilot.

Ben Lamont is the Chief Data Officer at the Australian Federal Police (AFP). In this role, he is responsible for developing and implementing the AFP’s technology strategy and data management initiatives. Ben’s work focuses on addressing capability gaps and leveraging opportunities to enhance the AFP’s operational effectiveness. His leadership ensures that the AFP remains at the forefront of technological advancements in law enforcement.

Helen Schneider is a Commander with the Australian Federal Police (AFP). She leads the Australian Centre to Counter Child Exploitation (ACCCE), which focuses on combating online child sexual exploitation and abuse. Commander Schneider has been instrumental in coordinating significant operations, such as Operation Bakis, which led to the arrest of numerous offenders and the rescue of children from harm. Her work involves collaborating with both national and international law enforcement agencies to tackle complex and sensitive cases, ensuring the safety and protection of children.

Leigh Williams is the Chief Information Officer at Brisbane Catholic Education. With a career that began in teaching, Leigh has held various leadership roles, including CEO, Executive Director, and COO. She oversees digital, information, and IT infrastructure for hundreds of locations and over 13,000 staff. A passionate advocate for digital innovation and education, Leigh is a published researcher and has led keynotes and workshops globally. She holds multiple post-graduate qualifications in Education, IT, Leadership, Management, and Business.


Episode Transcript

[00:00:15] Speaker A: Welcome to KB on the Go. And today we're coming to you with updates from the Microsoft AI Tour on the ground at the International Convention Centre here in Sydney. Listen in to get the inside track and hear from some of Microsoft's global executives. You'll get to learn more about the exciting SFI and MSTIC cybersecurity solutions in depth and you'll be hearing from a select few Microsoft partners. We'll also be uncovering exactly how the Australian Federal Police are leveraging AI to detect crime to keep people in our community safer, plus much, much more. KBI Media is bringing you all of the highlights. [00:01:01] Speaker B: Joining me now in person is Mick Dunne, Chief Security Advisor, Asia Pacific for Microsoft. And today we're discussing the function of the Customer Security Officer team. So, Mick, thanks for joining and welcome. [00:01:11] Speaker C: Thank you very much. Thanks for having me. [00:01:13] Speaker B: So you're new to the role at Microsoft. You've come from the customer side, now into Vendorland. Yeah, I want to start there because of some quite interesting observations you shared with me prior to our chat today. [00:01:25] Speaker C: Yeah. Oh, for me it was thinking about my career and where I was going. So I'd been in a CISO role for the last five and a half years and then prior to that I'd spent a long time in senior security roles. So the question for me was, do I continue where I was or do I think about another CISO role in a new organization? And to be honest, that felt a little bit like Groundhog Day. I wasn't suffering from burnout or anything like it, really enjoying my role. But then this Microsoft opportunity came up and I thought it was a great opportunity to use my skills and experience in a different way.
But then also with the timing around the Cyber Safety Review Board report and Microsoft's commitments around the Secure Future Initiative, I thought what better time to move into an organization that's really, really central to the global ecosystem? And I thought, you know, this is a chance to just do something quite different, but to be in an organization that's really, really central to what's going on. [00:02:22] Speaker B: And it's good because, like I said, you've come from a completely different pedigree. You bring in a different dimension perhaps, and a different perspective, which adds significant value, in that you may understand more nuanced things from being on that customer side, perhaps. So I want to talk a little bit more about the new Customer Security Officer or CSO team and what does that actually mean? [00:02:45] Speaker C: Yeah, so it was really interesting to me in the recruiting process that what Microsoft were actually after was people with experience and particularly that empathy for a customer. So we've been in the hot seat. We know what it's like, we know what the customer's trying to do, we understand the customer's language. So the global team is new. However, historically there's been similar roles within Microsoft for quite some time. And speaking with Brett Arsenault yesterday, he said, well, actually he started in a similar role probably about 20 years ago. So it's one of those evolutions where Microsoft recognizes that where we have people like me in place, there's a stronger and deeper connection with our customers. And of course that enables the Microsoft sales machine to go and do what it does. But we also play a role in building that long term strategic trust, thinking about, you know, what are our customers doing, what are their tactical challenges, what are their more strategic challenges, and then being able to sort of bring some of those insights back into the broader Microsoft machine.
But it's just really, really critical for us. And what I find is when I go and speak to customers that there's an advantage. I know many through the community, but then also when you sit there and have a conversation, and you share the fact that we're not remunerated on sales, which is pretty rare inside Microsoft, then customers are suddenly like, great, I'm not talking to a salesperson, I'm talking with someone who understands the challenge. And then we can actually have some really deep and meaningful conversations. [00:04:16] Speaker B: Okay, so there's a couple of things that's really interesting that I want to get into. So you're not talking to a salesperson. So are you saying that people are instantly disarmed? [00:04:24] Speaker C: Yeah, look, to a degree. And obviously, you know, they're not just disarmed by the fact that I'm not a salesperson. You know, they also know from my background, you know, usually part of the introduction is, you know, former CISO. [00:04:36] Speaker B: Sure. [00:04:36] Speaker C: I joke about being a recovering CISO and, you know, look, I understand the challenge. My area that I look after, you know, in Australia and New Zealand is focused around what Microsoft calls enterprise commercial. So all our major organizations and financial services. So I've worked in, you know, those sectors. I've worked in a range of sectors. So to be able to go and have those conversations with the background that I've got is, you know, is really, really valuable. [00:05:02] Speaker B: And you said before, Mick, that with your background, you inject some of the insights back into, in your words, the Microsoft machine. [00:05:10] Speaker C: Yeah. So part of the expectation around the role is coaching internally. So we will spend time, and part of Microsoft's Secure Future Initiative is the expectation that everyone in Microsoft sees security as priority number one. People's performance is measured on that.
So part of the role we play is actually educating people about what the Secure Future Initiative is. We're also helping teams understand, you know, what is the day in the life of a CISO? What does it look like for their teams, their heads of or their general managers, their leadership team, and helping them understand that maybe I don't need to go and soak up the CISO's time. Maybe there's other ways to access the organization. How do I go and have that conversation? How do I do that in a meaningful way that resonates with the customer organization, rather than turning up and saying, hi, I'm from Microsoft, I'll solve all your problems? And it doesn't always land well. And by the way, you know, I've been on the other end, I know what that conversation sounds like. So that trust, that empathy is really, really critical. And often we'll interject with a team when they say, hey, I'm about to go and approach a customer, I'm going to do it in this way. And I'll go, that's not going to land well. You need to think about this. You need to understand the CISO perspective and then reconsider how you might go and engage with that organization. [00:06:28] Speaker B: Okay, so a couple of things in here that I want to talk through. So you mentioned the word empathy. [00:06:32] Speaker C: Yep. [00:06:32] Speaker B: So what does empathy look like in your eyes? [00:06:35] Speaker C: I've said this internally at Microsoft. My measure of success is if I'm enabling a CISO and their team in a customer organization to be more successful. So that might be through the Microsoft capability. It might be just through sharing some experience or a way I've approached a problem. It may be through an introduction. So the team that I'm part of is a global team. There's about 44 of us globally. We've come out of CISO roles. There's some former CTOs, CIOs in the group as well. We've had some people that have had deep experience in public policy.
So we've got this range of skills and a range of industries in the group. So again, sometimes the power is in an introduction, to say, actually, I don't really know your sector, but I know someone in the team who does, or even helping with introductions back into the central Microsoft CISO organization, or even direct referrals into our product engineering groups where they can talk with a deep subject matter expert. So often I think that's the way to really, really help. As I said, we're not sales remunerated. If I'm talking about a particular product, then I'm not really doing my job. We're there to talk about the problem at the high level and then, through introductions, through engagement, referring people, often that's a way to give an organization the help that they need. [00:07:54] Speaker B: So from your point of view, what do you think it is that people have perhaps missed in the past on the vendor side? Because you said before, like, you know, what does a day in the life of a CISO look like? You know, perhaps people don't have your background, so they may not be acutely aware that they're missing something. Is there anything you can share? [00:08:10] Speaker C: I think it's interesting, and Covid really highlighted it, that, you know, there was a lot of pressure on sales teams, so there was a lot of direct reach out. You know, cold calling potentially reached a peak. I was getting phone calls in the middle of the night from vendors that are offshore that were trying to access the market, which is pretty offensive if you ask me. And then you'd get the call from the vendor that you'd never spoken with before and they'd be telling you that they've got the solution to your problem, but they don't know you, they don't know your organization, they don't know your priorities. They're assuming that you don't have a plan. So, you know, the fact that I'll turn up tomorrow and say, hey, I can sell you this, that'll solve a problem, completely ignores that.
I've got a budget, I've built a plan, I've sought funding for particular reasons. I'm trying to close control gaps, I'm managing risk. So this idea that I can turn up as a vendor and I'll sell you something in the next couple of months because I've got a sales deadline or a target completely misses the point. So understanding, you know, planning cycles, realising that, you know, I've got a plan that's very clear. I told some vendors in the past, you know, yep, I like what you're offering, but it's in year three of my plan, so come back in two years and we can talk. Which is pretty hard for some vendors to hear, but the industry is maturing. The expectations on a security group are higher than ever from a business perspective. We don't have unlimited budgets. We've got to be really clear about our strategic plans, what we're addressing and why, where that's adding value to a business, and that doesn't always align to what a seller wants to hear when they're knocking on your door. So I don't know if I've answered the question there, but it's always very interesting when you get these calls and it just doesn't land well. So those strategic insights are what we can offer. And thinking about our organization, we're not really focused on this current financial year. We will get rolled out in support of some tactical initiatives. We might help the broader Microsoft machine get access to a customer, but that's not our priority. It's about building those longer term relationships, where if Microsoft was to see a benefit, it could be 12 or 18 months away. [00:10:16] Speaker B: The interesting thing that you shared about the CSO team is that the people they've hired, like yourself, have actually come from industry or government or circumstances somewhere that adds that dimension, perhaps, that maybe other vendors don't. So what do you now see moving forward with the team? I know you said it's nearly established, but is there anything you can share moving forward?
As we enter into 2025. [00:10:40] Speaker C: It's actually been a real joy to join the team. We've all got a level of imposter syndrome. I had the opportunity to go to Redmond for an onboarding and meet all these people in the room from some Fortune 500 companies, you know, massive companies all around the world, and everyone's been trying to solve the same problem. So that was really good. And then to hear the different perspectives in that community was so helpful, and the depth of experience across the group is so valuable. But then the ask is that we challenge Microsoft. So you know, we challenge the perspectives that Microsoft are working on, challenge the assumptions. So Microsoft has deep relationships with many, many customers. We've seen examples today of reference customers up talking with Judson and the like. But bringing our own perspective into the organization, making our own connections within the organization and challenging the thinking of, you know, a significant software vendor in the world, playing a critical role in infrastructure, by the way, at a critical time for Microsoft in the world. From a security perspective, it's a great time to go, you know what, you could think about this slightly differently. And I think that's the opportunity that we have. [00:11:50] Speaker B: So how do you go about challenging a big machine like Microsoft in a way that is conducive to perhaps the culture, or people not maybe feeling offside by, hey, Mick's just come in and he's saying this? What would be your way to go about that? To find that balance, perhaps. [00:12:08] Speaker C: Yeah. So part of our role, and it's called out, is that we're expected to bring our own observations back on what do we see is missing in the market, what are some of the use cases that aren't being fulfilled. And likewise, when we're out there talking with our customers, we're getting feedback all the time.
So it's an expectation of our role that we bring that feedback in, and then we will take that feedback through to product engineering groups or into the Office of the CISO, and then any work is going to go through a prioritisation about, you know, is this a feature request? Where does that fit into our priorities? So we're still, I suppose, in that not quite storming and norming phase, but we're absolutely thinking about, you know, how do we bring that value to the organization. Culturally, Microsoft is really open. I've been surprised about how good the culture is, how open the culture is to feedback. And then coming in at the level we have, we've been treated really, really well in terms of, you know, we've been brought on through our experience. There's a value afforded to that experience and people want to hear from us and they want to learn from us. So that's been great. But we're still sort of shaping up how it's going to work and we'll refine that over time. But the function's probably, I'd say it's 12 months old. Although many of us have come on more recently, you know, I'm six months in, and we've still been hiring since then. [00:13:28] Speaker B: Why are you surprised? [00:13:31] Speaker C: Because culture is hard and big organizations will have subcultures that might exist around a team or might exist under a particular division or leader. So it's exceeded the expectation. And I came in with pretty open expectations. I've never worked inside a vendor before, but to see the level of effort and attention that they give to culture, the number of mandatory training courses I've had to go through, some are repetitive of what I've done in other large organizations, but from some I've learned so many new things coming into this organization. And, you know, the focus around diversity, inclusion, the focus around wanting people to be heard, you know, Satya's focus around growth mindset. Sure. It's actually real.
It's not just the glossy brochure that goes out in the public statement. You come inside and you see it every day. And that's been quite eye opening. And the fact that a major organization is doing that. And this is where, you know, referring to the Secure Future Initiative and the cultural change around security, every organization is thinking about, how do I improve security culture? Microsoft is doing it at a scale that's never been seen before. They're doing it in a way that maybe hasn't been tried before. And when I'm talking with customers, there's a high level of interest to understand, well, what is Microsoft doing? How are they approaching this? How might my organization think about trying to drive something similar? So it's interesting. [00:15:01] Speaker B: Curious to know, you said what's missing in the market? So what is missing? [00:15:07] Speaker C: Simplicity is missing. Sure. I think that's the real challenge. So again, if we go back to your question earlier about the vendors, everyone's got a solution to your niche problem and they're coming in and, you know, with this little tool, I'll solve that. [00:15:20] Speaker B: You know, point solutions. [00:15:21] Speaker C: Yeah, point solutions. And then you're left with this integration challenge. [00:15:25] Speaker B: Sure. [00:15:25] Speaker C: So I think, I don't think that there's much missing. There's always going to be a new solution to the emerging problem. But what is missing? And you know, a number of organizations, Microsoft included, are sort of going down this platformization approach. Sure, it's not going to solve every challenge, but I think we've got to work towards simplifying the conversation around security. Certainly when we talk with our business leaders and our boards, you know, they don't want to hear about the complexity of the problem. They want to hear things in simple terms. So the simpler we can make this problem without dumbing it down to nothing.
[00:16:06] Speaker B: Sure. [00:16:07] Speaker C: I think that is the challenge, but I don't think we want for much. I think one of the things that I call out is that we do talk about cyber burnout. It's a real challenging industry, but in some ways we've got all the things that we're asking for. So years ago, we weren't getting the support, we weren't getting the executive level engagement, we weren't talking to our boards, we weren't getting funding to a degree. Now we've got all of those things and in some ways it's overwhelming. But we've got to think about how do we change our language, how do we communicate in simple terms, how do we take advantage of the opportunity that we've got and make the most of that on behalf of our organizations. And that's somewhat of a new challenge. [00:16:48] Speaker B: In terms of customers in Australia, is that the same sort of chatter that you're hearing? I interviewed someone on the customer side who talks about platformization, reducing tools, complexity, more integration, interoperability. Is that the same sort of thing you're hearing across some of the customers that you're speaking to in Australia or Asia? [00:17:05] Speaker C: Yeah, absolutely. And cost, cost is always a challenge. So with the global economy the way that it is and some uncertainty, security teams are still being asked to, you know, manage costs and be more effective. So some of the platform plays. And by the way, you know, if you're a major organization, and I heard Brett Arsenault say this yesterday, there's no expectation that you would use Microsoft end to end for absolutely everything. You know, that's not a reality. In fact, you know, we don't even have capabilities for every requirement. But depending on where your organization sits and your level of maturity, there's value to be had from looking at that platform approach; you take away many of those integration challenges that really bring organizations unstuck.
And if you can simplify your environment, then that means that you can simplify the training requirements for your team. You can make their life easier. So if you think about it from your team perspective and your people perspective, how do you make their life easier? How do you allow them to focus on what are really the challenging problems rather than the day to day mundane stuff that cyber teams spend a lot of time on, and that doesn't allow them to get to those higher order problems. So not to jump to AI, but there's a bit of excitement about what we can get out of some of the technologies that are coming along that can shift the way that we work and move the humans to higher level activities. [00:18:25] Speaker B: Let's talk about costs a little bit more. So we spoke before, like, obviously people have got all these point solutions that perhaps they're not integrated, they're not being leveraged properly. It's not end to end. Now I know, like you said, you can't get the one vendor that does everything. Maybe you could reduce it. [00:18:39] Speaker C: Yeah. [00:18:40] Speaker B: But would you say there's a lot of money being spent on just point solutions that aren't maybe helping at all, aren't moving the needle? [00:18:46] Speaker C: Well, yeah, I mean, often you'll talk with teams and you'll find that they've got a capability, and then you tease into it: how much of this capability are you actually using? Are you able to use it proactively, are you using it reactively, are you using the full capability, where are the overlaps in that capability with other products you might have? And when you start digging in, you'll find out that maybe we're not using all the capability, maybe we're only using it reactively. So really, when you're talking, and I used to do this with my team all the time, challenge them about what are you actually using. But then I think the thing that we often forget is that there's an overhead with every vendor.
So if you're in a highly regulated industry, your vendor governance, your vendor oversight. But even go back to when you're doing your market scan and you're looking at a million tools to find the Rolls Royce tool, then you've got to go through legal and procurement, you've got to do a negotiation to get there. There's a cost that we often don't quantify in all of that effort. Whereas if you've got the ability to use some of your platform vendors, and there's others outside Microsoft, then you've got an existing contract. So then you can sort of limit the activity to, well, actually, I want to look at the capability of the product. Is it good enough? Does it meet my use cases? Yes, it does. And then you've got that simplicity from that legal and procurement process. That means instead of maybe waiting 3 months in some organizations, or even 6 to 12 months in others, before you can go through all those governance hoops to even start to deploy a capability, you can actually move to close down a risk exposure in a quicker way. So I think that's a real challenge. And often security teams aren't always recognizing the overhead that comes in the background. That does cost an organization money and may not directly cost the security function, but it's certainly a business cost. [00:20:32] Speaker B: So, Mick, we are running out of time. So just to close off, is there any sort of closing comments or final thoughts you'd like to leave our audience with today? [00:20:39] Speaker C: Look, I just. I still think it's just such an exciting industry and when I talk about, I mean, security specifically, we've got a great challenge. As much as I've stepped out of the CISO role, it was a job that I love. I love the people in the industry, but I also love what we can do as security leaders to help grow those people, to help them think about the problem on a wider scale. So for me, that's the excitement.
And now I get to do that, I suppose, on a larger level than just doing it in my organisation or sharing across, you know, my former industry sector. Now I get the chance to go talk to different people in a range of industries. I get to learn from them. Sometimes I get to drop a nugget that someone really values and that's nice, but I suppose it's just that ongoing learning journey for me. So I'm having fun, but I'm getting to see a whole bunch of things and obtaining a perspective that I didn't always have before. [00:21:29] Speaker B: I personally appreciate your perspective. So thank you so much for your time. I really appreciate it. [00:21:32] Speaker C: Thank you. Enjoyed the chat. [00:21:39] Speaker B: Joining me now in person is Ben Lamont, Chief Data Officer of the Australian Federal Police, and Helen Schneider, Commander, ACCCE and Human Exploitation, also at the AFP. And today we're discussing how the AFP is using AI to protect Australia and its people. Ben, Helen, thanks for joining and welcome. [00:21:57] Speaker D: Thank you. [00:21:57] Speaker C: Thank you. [00:21:58] Speaker D: Good to be in. [00:21:59] Speaker B: Okay, so, Ben, I want to start with you first. So walk us through how the AFP is using AI to protect Australia and its people. [00:22:06] Speaker D: Yeah, look, I think giving a bit of context is probably the easiest way to start. We've got a huge amount of data that's in front of us. We've got a huge amount of jobs that are coming into the AFP. So really we have no choice but to lean in, because it's beyond the human scale and becoming more and more beyond the human scale. So AI is really the solution to a lot of these issues that we're facing with the amount of data that we're dealing with and the complexity of that data. So we're leaning pretty heavily into AI, doing it in a responsible and ethical way, because we police by social license. So it's really key that we do that.
But we're using AI across our business, mostly for lower cognitive tasks like translation and transcription, for processing large amounts of video that we've collected lawfully, and for our telephone intercepts and others. So I think really key for us is that we're pushing it across all of that area where we've just got a deluge of data and just need to have some processing of that data, and then having the human in the loop of that process. And I can go into more detail about that, but really making sure that the prediction that an algorithm makes is separated from the decision of a human being. [00:23:15] Speaker B: Before I turn that over to you, Helen, I want to go back to you, Ben, just for a moment. Typically, the people that I interview are businesses. You know, I previously worked in a bank. It's one thing that when you lose your money, for example, you can get it back, but when you're dealing with the work that you're both dealing with, it's a little bit different. There's a lot more risk; you're dealing with, you know, people's lives. So how does that sort of factor in when you're talking about, you know, the prediction side of things in terms of any AI potentially hallucinating, coming up with something, verifying to make sure, well, does that make sense with, you know, the response I'm getting? Talk me through that. [00:23:49] Speaker D: Yeah. So that's a really key component to this, is we have to have assurance in those processes. You know, we're very good at using scientific methodology in our forensic area, for example. And so we're taking that same type of conceptual process and building it out. So generative AI, we wouldn't use that where there's an absolute key risk, because it could hallucinate or there could be false negatives and we're missing stuff.
So we would use more of a traditional kind of algorithm there, where we can have way more assurance on the inputs and the outputs and the work that that algorithm would do against that data set, and then going back to the original data set. So that kind of lineage, where we may transcribe a telephone intercept, for example, we want to make sure we can go back to the original source and listen to that before we make any decision on having an impact on someone's liberty or an outcome of a case. [00:24:41] Speaker B: Because I guess it's not like, you know, as people would know, you don't just sit back and go, okay, well, it's going to do it all for me. But what was coming to my mind, Ben, as you've been speaking, would be, it's kind of like a conveyor belt. It's making that a lot faster by the time the product comes out. It's just increasing that production line, because you're leveraging the AI and Copilot to do that. [00:24:58] Speaker D: Yeah, exactly. And without that, it is to the point where we would need thousands more people to actually allow that to happen. So this is actually giving us a force multiplier and giving us an ability to look at more data that we wouldn't have been able to get through otherwise. [00:25:14] Speaker B: So, Helen, I want to flick over to you now, and I want to discuss how the AFP is addressing challenges related to GenAI and deepfakes, which I want to get into a little bit more with you, but also specifically what you've been doing with the ACCCE. So walk us through that. [00:25:31] Speaker B: What does that look like? [00:25:32] Speaker E: Well, just to give you some context, the Australian Centre to Counter Child Exploitation, the ACCCE, really leads the national response to online child sexual exploitation referrals that come into Australia. And we coordinate those out to our state and territory police and our own members from the AFP.
So as Ben has described, the data we're seeing there has massively increased. In the last financial year, we received just over 58,000 reports into the ACCCE, and that was an increase of 18,000 reports from the previous financial year. So as you can see, the data we're having to deal with is not just growing in volume — it's really increasing the load on our investigators and capabilities. The issue is that we're starting to really see AI-generated or AI-altered child abuse material as well. [00:26:28] Speaker B: Okay. [00:26:29] Speaker E: There are some real, I guess, challenges for our victim identification capability, where we don't want to be wasting time looking at images where there may not actually be a real child in harm, and then not looking at the images where there are. So with the photorealism of AI, that is one of our risks. So whilst AI poses a criminal threat for us, it can also be a solution to some of our problems. We're looking at partnering with industry in relation to how we can improve our processes to deal with the scale of material that comes in. And we're also looking at how we can save our investigators from having to look at volumes and volumes of this material — looking at tools, AI tools, that could help them process material, because of the vast volume of it, and potentially help us intervene and disrupt quicker. So if we can predict behaviours that might be equivalent to online grooming or something like that, then we can disrupt that. There are real opportunities for us in the deployment of AI tools. But as we said, we police by social license. We do have strong privacy in Australia, and as a law enforcement agency we work really closely to make sure that we're employing responsible use of technology and emerging technology.
So we've got to make sure that we always maintain that trust of our community in relation to how we use technology. So that's our current environment at the moment, and it's a complex one. [00:28:24] Speaker B: A lot of people I've interviewed on my show — this one that you're on today — I've asked them, how are we managing deepfakes? From a social perspective, people tend to point at, you know, Facebook and Meta and friends, but then it's, well, we can't monitor it all, there's too much. And to your point earlier, people sitting there manually looking at it does quite a lot of psychological damage to people. So how can people start to discern if something is AI generated or not AI generated? Because you both obviously have an investigative background — you could probably say, well, it looks suspicious for these reasons — but the average person doesn't have the capability or the know-how, like you both have, to be able to discern that. And equally, it's quite exhausting if you had to look through every single image online to say, is it fake or not fake? [00:29:14] Speaker E: I think, you know, some of them are obvious. [00:29:17] Speaker B: Sure, of course. [00:29:19] Speaker E: We've all been on social media — that's probably a deepfake. I think for us, a key part of the work we do through the ACCCE is all around prevention, particularly because we see that a lot of our children in Australia are on social media, and we can see online child sexual exploitation occurring through things such as financial sextortion, where manipulated photos that are quite benign are being used to create criminal content.
I think one of the big things for us is making sure children really understand how these accessible technologies that they use themselves can be used for nefarious purposes — and, I guess, that critical thinking around the impact of that. One of the things we talk about a lot is the fact that people might not realize that if you use an AI tool, for example, to turn a completely benign image into a sexualized image of a child, it is still child abuse material under Australian law. [00:30:31] Speaker B: Right, okay. [00:30:32] Speaker E: I think sometimes people might think, well, if I generated it myself… [00:30:36] Speaker B: Or it's just me, or, you know, I sketched it — that's another one I've heard. Yes. [00:30:40] Speaker E: So I think it's really important — prevention, education and awareness is becoming a real capability in its own right as we tackle some of the challenges with technology. [00:30:50] Speaker B: And would you both say, because of AI and technology — that conveyor belt getting a bit faster — that could be the change that prevents a lot of these crimes? It could be that couple of seconds, perhaps, that could make a difference? Is that what we're going to start to see more of in terms of leveraging AI and how you're going about it? Do you have any sort of predictions? [00:31:14] Speaker D: On that front, I think we have a criminal cohort who are very early adopters of technology — they have been ahead of most of society, and that continues. AI is no different. So we need to counter that with AI, because of the speed and harm that can be done with AI by criminals: they have a longer reach and a shorter turnaround time. So we have to deal with that with AI. That means we need to lean in on it and start countering it that way.
And going to your point around deepfakes, there was a lot of work done in the media around Cyber Monday and Black Friday about looking for scam websites, and the AFP has done a lot of messaging around that. It's the same again: criminals can put those sites up way quicker now, and we need to counter that with AI — not just the AFP, but across government. Hence why we've been putting new messages out there about how to identify those websites and how to identify those images, and I'm sure there'll be more messaging coming up with the election, really going to what's a credible source of that information and where it's coming from. And then there are initiatives like Adobe, Microsoft and the BBC doing watermarking — putting a cryptographic watermark inside imagery so that you can say this imagery was captured by the BBC and put out by the BBC, and you can do those types of checks — and many others that will become critical over time as well, because as the deepfakes get better, you need to look at where the source of that information is coming from. [00:33:00] Speaker B: And they are going to get better — they're getting better every day. And it's even concerning: I've got a security background, you've both got your investigative backgrounds, and it's going to be worrying for the average person to be able to combat that. The next point I want to just quickly touch on is ethical considerations. I've been speaking to people all over the globe about this, and it's not an easy one to answer, so you don't have to answer in terms of a binary response — it's just about hearing your thoughts on the work that you're doing.
As I mentioned, it's very different to, like, a bank, where you can get your money back; with your type of work, it's people's lives that we're talking about, which you just can't necessarily get back. So I'm really keen to hear — what does that look like from your point of view? [00:33:44] Speaker E: I guess, you know, we're a signatory to the ANZPAA AI principles — the Australia New Zealand Policing Advisory Agency. [00:33:55] Speaker B: Yep. [00:33:56] Speaker E: And there are some key things that we subscribe to there. I think really being transparent around how we're doing things, but also, as Ben touched on before, we have that human-led approach to AI, and that's really critical, because ultimately we have to be accountable for our decisions, and they have to be shown to be well considered — particularly when you're talking about decisions you make that deeply impact the community. And our community has a right to understand how we operate. So I think those kinds of core values are critical to how we operate in the AI space. We often talk about the fact that we police by consent and we need the social license of our community. And it is inherently true that if we don't have the trust of our community, the challenges we're facing from a risk perspective now are not something that we can respond to on our own. So when I talk about prevention and education, and talking about how these tools might be used for that, it goes to that whole point: I need people sitting in their lounge rooms at home to be as accountable as I am in having a conversation about online safety. Because the reality is, for our children, the online world is real for them — it's like the room we're sitting in right now having this conversation. It might not have been for me when I was a child.
[00:35:31] Speaker E: So how do we make that experience positive and safe? That's how I look at it from my point of view. Obviously, more broadly in the enterprise, we have that really important responsibility with the use of AI because of the fact that we need partnerships. We need partners to respect us as well — not only our government and our community, but we require global corporations to partner with us these days. So we need partners, and we want to be that partner of choice, the AFP. And to be that partner of choice, you have to be seen as principled and ethical in how you conduct your business. [00:36:08] Speaker D: Yeah. And we've had to change the way that we do this, because we used to do it around procurement, when we'd buy a tool. But now you don't have to procure these tools — some of them are within systems and processes. So we've strengthened our governance internally: we have a Responsible Technology Committee now, and it is about responsible use of technology. We've got more conversations with our university partners — we've got the AI for Law Enforcement and Community Safety Lab at Monash University. These things are really leading to a more robust and a lot more nuanced conversation about the use of AI. We know we can't just walk away from AI — that wouldn't keep the public safe. So we have to find that balance where the community expectation is that we use it, but at the same time without overreach or overstep. [00:36:58] Speaker B: Speaking of that point around overreach, what about concerns around bias, for example? I know that's coming up a lot in my interviews — how to manage it. Do you have any commentary around that, obviously with the work that you're doing?
[00:37:10] Speaker D: Look, I think this is where understanding the AI itself matters. Whether it's a black-box capability or generative AI — where you may not be able to look at the model itself — being able to do testing on efficacy and bias is absolutely critical. We are using larger models trained beyond just law enforcement holdings, because law enforcement data alone could skew results where we need a more generic model. And that's where our partnership with Microsoft and others is really key, because we want to make sure that, just like they're doing now with other industries, they're building these tools to remove a lot of that bias and doing a lot of work on that. We need to understand that and have really open communication. But it is not just a single law enforcement data set; we're looking at a way broader societal cross-section of data, which is critical. Some of the cases that have happened overseas especially — we want to make sure we avoid that here in Australia, because it is a risk, something we have to be really cognizant of, and something we have to do serious testing around as well. The good thing is, with most of the more traditional models, you know what your training data set is and you know what process you're looking at, so things become a lot easier when you're talking about the more standard AI models. But we also don't use it in certain areas, because the community expectation is not that. So it is that balance. [00:38:36] Speaker B: Do you think by leveraging that as well, it would eradicate hallucinations, for example — if you're looking at all of the sources and trying to get more of a general consensus? Because I know you've got LLMs and you've got SLMs and things like that, and obviously it's about how to get more well balanced and find that equilibrium. [00:38:53] Speaker D: Yeah, definitely.
I think there are a few things we can do. There are adversarial processes, where you're actually training AI to check the AI as well. And that human oversight is obviously really critical. But then also, like you're saying, it's about what tool is going to be suitable for that job — not just a pure technical output, but also what is the output that you're putting in front of somebody. We've done a lot of training internally to train our people, and especially our SES — our senior executive — to know the limitations and advantages of AI, because it's no longer this back-office kind of tool that sits there with just the data scientists and data engineers using it. It's actually now in the hands of frontline members to help make decisions. So that means we need to be doing — we are doing — more training across the organisation, not just in our technology areas, to understand the limitations and risks with AI and, for that specific use case, what it actually means, what the risk of it being wrong is, always going back to that original data and always making sure there is that human oversight in the process — especially when we start talking about warrant activity or other activity that has an impact on a member of the public, or the presentation of evidence. [00:40:16] Speaker E: In a court of law — and, you know, sitting in front of the jury, talking about how that evidence was brought together and being able to explain it, acknowledging that a tool was used, but also showing what the human piece was that verified its veracity, for example. [00:40:34] Speaker D: And the good thing is that it's not just us looking internally at ourselves. There are a number of oversight committees through Parliament that we have to present at — the PJCIS and the PJCLE.
So for intelligence and for law enforcement, we also have Senate estimates, obviously, plus the Ombudsman and a number of other oversight committees and bodies that are making sure we are in the right place and doing the right thing. So it's not just looking internally within the organisation — there are all the other mechanisms that government has put around us to make sure that we are being transparent and doing the right things. [00:41:14] Speaker B: So we are coming to the end of our interview and we're running out of time, but I would just like to ask you both: do you have any closing comments or final thoughts you'd like to leave the audience with today? [00:41:22] Speaker D: Look, I think for me, AI is an exciting future, and there's a threat to it as well, with the criminal use of AI — I think it's only going to evolve. This is a long game. This is not just about the next one or two years; this is about the next five and ten years. So we've got to look at that horizon — what we need right now — but also start investing into those longer-term horizons as well. The AFP is leaning in, in an ethical and responsible way, to AI. [00:41:54] Speaker C: Thanks. [00:41:55] Speaker E: My thoughts are that technology is a bit of an enemy for us, but it is also a huge opportunity, and I think that's really important for us as an agency — and this is where we are leaning into the partnership space. Partnerships are critical for us, whether it's industry partnerships or tech companies. So we need to explore: what are the partners that we need now, and what partners are we going to need ten years from now that we're not thinking about right now? Because those partnerships help. So yeah, it'll be an exciting time, I think, the next nine to ten years.
[00:42:36] Speaker B: Joining me now in person is Leigh Williams, Chief Information Officer and Information Technology Executive from Brisbane Catholic Education, and today we're discussing the journey with AI. Leigh, thanks for joining, and welcome. [00:42:47] Speaker F: Thank you. [00:42:48] Speaker B: Now, Leigh, I'm aware that you've just recently jumped off a panel and come here to do our interview. So tell us, what did you discuss? [00:42:55] Speaker F: Sure. There were a couple of elements to the discussion. Firstly, it was around our rollout of Microsoft 365 Copilot and the journey that we've been on — how we got there and who the main beneficiaries were of rolling out something like this. The most recent session I did was with data analysts, and they were asking about the data and the proof behind it, which is great — to be able to question the evidence. And we really talked through what we were seeing, both from a teaching perspective — the workload reduction — and also from the student side, in their social and emotional wellbeing as well. Then from there it was really around: what else can we do to keep furthering this work, and what is 2025 going to look like for us and beyond? [00:43:44] Speaker B: In the keynote today, I noticed someone saying, in terms of the education sector, how much productivity they got back by leveraging Copilot, how many monotonous tasks were removed, et cetera. So what are your thoughts, given the role that you're doing, on how AI can help transform the education sector? Any insights you'd like to share? [00:44:06] Speaker F: Sure. So, to me — when you talk to any educator, they don't get into the profession to do admin work, they get into the profession to be with students. They love their students and care for their students.
So by enabling technology that takes away the administrative load — the administrative burden, the planning burden that's placed on them — it gives them more time with students, doing what they're passionate about, which is teaching. To me, that is just the greatest outcome we can see: teachers spending more time with students, because that in turn is going to help students grow and flourish, both academically and socially. [00:44:46] Speaker B: And would you say, if you were to zoom out, that would be the biggest opportunity in terms of leveraging AI in the education sector? [00:44:51] Speaker F: I think the biggest opportunity — one that we're still developing and working on, that we've been piloting — is directly to students. What we've already been trialling, and that we're going to scale out next year, is what we're calling hyper-personalisation. It is impossible for a teacher, if they've got 25 students in front of them, to teach a concept in 25 unique ways in 40 minutes — to tailor it to where every single student is at with their learning, the modality of learning that they're used to, and any specific learning needs they have. There are 25 individuals; a teacher can't be expected to do that, and they don't. But AI can leverage that power and say, well, give me the lesson plan and I will personalise it for exactly where every single child is at, and deliver the lesson in a way that's going to best suit their learning mode. So to me, that is just a game changer for how we can actually educate students. [00:45:52] Speaker B: That's an interesting observation. When I look back — I finished school about 15 years ago — I was obviously more of an English student than a maths one, but I don't think I did well at maths because I had the same teacher through years 10, 11 and 12. And my parents said, like, she doesn't quite…
The teacher doesn't explain it in a way that Karissa understands. And they were told, well, we can't change her, for whatever reason. But I think leveraging something like this perhaps could have changed my experience. Maybe I could have become a mathematician, who knows? But I didn't get that opportunity. [00:46:20] Speaker F: Yeah. [00:46:21] Speaker B: So in terms of my own personal experience — I'm just thinking it through as you're speaking — I really wish I'd had that opportunity. What do you see moving forward in terms of the impact this will have on students now? [00:46:34] Speaker F: Yep. So I think first and foremost, engagement, because students are not going to learn unless they're engaged. So we start with engagement — getting them into a learning environment, whether that's within a school context or when they're at home or anywhere else. Engagement is number one. Then secondly, engaging with the concepts, the knowledge and the skills that they are actually learning, and being able to have agency and voice over the learning that they're doing. What we're seeing with students is that they are just taking AI and running with it. I used an example before about a junior secondary geography class learning about the rock cycle. Now, for a lot of students, that doesn't sound that fun — sorry, geologists — but we can make it engaging and fun, and suit it to the learning of the student. And what we were seeing is that students were then going off and learning even more about rock formations and plate tectonics and all these other things — things that weren't even part of what the teacher had put forward — because they were interested and wanted to learn more.
So when you see that, and see the depth of the learning that they're engaging in — with the agency to even run it themselves — it was very powerful. [00:47:54] Speaker B: That's interesting, because now that the school's presented something in a way that people are more interested in, it's actually furthered their engagement, and they're going down that rabbit hole to learn even more. The other side of students being really engaged is perhaps people's concern around cheating, or leveraging AI to answer things. I'm a millennial, but with Gen Z there's a lot of content online of people getting their first job, sitting in an interview with AI or ChatGPT there responding to the questions. [00:48:26] Speaker F: Yeah. [00:48:26] Speaker B: So are there concerns around that, from your perspective? [00:48:29] Speaker F: Yeah, we had initial concerns around that. The way we tackled it was on a number of fronts. Firstly, open and direct collaboration and dialogue: bring us your concerns, the things we can go and test and run pilots on, and let's determine — are these real fears, or are they actually just myths that we can bust? On the concept of cheating, we actually busted that myth pretty quickly. When we rolled out Copilot with senior secondary students, we found straight away they were using it for tutoring, because they weren't interested in not knowing the concept. In fact, the feedback we got from students was: I don't want AI to hold all the knowledge, because if I walk away from AI, I don't have the knowledge — the AI does. So how do I use AI to help me, so that no matter where I am, I will always have that knowledge or that skill that I've learned, and be able to take it with me into my career or my personal life?
So that was really powerful to hear coming from students. And when we looked at what they were using AI for, they were using it more like a tutor, asking it more in-depth questions. They were asking it to give them feedback on drafts, or to look at different concepts. If they were writing, say, a persuasive essay, they would say to ChatGPT, or to any gen AI like Copilot: here's my version of the argument — can you generate a counter-argument for me? And just that they would even think to do that — it would generate a counter-argument that in turn would help make their own argument better, because they would go, oh, hang on, if that's a counter, I need to make sure I've addressed that in my own essay when I'm writing it. So there was a whole heap of examples like that in the way they were using Copilot and AI to better help their own learning and their own essay writing. The other main area we worked on was with teachers themselves, saying: well, if a student can cheat by using AI, maybe it's the assessment we need to look at in the first place. So we need to reconceptualise what assessment looks like, so it isn't about who can write the best essay — it's actually about what is the concept, the knowledge or the skill that you want the student to walk away with and take with them, potentially for the rest of their life. Once you get to the core of what it is you're actually trying to assess, then you can ask: what are all the different ways we could assess that, that aren't just "write an essay"? And it really helped change the mindset of a lot of teachers, to go: oh, actually, I don't need to set it as an essay. An essay was more a way to scale out an assessment and make it easier to mark. But now that AI can help me with marking and with assessment, that's not a concern for me anymore.
So, yeah, I can be more creative with the type of assessment that I'm actually giving my students. [00:51:37] Speaker B: One of the things that came to my mind as you were speaking — going back to the maths example — is that when you'd do a maths test, you'd have a calculator. So, two plus two is four, sure, but you still had to show your working out, which I'd liken to critical thinking. [00:51:50] Speaker F: Yes. [00:51:51] Speaker B: So people could say, well, people are cheating in maths exams because they've got a calculator. [00:51:55] Speaker F: Yes. [00:51:56] Speaker B: Now Copilot or AI is a new, high-tech version of a calculator. But the thinking is still there, in terms of: how did you get to that answer? People still need to understand how they got there — to your point, how the data makes sense, how they can discern whether it's the right answer or not. So while people are perhaps worried about it removing that critical thinking, it's actually still there. Even to ask a specific prompt, the critical thinking still needs to be there. [00:52:23] Speaker F: Yes, correct. And prompt crafting is such a great example, because it's now so prevalent that it takes a very sophisticated prompt to develop a good, sophisticated answer. Just the development of that prompt shows critical thinking — higher-order thinking skills from that student — and then being able to put the prompt in a meaningful, palatable way that a large language model can actually understand, so it doesn't generate some, you know, nonsense answer or hallucination. So the generation of the prompt itself can actually show critical thinking and analysis by the student. [00:53:01] Speaker B: So, zooming out, we've obviously spoken about teachers and then students. What about the curriculum? [00:53:06] Speaker F: Yes. Yep.
So this is, to me, what's exciting, because there are so many different ways that you could implement a curriculum. Yes, we have an Australian Curriculum that we all utilise, but it's always up to the teacher how that curriculum is actually taught. And that's where the power of AI really comes in, because it gives so many more avenues. It doesn't all have to be the same way: one student hands in their assessment, or shows their ability in a particular skill, by generating a PowerPoint presentation; the next student does it with an oral presentation; the next student builds a 3D model. They could all be demonstrating the same skill and concept, just delivering it in completely different ways. And AI can still ingest all of those assessments, still assess them and say, yes, that child actually does have that understanding, or that level of expertise or mastery over a concept, even if they have handed in what look like three different assessments. [00:54:11] Speaker B: So would you say there's still a bit of work in understanding how to map out the curriculum — what this looks like, to your point, like, do we need an essay? Would you say that's still evolving in the education sector: how to leverage AI so that critical thinking is still there, and people aren't necessarily cheating when they're leveraging AI? [00:54:29] Speaker F: Yeah, yeah. So there are probably a couple of things. There's firstly perception — getting past that barrier or hurdle of: what are my own biases that I'm bringing to this, what are my thoughts around AI, and what's actually real, what's fact and what's fiction?
So how do we help change perception around what's possible? Then, from a teaching perspective, how do we motivate, entice and engage educators into the space to even start thinking about it and running pilots themselves in classes — to say, let's try using gen AI to do this? Or, if we're going to do critical thinking in the classroom — like the example I used before on persuasive arguments — okay, I want you to get up and give a debate, but guess what: you're going to debate against AI. Here's the context of the argument; you're in the "for", AI is going to be in the "against". Let's go. It still has all the critical thinking elements there, and that student could take it away and practise at home. That's tutoring, not cheating, because they're actually just getting AI to help better the argument they're trying to deliver. So to me, there are lots of ways we can still bring educators along on this journey. Are we 100 per cent there? No, of course not, and it's still going to take some time. Like any implementation of new technology or new inventions, you've got innovators leading the way, you've got the pragmatists who want to see the data — and so, yes, we've been releasing as much data as we can on our outcomes — and then you're always going to have the laggards. But what we're trying to do is have the smallest group of laggards possible, and work out how we get them over the fence as quickly as possible so that we've created a movement. And once we create that movement, it's almost like, you know, you're going to be left behind if you don't get on that bus with us. So that's what we're working towards: what's the movement we can create around this? [00:56:32] Speaker B: So do you think the laggard group of people is diminishing? [00:56:36] Speaker F: Absolutely, absolutely.
From what we're seeing, the questions I was getting from principals or teachers even six months ago compared to the questions I'm getting now are chalk and cheese. Six months ago there was some fear around it: lots of questions about data, show me the evidence. Now the questions are, can I use it for this? Can I use it for this? Can I use it for this? And so they're now pushing the boundaries, and as a technology expert we're going, these are great questions. How do I set up a safe and secure sandpit for you, or put up some guardrails, so that I can just let you run and innovate with this? So it really has flipped from where it was even just six months ago, just by the types of questions I'm getting asked. [00:57:24] Speaker B: So, Leigh, we are running out of time. However, I do want to flip to one last question. [00:57:30] Speaker F: Yeah. [00:57:30] Speaker B: For you, it would be around the security side of it. [00:57:33] Speaker F: Yes. [00:57:34] Speaker B: So this is obviously top of mind. I know it's Microsoft's number one priority, and this is a cybersecurity podcast. So I'm really curious what comes to mind on the security side of things, specifically for the education sector. [00:57:47] Speaker F: Yes. So, look, I will preface this by saying that we didn't just get here by happenstance. We are at the end now of a three-year cybersecurity uplift. Three years ago, like a lot of organizations, when a lot of cyber threats started coming in and being more prevalent across our markets, on top of our cyber insurance renewals, we needed to do something more with cyber. So we're just at the end now of a huge cyber uplift program across the board, regardless of AI. Just everything across the board for us. So we've got a much better cyber posture. That's coupled with us building, over the last two years, our data governance and our data frameworks as well, and how our data interacts with other pieces of data.
We've been doing that for a couple of years. So we are in a very good position now that we were not in, I would say, three years ago, and that has now helped us leverage AI into the future. In terms of security specifically, for us there are two elements. There is still the security that we need to put around our data, because we are talking about children, and when you're talking about medical data, when you're talking about court orders and things like that, those have to be held absolutely secure. And so they are still held absolutely secure and encrypted across all of our platforms, so that just a general AI prompt will not surface them. And we've run significant tests and a lot of penetration tests around that. Then for everything else we're asking: what are the guardrails? Where do we want AI to search, and where do we not want it to search? So let's just start putting out guardrails. And I talk about it in terms of guardrails, not barriers, because we say we still want you to go and innovate, but here are the lanes that you need to play in. The last side, to me, is the human element. At the end of the day, we still very much say to any of our staff: if that AI has generated, say, an email for you to send, or whatever it is, you still hit send on that email. It is still your email. It is still your ownership over what you do with it. If AI generates a lesson for you, you're still the one that goes and teaches that lesson. So it is still your lesson, your accountability over that. And I've even heard Microsoft themselves talk about how AI will get you about 80% of the way there. The other 20% is the human element, and even the relational element, because teachers know their kids best, and so they still want to put their flavor on things, which is great. They still know: yep, I was going to teach that lesson, and AI, that's a great lesson.
But I know I've got a child who has just walked into my classroom whose parents, last night, have just gotten divorced, and this kid is walking into my classroom emotionally unsettled and emotionally unstable. I can't teach this lesson that's been given to me, no matter how good it is. I am going to meet this child where they are at socially and emotionally first, so that they are safe and secure. Then I can get on with my teaching. It would be very hard for AI to ever replicate that. [01:00:58] Speaker B: True. [01:00:59] Speaker F: So to me, security comes at the foundational level, putting in those guardrails. But over the top, we've still got humans there making sure that everything's okay, and that what is delivered is still in the best interest of the child. [01:01:13] Speaker B: And just to close out, do you have any final thoughts or closing comments you'd like to leave our audience with today? [01:01:18] Speaker F: I would like to, I guess, challenge people. I get asked a lot: well, why go with this? Why are you doing this? Why are you doing that? And my challenge back is always, well, why not? So that's probably what I'd leave people with: flip your thinking. Instead of "why do it?", ask "why not?" The risk of actually not doing this can be just as good a use case as why you would do it. [01:01:46] Speaker A: And there you have it. This is KB on the go. Stay tuned for more.
