Episode Transcript
[00:00:15] Speaker A: Welcome to KB on the Go. Today we're coming to you with updates from the Microsoft AI Tour, on the ground at the International Convention Centre here in Sydney. Listen in to get the inside track and hear from some of Microsoft's global executives. You'll get to learn more about the exciting SFI and MSTIC cybersecurity solutions in depth, and you'll be hearing from a select few Microsoft partners. We'll also be uncovering exactly how the Australian Federal Police are leveraging AI to detect crime and keep people in our community safer, plus much, much more. KBI Media is bringing you all of the highlights.
Joining me now in person is Bret Arsenault, Corporate Vice President and Chief Cybersecurity Advisor at Microsoft. Today we're discussing the Secure Future Initiative, also known as SFI, and the learnings from a year on. So, Bret, thanks for joining, and welcome.
[00:01:18] Speaker B: Thank you. Thank you for having me, KB.
[00:01:19] Speaker A: Okay, so Bret, obviously before we started we were talking a little about your tenure at Microsoft, which is quite a long time. So perhaps talk to us a little more about SFI. I mean, there are a lot of acronyms.
[00:01:32] Speaker B: Sure.
[00:01:33] Speaker A: Tell us a little bit more about what this means.
[00:01:35] Speaker B: Yeah, no, I think it's great. Kidding aside, I've had five different careers at the same company, so it's been fantastic. When we think about SFI: fundamentally, every year we look at what's going on in the threat landscape, the technology landscape and the regulatory landscape, and then we decide how we want to address it, the work we do to protect ourselves, in my previous role as CISO, as well as to protect our customers. And so SFI really came down to the technology shifts that are going on (mobile, cloud, and now AI coming onto the platform), the increasing regulatory pressure, and, added to that, a threat landscape that I think everyone's very well aware of, with its speed and sophistication. It was an effort to rethink the way we build our software and services. That came down to three things. One, we really want to make sure we think about how we're going to embrace and use AI to make a better and more secure world. Two, fundamental engineering changes in how you build software and services. And three, we really need to work on regulatory harmonization globally, because many of the customers I've met, including here in Australia, are under a lot of pressure from regulation. So that's the fundamental framework of SFI. There's a lot of focus in the press on the engineering part, and that's broken down into three simple things: secure by design, secure by default, and secure in operations. It's not just that you want to ship things that are designed securely, which we always do; we're actually changing the level of the defaults, so that out of the box things like two-factor authentication are turned on, and making sure that things don't drift once you implement them.
I think that's probably the easiest way to break it down. Too much?
[00:03:21] Speaker A: No, that's perfect.
[00:03:22] Speaker B: Okay, great.
[00:03:22] Speaker A: So one of the things I want to talk about with you: I was watching the event you recently had in Vegas, and Satya was out there on stage saying that security is the number one priority at Microsoft, and obviously SFI is backing that up. I know we're not going to have a lot of time, but I'm keen to talk through the three principles this is anchored on. Could you talk through them and what they mean?
[00:03:47] Speaker B: Well, the three biggest principles, for the engineering part I referred to, are this idea of secure by design, secure by default, and secure in operations. Across six pillars of technology that any company would work with, everything we build at design time includes threat modeling, code analysis, all those things; they're all in there by design. Then when we ship things, we do more and more with secure by default. So, as I mentioned, turning two-factor authentication on by default in services, deprecating old legacy protocols, using the highest level of security by default. One of the new things we're doing is Windows 11: being able to run Windows 11 as a standard user, finally. All those things are things we're doing by default. And then there's all the operational rigor that goes into that, ensuring that when people get these products, they have to opt out instead of opt in for the security components I mentioned before.
[00:04:40] Speaker A: Because AI is now really coming into the fold, we're in the AI era, as people would say. So do you think that was probably the main catalyst for implementing SFI? We also had cloud adoption, and that caused people to change, and then we had Covid, and now we've seen AI coming into it. So what would be your thoughts? I mean, obviously you've been in the business a long time, and I'm curious to know, since you've seen the evolution.
[00:05:02] Speaker B: I have. I think every time there's a tech change (and since I'm as old as I am: from mainframe to PC, from PC to networked PC, from networked PC to Internet, from Internet to cloud and mobile), these platform shifts fundamentally change the way you build software and the way you deliver services. We don't deliver disks anymore, right? It's all consumption and usage based. The network doesn't have the same control effectiveness anymore, because people are connecting directly from their laptop to cloud services, whether it's Salesforce or 365 or Google. And so you really need to rethink the way you're going to protect things in that environment. AI was not the catalyst for it, but it was certainly a large contributor, because it gives us new capabilities, both as a threat and, more importantly, as capabilities for productivity and for how we secure things in that model as well. So it was definitely part of the conversation, but it wasn't the only motivation for doing it. But I do think, just to be clear, that AI gives the defenders the upper hand, an asymmetric advantage we didn't have before when fighting adversaries.
[00:06:06] Speaker A: Sure.
[00:06:06] Speaker B: Super excited about that.
[00:06:08] Speaker A: Well, the reason I asked you that question, Bret, is more this: given the shifts you just mentioned, the Internet era and cloud and all that, do you think what we're dealing with today has probably been one of the largest shifts in how companies approach building software and engineering?
[00:06:25] Speaker B: Well, what's the shift going to be, or the outcome? You know, I look at the mobile phone that I have. It took seven years to get that device to 100 million users, and it was less than seven months for 100 million people to download ChatGPT, right? This adoption is so fast. And I think about the mobile phone: it was a phone. Today, I would argue, most people don't use it as a phone at all; they use it for everything. We're at the very tip of what's going to happen with AI. So it's pretty fascinating. I'm excited to see things become possible that you just couldn't do before.
[00:06:58] Speaker A: And I'm curious to know, in terms of the adoption, like I said, it was so rapid, as opposed to when the Internet came out, when people were pretty apprehensive. They were worried. They were like, oh, well, you know, the Internet's not going to be a thing, I think I read in the papers. So why do you think the adoption was super fast, from your experience?
[00:07:14] Speaker B: Well, I think there's a general trend curve we look at: with almost all technologies, you're getting a compression of adoption. Part of that is because of the availability of infrastructure, particularly cloud services, and even the Internet. As an example, that device you're on now, you're on wireless, I'm assuming. And we think, oh, wireless, everyone is wireless. I was with Dell recently and we were going through this: it took 20 years for wireless PCs to eclipse wired ones. 20 years, wow.
[00:07:43] Speaker A: Right.
[00:07:43] Speaker B: But you had to have the card at one point, a network card, and, you'll find this hard to believe, it was $700 for a network card. So you have to have the infrastructure, you have to have the support. And that's what I think is amazing about AI in particular, and our approach. One, it's this diffuse technology: when you have data centers around the world, you can get it to every corner, every part, super fast. And I think it had real value. People saw things that you just could never do before.
[00:08:11] Speaker A: So now that we've had, I think, slightly over a year since the launch of SFI, I'm curious to explore: what are the key learnings from that year that you can share with us today?
[00:08:22] Speaker B: Well, I think there are two sets of learnings. Interestingly, there's always the shiny part of security that we read about, that journalists write about, that you have on your show. And those are all really exciting, like some of the adversaries. But there's also what I call the pedestrian part of the job. That still holds true, it turns out. Like, we look at password attacks going, in four years, from 730 per second to over 4,000. That's pretty scary. But on the flip side, if all you do is turn on free two-factor authentication, you will not be impacted by that, even if your password is compromised. So there are a lot of hygiene things you still need to do that hold true even for AI: make sure your things are current, make sure you have the right identity in place, make sure you use strong authentication and least privilege. All those things still apply. But it is true that we have much more sophisticated adversaries, and many more adversaries, than we had in the past. I think the progress and the learning for us was that when you do this, it's not just the technology that changes. You have to do the cultural part of it; you have to get people to embrace it. And then there's having the mechanisms in place to ensure you're doing all the right things. And probably more importantly, we call them paved paths: ways to do things at scale. You can have everyone go patch their machine, but you come back a month later and you have to do it again. You have to build a paved path for your developers and for your operations and engineering teams, so they fall into the pit of success. They can't not fall into the pit of success.
[00:09:55] Speaker A: Okay.
[00:09:56] Speaker B: It's not just working once; that's where the secure-by-default and secure-in-operations ideas come from. So we've learned, especially on the identity side, that you've got to get this identity infrastructure right for your users, for your services, for your cloud services and your business services. As I always say, hackers don't break in, they log in, and it's really true.
[00:10:15] Speaker A: So I want to dig into the cultural side of it now. It comes up a lot in my interviews: you know, we've got to get the right culture. But what does that mean for you specifically?
[00:10:24] Speaker B: I've been through a lot of cultural changes; they often come with leadership changes. I would say one of the learnings is that security teams do good security work. And I think we've had a very good security-aware culture for at least the last 10 or 15 years.
[00:10:43] Speaker A: Sure.
[00:10:44] Speaker B: But a security culture is very different than a security first culture.
[00:10:48] Speaker A: Okay.
[00:10:48] Speaker B: Which is when you say, I'm going to make a trade-off and actually delay the shipment of my product, which is what I'm paid to do as an engineer, because I didn't meet the security bar, or because in the threat model I found there might be a problem. And so you need support from the executives all the way to the new hire that just started last week; you have to do that across the whole organization to make it security first. And I do think it's important to note, and I've seen this trend more here in Australia, to be honest: lately people are saying, well, if you're doing security first, how are you doing anything else?
So I'm not sure what your day looks like, but I'm assuming you have more than one priority on the go.
[00:11:26] Speaker A: That's true.
[00:11:27] Speaker B: Yes, exactly. And so security may be our highest priority, but we still have to ship great software, make sure that AI works, make sure our customers, the consumers, enterprises and governments we serve, can all be the most productive they can be. So security first, yes, it is the priority, but we're still doing all the other work too. We call it the genius of the 'and', right? It's not this or that, it's both.
[00:11:50] Speaker A: That's an interesting observation. I've come from a security background myself; I worked for one of Australia's largest banks before moving into this type of work. And you said before, around security first: one of the problems we used to face was going to engineers, and they would obviously get stressed and say, well, our project's now going to be delayed because we haven't thought about security. And this is probably going back around 10 or 12 years. So obviously things are now shifting to more of a security-first culture. When would you say that shift started to happen? People were aware of it, but they used to see us as the bad guys, the people slowing down their projects. Now things are starting to change. But when did that shift start to become a little more ubiquitous?
[00:12:34] Speaker B: That probably started nine years ago, when we started down this path of trying to make everyone aware of what security was. And since you have a background in security, you know the difference between requested and required. It was requested to do security; we made it required via paved paths. So that, for example, you can do a pull request, but you can't push your code to a build unless it passes a set of gates. That's required versus requested. But you do it in a way that is not 'thou shalt'. There's a lovely quote, and I think security people have struggled with this in the past, the famous one: if you tell me, I'll forget; if you teach me, I may remember; but if you involve me, I will learn. So it's being more involved with the teams, being a business leader and not just a technology leader running around in the mix, and then providing these paved paths. I'm not saying you have to do 100 things; I'm giving you a library that just does this for you, which they don't want to build anyway; they'd love not to have to build it themselves. So I think being more of a service provider is really helpful in that scenario. It also helped, to be fair, to have support from the very top of the company all the way down, and to have mechanisms to measure. I think culture is really more of a reflection of behaviors. You don't say, this is our culture, and tomorrow it's your culture. You have a set of behaviors that exhibit what that culture is supposed to be.
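The required-versus-requested idea, a pull request that cannot be merged unless it passes a set of gates, can be sketched in a few lines. This is only an illustration; the gate names and pull-request fields below are hypothetical, not Microsoft's actual pipeline:

```python
# A minimal sketch of a "required, not requested" merge gate. The gate names
# and pull-request fields below are hypothetical, purely for illustration.

def run_gates(pull_request, gates):
    """Return (allowed, failures); the merge is allowed only if every gate passes."""
    failures = [name for name, check in gates.items() if not check(pull_request)]
    return (len(failures) == 0, failures)

# Hypothetical gates a paved path might enforce before a build is accepted.
gates = {
    "tests_pass": lambda pr: pr["tests_passed"],
    "static_analysis_clean": lambda pr: pr["static_findings"] == 0,
    "no_plaintext_secrets": lambda pr: not pr["contains_secrets"],
}

pr = {"tests_passed": True, "static_findings": 2, "contains_secrets": False}
allowed, failures = run_gates(pr, gates)
# Merge is blocked here: static analysis reported findings.
```

In practice this kind of gate is usually enforced by the hosting platform's required status checks rather than hand-written code; the point is that passing the gates is mandatory, not optional.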
[00:14:02] Speaker A: That's where I think it's interesting, because, again, when you're telling people they have to care about security, I think it started to create more of a dislodgement between security teams and technical teams, or even the wider business, because it was just people barking orders at them. So maybe the culture has shifted; you've seen the shifts.
[00:14:19] Speaker B: Yeah.
[00:14:20] Speaker A: So are things going to continue to get better? Because historically people would just not enjoy dealing with security teams at all. And you said before that you're seeing it more in Australia in terms of culture. Why do you think that is?
[00:14:32] Speaker B: I think more people were just asking us; they were worried about our priorities covering everything they needed, not just the security things. So I think they're just very aware of what they need from us. And if I could illustrate this change with an example, one that was game-changing for me: we knew two-factor auth was a good thing to build, and so we kept pushing two-factor auth.
And I remember people would see us coming, because you had to have a slot reader, you had to have a special badge with three pieces on it. There was a lot of infrastructure and friction for our employees. We even created a thing called the virtual smart card to get rid of the physical part, thinking that would be better. Then one day a very smart person said to me, hey, from a design principle, think about human-centered design: what if you changed what you were trying to do? What do you mean? He goes, what are you really trying to achieve? I'm trying to get rid of passwords. He goes, great, make your vision 'eliminate passwords'. I said, that's just words, it's gobbledygook; a vision statement is just words. And it turns out, when we said, hey, what would you have to do to eliminate passwords, it fundamentally changed the way we built software. That's when we developed Windows Hello. And by the way, when you get into this model, users love not having passwords. When you have something that users love and that is also more trustworthy, that's the nexus. So you take that design principle, how you can help people make their lives better, whether it's an engineer or an end user, and do it through technology.
I know it's just words, but it made such a huge difference. It was really unbelievable. And I remember people pushing back so hard on us doing two-factor auth, and now they send me notes like, hey, I got asked for a password, this is wrong, what's happening? You got rid of all of our passwords. I mean, that's the golden day for a CISO.
[00:16:17] Speaker A: So Bret, how do we find the equilibrium between wanting our company to be secure and not introducing so much friction that people can't do anything?
[00:16:25] Speaker B: Correct.
[00:16:26] Speaker A: How would you find that balance with your experience?
[00:16:29] Speaker B: I think what I mentioned about thinking from a design principle helps. I think the paved paths really help, where you say, listen, I'm not going to just force all this extra work on you; if you use my path, you'll be more efficient and more productive. And frankly, I think the use of AI and some of the things we're doing help too. Just think about writing code with GitHub Copilot: suddenly, in a miraculous way, you're writing code 30% faster, you're using libraries that are already pre-configured and that you know are secure, and you have static code analysis built in. So if you're using those repositories, you drive that part of it, you get broad adoption, and at the end of the day you just make people more productive. There are still times when the idea is not to say no, it's to say how; sometimes you might have to say no way, no how, but hopefully never.
[00:17:18] Speaker A: So we are running out of time, and you're busy. Just to wrap up the conversation for today: what can people expect from SFI moving forward? When we talk again next year, what are some of the things you see happening on the horizon?
[00:17:33] Speaker B: I think you'll continue to see more and more understanding and evolution of how to use AI to better protect the workloads people are running and the data they're using. I think you'll see a lot more energy, with other companies starting to adopt some of the cultural norms we're using, where executive compensation changes behavior, and where you have systems like ours. My favorite thing in our system today: twice a year, every employee is asked, you know those employee surveys, you'd have done them at the bank, and they're very valuable, but there's always one question that's the most valuable, which is: can I do my best work here? That will generally tell you whether you have a healthy organization. From a security perspective, we have a similar question: are you supported to make the security trade-off decision? It's so simple, and when 238,000 people answer that question, you get pockets of, wow, these people are really supported, what are they doing right; and, you may have a problem over here, we should go address it. Is it a leadership issue, or is it intentional? Let's not assume something's wrong; let's go investigate, understand and learn from it. I think that'll be great. And I think you'll just see more and more software shipped with security capabilities on by default that are not high-friction experiences, both for the IT department and for the user. And I'm hopeful for governments to adopt this.
[00:18:49] Speaker A: So Bret, do you have any closing comments or final thoughts you'd like to leave our audience with today?
[00:18:53] Speaker B: You know, I would just say, from the experience of being in Australia, it's been fascinating. One of the best things about global travel is that you assume certain things and then you learn things; the people here are amazing. But I think it's really important to understand this is a team sport. The security team's job is not just to secure the company; it's to make sure everyone understands that it is going to take great collaboration and cooperation between the private sector and the public sector. One of the things I heard here a lot is that you've had at least three or four new regulations this year, and some of the state laws may not align 100%; that's all normal. But I think we need to really work on how we can work together to make sure we're providing regulation and support that holds the bad guys accountable, supports the innovation we want to see in AI, and, fundamentally, is not just a checkbox exercise but actually helps serve our constituents. That would be my global wish list.
[00:19:53] Speaker A: Joining me now in person is Janice Lee, General Manager of Microsoft Security, and today we're discussing securing AI and AI in security. So, Janice, thank you for joining, and welcome.
[00:20:03] Speaker C: Super happy to be here. Thanks for having me.
[00:20:06] Speaker A: So let's start right there. Tell me more about your thoughts around securing AI.
[00:20:11] Speaker C: The first thing I'll say is that it's actually not just about securing AI, it's actually securing and governing AI. And that's quite honestly the first concern that a lot of our customers have. It's not so much about protecting it against the bad guys. It's about how do you govern the use of AI and how do you ensure that your use of AI is compliant.
[00:20:36] Speaker A: With your experience around governing the use of AI, would you say people are still trying to figure out what that looks like? Because obviously AI, in terms of being ubiquitous in the market, really emerged around 2022, and although AI has been around for a while, people are still trying to understand what it looks like within their organization. Do you have any thoughts on that, Janice?
[00:20:57] Speaker C: Yeah. I'd first say that we need to acknowledge that the adoption of AI, and GenAI in particular, is happening at a rate we've never seen with any other technology in our lifetime. And the speed of that alone is something a lot of organizations are struggling with, because while there's a lot of excitement, there's also a lot of unknown. What I am seeing, though, is the quick organization of teams coming together across entire companies. And that's something we've not seen with other technologies: for any organization to adopt AI and leverage its fullest potential, it involves the entire organization, from the CEO on down. They have to come together in a way that other technology adoptions have not forced them to. So I would say it starts there, with the different functional leaders, from the head of technology to the head of HR to the heads of governance, legal and compliance, to the other functional business leaders, coming together and aligning on what the objective is, what the risks are, and how they want to collectively govern and guide the use of AI in a safe and secure way.
[00:22:26] Speaker A: Let's switch gears slightly. I'm aware there are simple steps in planning effective security for AI. So maybe walk us through: what are they?
[00:22:34] Speaker C: Yeah, you know, I would break it down and just say: you can't manage what you don't monitor. So the first step is discover: understand what AI apps are being used and being created within your organization. I don't think there's a single customer I've talked to that doesn't have users who were very quick to adopt consumer GenAI like ChatGPT and others, so there's a lot of that usage happening by employees in the workplace that we have to be completely aware of. And then there are, of course, teams out there wanting to deploy their own AI, so they can improve their customer experiences and be more productive in their jobs. So discovery is step one. Step two is establishing governance, the rules of the road: what data can be used, what data can't be used, and by whom; the preparation of data, making sure your data is properly classified and labeled so it doesn't get misused by GenAI. And then the third step is protecting it, and that's protecting it not just from the bad guys, but from being misused or manipulated by people who don't always have bad intent but are just very creative in looking for the answers they want.
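The three steps described above (discover, govern, protect) can be sketched as a toy policy check on data headed into a GenAI prompt. The sensitivity labels, keyword rules and document fields here are invented purely for illustration; real deployments use proper classification and data-protection tooling rather than keyword matching:

```python
# Toy sketch of govern (classify and label data) and protect (enforce a
# policy before data reaches a GenAI prompt). All labels are illustrative.

SENSITIVITY = {"public": 0, "internal": 1, "confidential": 2}

def classify(document):
    """Govern: attach a sensitivity label so use of the data can be policy-checked."""
    text = document["text"].lower()
    if "salary" in text or "customer record" in text:
        return "confidential"
    if "internal" in text:
        return "internal"
    return "public"

def allowed_in_prompt(document, max_label="internal"):
    """Protect: block anything above the allowed sensitivity level."""
    return SENSITIVITY[classify(document)] <= SENSITIVITY[max_label]

# Discover would inventory the AI apps and data actually in use; here we
# just check two example documents against the policy.
docs = [
    {"name": "press_release.txt", "text": "Public launch announcement"},
    {"name": "payroll.csv", "text": "Employee salary data"},
]
permitted = [d["name"] for d in docs if allowed_in_prompt(d)]
# Only the press release may be used in a prompt; the payroll file is blocked.
```

The design point is that labeling happens before use, so the protect step can be a simple mechanical comparison rather than a judgment call at prompt time.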
[00:24:01] Speaker A: Okay, so you made a couple of interesting points, and I want to explore them a bit more with you. In terms of adoption, you said before it was very quick. So now that we're in this AI era, and having been through cloud adoption, would you say, with your experience and your pedigree, that the adoption of AI has been a lot faster than anything we've seen before, cloud, Internet, et cetera?
[00:24:23] Speaker C: Yes, I'll add to that. Not only is it fast, but what's different about that also is that it is not reckless.
So with other technologies we've seen adopted, be it, say, enterprise Wi-Fi when it came about, then mobile and BYOD, then cloud, there was a lot less coming together across the organization and thinking about how to truly minimize risk and be compliant at the same time. We're seeing a lot of that happening within companies now; there is a consciousness of doing it safely and securely. I would like to think that a lot of it is because when we innovate with AI at Microsoft, we do it within the framework of responsible AI, and that was the only way we could introduce GenAI into the market. You could not introduce GenAI without a framework like that, and customers have embraced it.
[00:25:26] Speaker A: That's an interesting point you've raised. Security was obviously peppered throughout the keynote, and in May 2024 Satya came out and said security is our number one priority. I've been reading some literature online. Talk us through what that actually means for Microsoft, and what people can expect moving forward. Everything you've just said, how do we tie it together in a bow?
[00:25:47] Speaker C: What we're doing with the Secure Future Initiative is probably the biggest change management effort that any organization could ever undertake. And effective change management is the harmony of people, process, and technology and tools. So that's what the Secure Future Initiative has done: elevating security to be the top priority that drives change across people, process, technology and tools.
[00:26:14] Speaker A: Sure.
[00:26:15] Speaker C: And we benefit from an innovator's standpoint, because of what we learn securing our own organization. Microsoft has a very large digital estate, right? So we have a lot to protect. There's nobody else at our scale; there's nobody else who really sees the types of threats we see. But we learn from the way we have to protect ourselves and from what we know about the threat landscape, and that then informs all of the innovation we put out, including our own security products, but also all of our software and cloud services, as well as our AI. So all of our innovation benefits from those learnings, which we can now surface in a much faster way, given that security is a top priority.
[00:26:59] Speaker A: So, the initiative: I know it's just ticked over to being just over one year old, and I know there are a fair few people behind it. I'm unsure of the exact numbers, but I know it's quite a lot. And you mentioned before the operative words, change management. So how do you do that? It's hard to get one person to change, let alone a whole company, and, as someone said earlier, a company as long established as Microsoft. It's not an easy thing to do in terms of the processes and the culture that have been engendered. How do you do that effectively?
[00:27:30] Speaker C: Well, so that number, by the way, is roughly 34,000.
[00:27:35] Speaker B: Yeah.
[00:27:36] Speaker C: The equivalent of 34,000 full-time engineers, all prioritizing security in their day jobs. They're still creating products, they're still running their functions, but they have to make security a top priority, a top consideration, in the jobs they continue to do. And it's hard to drive change unless it happens at every single level. That starts with Satya; it starts with him, the board and his leadership team, on down. It all goes back to the idea that humans need the right incentives. And one of the things we've done to make security a top priority at every level is that it's now a core priority for every single employee.
And in our annual review process, we have to talk about what we've done to help the company be more secure. That is part of our performance metric.
[00:28:37] Speaker A: There are a couple of interesting things in there as well. So, going back to the engineering side of things: when you go and do a computer science degree, historically security wasn't taught, right?
[00:28:49] Speaker C: Yes.
[00:28:49] Speaker A: I know this from being in the industry myself, and now flipping over to interviewing people like you, we've had to sort of retrofit the conversation about security and change that mindset, which is probably why you've backed the initiative you're driving at Microsoft. So what do you think now, moving forward, with engineers? Do you think that for a new wave of engineers, security is going to be a main priority? Because historically, dealing with engineers, it wasn't. It was about functionality.
[00:29:15] Speaker C: Absolutely. And I come from the same background too. And I know that when you're learning how to write applications and software, you're taught how to build the function and do it fast and effectively and not use too many resources. Right. But given the change that we're driving, there's a concept called secure by design.
[00:29:36] Speaker A: Yes, right. Yes.
[00:29:38] Speaker C: And secure by design is one of the three principles that we are now all leaning into: secure by design, secure by default, and secure in operations. And by the way, those are not concepts that Microsoft invented. No, those are industry concepts that are actually spearheaded by the bodies that govern security and security regulations. And so secure by design and secure by default are principles that we hope are going to find their way into your traditional computer science curriculum, so that emerging programmers can learn that you can still build secure software and do it quickly and innovatively, especially if you have the right tools. There are more and more tools now available to software developers so that they can shift left with security principles: not just design it with security in mind, but also ship it with the right security defaults turned on, to help users be secure from the start versus giving users an option to turn on security.
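The secure-by-default and secure-in-operations ideas described above can be pictured with a small sketch: protections ship turned on, weakening them is an explicit act, and drift away from the baseline is detectable. The settings and baseline below are invented for illustration, not Microsoft's actual defaults.

```python
from dataclasses import dataclass

@dataclass
class ServiceConfig:
    # Secure by default: protections are ON out of the box.
    mfa_required: bool = True       # two-factor authentication on by default
    tls_min_version: str = "1.2"
    audit_logging: bool = True

SECURE_BASELINE = ServiceConfig()

def config_drift(current: ServiceConfig) -> list[str]:
    """List settings that have drifted from the secure baseline
    (the 'secure in operations' idea: defaults must not silently erode)."""
    return [
        name
        for name in vars(SECURE_BASELINE)
        if getattr(current, name) != getattr(SECURE_BASELINE, name)
    ]

# A deployer weakening a default becomes an explicit, detectable act:
print(config_drift(ServiceConfig(mfa_required=False)))  # -> ['mfa_required']
```

The point of the sketch is only the direction of the opt: users start secure and must deliberately opt out, rather than starting insecure and opting in.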
[00:30:51] Speaker A: So I want to sort of flip over to the last part of the interview, because I know that you're so busy and we're short on time. But I want to talk through with you, Janice, AI in security. We've sort of done security in AI; now let's flip it on its head with AI in security. So walk me through it.
[00:31:07] Speaker C: So the same idea of Copilot helping everybody be productive, you know, writing summaries, or understanding what happened in a meeting, or looking at which emails they should prioritize when they get flooded with hundreds of emails in their inbox: Copilot in the security context offers the same benefits, but to security practitioners. One of the biggest problems that security practitioners have is what we call alert fatigue, which is all of these alerts coming from all these different systems, and they just don't know what to do with them. Right. So you get to a point where you just ignore them.
[00:31:43] Speaker A: Yes.
[00:31:43] Speaker C: What Security Copilot can do is actually prioritize the things that you shouldn't ignore, so help you find the signal in all the noise so that you can focus on the right things. That's one: helping you prioritize. And then two is helping you shortcut those mundane tasks. Most security practitioners will tell you that they spend a majority of their time looking through logs and lists and data in order to figure out what the heck is going on.
Copilot can summarize all of those activities and events for them, turning work that would have taken hours and days into seconds and minutes. And that is a huge time saving on, I'll say, low-value work that security practitioners no longer have to worry about doing.
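The "find the signal in the noise" step Janice describes is, at its core, a ranking problem. A toy sketch: real tools like Security Copilot reason over far richer signals than a two-field sort, and the fields below are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: int    # 1 (informational) .. 5 (critical)
    correlated: int  # other signals pointing at the same entity

def triage(alerts: list[Alert], top_n: int = 2) -> list[Alert]:
    """Rank alerts so a human sees a short list instead of the firehose."""
    ranked = sorted(alerts, key=lambda a: (a.severity, a.correlated), reverse=True)
    return ranked[:top_n]

inbox = [
    Alert("endpoint", severity=2, correlated=0),
    Alert("identity", severity=5, correlated=7),  # probably the real incident
    Alert("firewall", severity=3, correlated=1),
]
print([a.source for a in triage(inbox)])  # -> ['identity', 'firewall']
```

The value is not the sort itself but that the analyst's attention starts at the top of a short, prioritized list rather than in an unfiltered stream of alerts.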
[00:32:33] Speaker A: In terms of productivity, how do we see people optimizing that moving forward? Because you're right, people are tired, and as a result, when you're tired, you're not making the best decisions. So what's your view on that then, Janice, in terms of people now getting more of their time back to do more of the critical thinking?
[00:32:53] Speaker C: Yeah. So this is where we see the huge shift from low-value tasks to high-value tasks. And that's the thing that any practitioner in any function should embrace, because Copilot isn't there to replace anyone's job. Copilot is there to do the things that, quite frankly, humans shouldn't be doing; their time is better spent doing other, more important things. And so I see it as an opportunity for us to do the thing that we entered our fields to do: to make an impact, to leverage our creativity, to leverage our intelligence. But we get caught up in all these mundane tasks that do the opposite. They don't leverage our creativity, they don't leverage our intelligence and our ability to rationalize way better than machines can. So that's where I think we need to just decide on what are those tasks that we are so willing to let go of.
[00:33:54] Speaker A: Right.
[00:33:54] Speaker C: And leave it up to Copilot to do, so that we can do the thing that we entered a certain field to do.
[00:34:01] Speaker A: In terms of Copilot from a security perspective, what excites you the most? Maybe the one thing that excites you the most.
[00:34:10] Speaker C: When I think about the threat landscape and the number of threat actor groups that have exploded onto the scene because of ransomware as a service, the attackers outnumber defenders by 10x, if not more.
So what AI and Copilot can do for us is help us build out the army of defenders. With Copilot, we have an unlimited army of defenders that can be at our side, because it's really going to be hard to outnumber all of the attackers that continue to grow out there. So that's what excites me the most: being able to level the playing field and tip it in the favor of defenders versus the attackers.
[00:35:01] Speaker A: So, Janice, one last question for you: do you have any closing comments or final thoughts you'd like to leave our audience with today?
[00:35:09] Speaker C: Yeah, the final thing is, you know, just to the point I was making about this small group of defenders, you know, against this massive universe of attackers. Security is a team sport.
And while we provide some technologies that can help our customers be more safe and secure, we can't do it alone. And so it is super important for all defenders to come together, you know, all vendors to come together to offer our customers simpler solutions that work better together and, you know, treat it like the team sport that it is. Because, you know, there's a saying in psychology that external threats create internal cohesion and we need more internal cohesion within our industry.
[00:36:03] Speaker A: Joining me now in person is Chris Lloyd-Jones, Head of Architecture and Strategy in the Office of the CTO at Avanade. And today we're discussing the culture and change program needed with AI. So, Chris, thanks for joining and welcome.
[00:36:15] Speaker D: Thank you.
[00:36:15] Speaker A: Okay, so Chris, let's start right there. People talk a lot in the industry about culture, so I'm curious to understand from you. Walk me through your thinking or your approach or what comes to mind around culture for AI.
[00:36:30] Speaker D: Okay, for sure. So I think when people hear AI, they think of this as a big technology change piece. But actually, if we step back 24 months, to when ChatGPT and all of these technologies came out into the market: AI isn't new. We've had machine learning, we've had other technologies. And all these new technologies have been about how you bring everyone in an organization along. So culture to me is about how do you empower people, how do you train people, and how do you enable people to make the most of these new tools that you're providing them?
[00:37:02] Speaker A: Okay, so because it is new, there's no real, like, blueprint. It's not like we've done this before necessarily. How do you train people?
[00:37:10] Speaker D: So we may not have done this type of technology before, but think about ChatGPT. Most people have a phone in their pocket. Most people have access to these technologies. So digital natives are learning how to prompt, how to ask information of AI. But I heard a great anecdote yesterday: someone was speaking to their mum, and their mum went, well, I want to create a CV to apply for a job. And this other person went, okay, well, just pop it into ChatGPT. And she popped it into ChatGPT, came back and went, that is absolutely rubbish; the response was just nonsense gobbledygook. And that demonstrates that just having access to these tools isn't enough; it's about providing instructions on how to prompt and how to use them in a sensible, informed way. So at Avanade, we think about, number one, what are the guardrails? What are the ways in which you can effectively roll these tools out to people whilst providing them with the training to make them useful? It's not just about rolling out AI and cutting heads, cutting a workforce. That's not what this is for. It's about how you can make your employees engaged, how you can make them feel happy at work, and how you can make them more effective in what they do.
[00:38:18] Speaker A: Okay, so the guardrails are an interesting point, because you're right, it's not just, okay, we've installed the thing or we're doing the thing and that's it. Would you say that's perhaps where people fall down: in understanding the problems, how it works, and how we can effectively leverage this? Because it's still early-ish days.
So what would you say in your experience, people overlook at times?
[00:38:40] Speaker D: Okay, well, I think of four phases of AI. The first is tinkering with AI: the proof of concepts, the proof of value. The second stage is trying to make AI useful in an organization; that's point solutions, solving problems here and there. The third area is maybe scaling that, taking a business process and scaling it end to end. And the fourth is total transformation. I'm seeing very few organizations at phases three or four. Most organizations are at phases one and two. And what they're overlooking is that this isn't a tech change problem, it's a people change opportunity. So I've talked about training, and from a guardrail perspective, it's more: well, if you've all got ChatGPT or other tools in your pocket, how do you make use of them? If you all have access to tools like M365 Copilot, how do you enable your data, your business operations, to be connected to those systems? It's about thinking about governance, the use cases, and the prioritization in a way that makes sure your organization is still complying.
[00:39:39] Speaker A: So, Chris, I want to explore a little bit more of the people side of it. Now, people are creatures of habit. What I'm hearing from the chatter in the industry, but also on the floor today, speaking to people like yourself, is that the adoption of AI was significantly faster and more prevalent than in other areas like the cloud, the Internet, etc. So why would you say, from a people perspective, people are more willing nowadays to adopt AI than in other sorts of major transformations that we've seen over the last 10, 20, 30 years?
[00:40:11] Speaker D: So I wouldn't necessarily say that people are more willing to adopt AI. Look at machine learning; we've had that for decades. Look at models that could answer questions; large language models have been around for a good few years. In 2021, we had GPT-2. What ChatGPT did is it fundamentally took this AI and packaged it up in your pocket. You almost had this magic genie that could answer things. So I think people are trying to adopt these tools because they can solve a problem that they might not otherwise have been able to solve. They've got someone that will continuously, forever, listen to them and answer questions. Now, organizations want to adopt this technology because they believe it can help them solve productivity challenges, it can help them solve cost challenges, or it can help them to grow. But once you start using the AI, that doesn't mean that you have the skills to be proficient or to make the most use of it. And that comes back to what I mentioned before about the need to upskill.
[00:41:07] Speaker A: Okay, so you raise a great point around packaging it up. I was recently watching you on YouTube, and also Satya in his response around Copilot being like the UI. So going back to ChatGPT, obviously it's just created that interface for people to ask a question, where it was a little bit more complex before. So because of that, is that where the adoption is? Is it a lot easier for people because they don't have to think it through? I've seen some of the demos today already, which are making things significantly easier even from an engineering development perspective. You don't even really need to understand certain languages anymore, like Python; it's doing it in the background. So in terms of training, how can people start to understand what this looks like within their organization? Do you have any insight on that front?
[00:41:51] Speaker D: Yeah, for sure. So Satya talks about chat as the new interface, the new UI, and I think that's certainly true for where we are today. But I think these tools are going to start to be embedded more into what we do day to day. So you open up a legal contract and it's been pre-marked-up for you with what you need to review. So chat to me is a transitional phase and not where we'll end up. I personally believe in the concept of ambient AI, and that being infused into what we do day to day, in the same way that at one point spell check was considered AI; today it's just a button that we click and we don't think about it.
[00:42:21] Speaker A: True.
[00:42:21] Speaker D: So going back to governance and training and how we can adopt this in an organization. Number one, Avanade rolled out what we call the School of AI. We trained every single employee in our organization on what AI is and what it can do. We defined responsible AI principles and digital ethics, so that people knew: if I use this tool, this is how I can use it responsibly, recognizing that people will use tools like ChatGPT or GitHub Copilot. Number two, we rolled out prompting: how can you engage with AI? And number three, we explained how to know what you don't know. If you know what AI can do, you know when you hit the limitations of the tools that you have, so you can remain ethical, you can provide answers that are of high quality, and you're not just regurgitating information that might be made up. It's knowing that AI thinks in different ways, and that therefore you need to be careful and have a critical mind.
[00:43:11] Speaker A: So I do want to get into the responsible AI side of things, but before we do, going back to the prompting side of things you just discussed: with the fancy new UI, it's a lot easier for people. So what would you say companies are now coming to you and asking questions about? Is it still, hey, how can we use this UI to ask the right questions to increase our productivity? I mean, I've heard that a little bit in the sessions today, in terms of how much Copilot specifically can increase people's productivity, but there's still a lot of questions around, well, how can I leverage this within my company internally?
[00:43:44] Speaker D: So we've had Copilot for a number of years, and don't get me wrong, Copilot is a brilliant product, but I think the tenor of the conversation I'm having has changed. A couple of years ago, this was proof of concept or proof of value: just testing the organizational technology, rolling out tools like GitHub Copilot and just seeing, do they work? I think now we're starting to see organizations that have proved the concept, proved that it can have value, and are now looking at, okay, I'm ready to scale and implement this into my business processes. How do I think about principles? How do I think about business process engineering? We had an enterprise architecture for mainframe and for cloud; what do I now need to do in my enterprise architecture for AI? They're really thinking about this in more of an enterprise fashion.
[00:44:26] Speaker A: Okay, so scaling AI, I've heard that thrown around a little bit today.
[00:44:30] Speaker D: Yeah, yeah.
[00:44:31] Speaker A: What does it actually mean?
[00:44:33] Speaker D: So to me, scaling AI means it's production grade, and I'm going to break that down; I know that's a very high-level answer.
[00:44:40] Speaker A: Sure.
[00:44:41] Speaker D: So a year ago we had a number of tools so I could get data into, to get a bit techy here, a cognitive search index. I could take data and put it in a format that AI could query. But someone might have to export data from a database into a spreadsheet and upload it somewhere else.
[00:44:55] Speaker C: Sure.
[00:44:55] Speaker D: Well, then you've lost that chain of custody. If the data gets changed in your sales system, someone would then have to re-update the AI. So that's: how do you get your data from A to B in a way that's consistent and safe? Then, in the past, we rolled out chatbot interfaces so someone could ask a question of their data. Well, a lot of people don't just want answers to questions; they want to solve problems. So now I think people are thinking about this as service design. They're starting to think about what the holistic experience is. I'm not just going to add another sparkly button to my UI; what problem am I actually solving? So for example, do I need AI, or should I cut this out? Am I doing this for the sake of it? And then finally, going back to governance: is this a use case that's really adding value? Is this a use case which is responsible? Do I have the right data stewards involved? It's about taking all of the bits and pieces that you do with a prototype, throwing them away, and thinking about whether this is ready for production.
[00:45:51] Speaker A: So going back to your comment before, they want to solve problems. People obviously do want to do that, but is it more a question of how we go about navigating that? Is that the part where people are still unsure about what to do? And to your earlier comment at the top of the interview around the guardrails, is that what's missing? Would you say the guardrails?
[00:46:12] Speaker D: I think the guardrails are missing, but I don't think that's the whole picture. So you talked about solving problems. People can identify the use cases in their organization where they want to roll it out. But then they might go, okay, say you want to optimize your sales process; you want to look at what leads I might want to speak to. Though that data could be in Dynamics, it could be in Salesforce. How do I connect to that source system in a way that's repeatable? In the past, if you were doing reporting in, say, Power BI or Tableau, well, there was a data team that sat in the middle there. They were the intermediary. They made sure that if you asked a question, everyone else got the same answer; one person measured sales in the same way that another person did. If you're developing an AI system, all those same tools need to be in place, and that becomes your guardrails. And that's why these proofs of concept sometimes fail. Because yes, we've proved the tech works, and now we need to go back and do all the same things we would have done when we rolled out Power BI and reporting.
[00:47:03] Speaker A: That's interesting, because I actually was a reporting analyst in Tableau, so I do understand that. But I used to spend a lot of time doing real manual stuff. Yeah, building dashboards, pulling all the data sources together, trying to figure out from the business, well, what is the problem that I'm solving? What do you want to know?
[00:47:19] Speaker D: And you must have had teams send you Excel spreadsheets going, well, I'm getting this number, and you're like, no, it's this number. That's the standardization.
[00:47:26] Speaker A: Yes. And then you get a call from the CISO saying, hey, I've just presented, and I believe the number was wrong in the meeting. Yeah, I mean, it's not necessarily an easy problem to solve. So where are people starting? Because again, it depends on what organization you have; there's a lot of data in terms of the sources being fed in. How do you get that standardization?
[00:47:44] Speaker D: So organizations can start at the top, they can start at the bottom; it needs to be bottom up and top down. And I don't just mean that as a kind of buzzword. So talking about bottom up: organizations generally are either highly centralized, so they've got a lot of control over their systems of record. They might go, my employee data is in Workday, for example; my expenses data is over here. And in those organizations it's relatively easy to identify the data steward, and then you might be focusing more on organizational change. But multinationals that span, say, Australia, New Zealand, Europe are generally going to be a lot more federated. And that means you need to benchmark: do you have the basics in place? Have you already identified your systems of record? You could be using three different travel booking systems. And then you might want to identify the domains you care about: your HR, your travel. Once you've got your systems of record and you're speaking the same language in the same way, then you can start to think about AI. So you can't just jump to AI to be successful. You have to have a data foundation in place, strong data stewards, strong business use cases, to enable you to make those changes.
[00:48:47] Speaker A: But would you say people are jumping to AI to be successful?
[00:48:51] Speaker D: No, I don't think people are. I think people roll out Copilot and they might make their knowledge-finding better, but that might indicate that maybe their knowledge architecture wasn't great in the past; Copilot's just an incredibly great search engine. When they want to take it to the next level of productivity, such as sales or invoicing, at that point you can't escape the fact that you need to improve your data stewardship and your data quality. From the conversations that I've heard today, everyone is finding Copilot a great way to draft emails and a great way to find information. But only the organizations that invest in that data platform are making sales more efficient, or invoicing more efficient, or helping to serve their employees.
[00:49:29] Speaker A: People often speak about structured data and unstructured data. So talk to me a little bit more about that. How does that then fit into the AI beast?
[00:49:39] Speaker D: So AI, when we're talking about it today, we generally mean generative AI: the generation of text or images. But AI in the past, machine learning, was the kind of forecasting and analytics from the past, analyzing what's happened to date. Now, when I'm talking about AI in the context of making this more effective, you need to consider all three, because traditional machine learning will still come into play for how you analyze your structured data. For example, if you're an investment firm and you want to make a virtual agent that can help you identify customers where there might be an opportunity to optimize a portfolio, you're going to need to use machine learning to identify maybe the right trades to make, clustering the data. And then you feed that through as unstructured data: you change it from numeric data to data that the large language model can actually analyze to make those decisions. Both of those data formats are important. Large language models are like programming through language, commonly English, but unstructured data is ultimately required.
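The structured-to-unstructured hand-off Chris describes can be sketched very simply: analytics works on the numbers, then each record is rendered as text a large language model can reason over. The field names and figures below are invented for illustration, not any real firm's portfolio logic.

```python
# Hypothetical example: turning a structured portfolio record into the kind
# of natural-language statement an LLM can use as prompt context.

def portfolio_to_text(holding: dict) -> str:
    """Render one structured record as a natural-language statement."""
    drift = holding["weight"] - holding["target_weight"]
    status = "overweight" if drift > 0 else "underweight"
    return (
        f"Client {holding['client']} holds {holding['weight']:.0%} in "
        f"{holding['asset']} against a {holding['target_weight']:.0%} target "
        f"({status} by {abs(drift):.0%})."
    )

holdings = [
    {"client": "A-102", "asset": "equities", "weight": 0.72, "target_weight": 0.60},
    {"client": "A-102", "asset": "bonds", "weight": 0.18, "target_weight": 0.30},
]

# These sentences would then be placed into the model's prompt as context.
for h in holdings:
    print(portfolio_to_text(h))
```

In a real pipeline the numbers feeding this step would come from the machine-learning side (forecasts, clusters); the conversion to text is what bridges the two data formats.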
[00:50:38] Speaker A: So now I want to flick over and talk about trusting AI. I'd love for you to talk about that, and then we'll get to responsible AI. I know I asked about that before, but talk a little bit more about what that means for you when I ask you that question.
[00:50:53] Speaker D: So for me personally, when I think about trusting AI, I think about a number of different things. I think, number one, is this going to actually help me do anything I want to do? Can I rely on the response that I get back and can I make decisions from it?
[00:51:07] Speaker A: Right.
[00:51:07] Speaker D: And say I'm booking a flight and I ask when the next flight is. If I can't trust that the information is correct or accurate, then that's been a waste of my time.
[00:51:16] Speaker A: True.
[00:51:17] Speaker D: And that was a pointless AI interaction. So that comes back to what we talked about earlier on, measuring value: there's a value gap. I talked about data: there's the data gap. The third gap I think of is that trust gap. And part one of that is, do you know you're speaking to AI?
This is a very personal example. I had five instances of this cloud product charged to my credit card the other day, and I was really frustrated. And I raised a ticket on the website and got an automated response from this chatbot by email going, we've logged it; we've already issued you a refund. I'm like, no, you haven't. I raised it again, and I was getting more and more frustrated. And for me that's trust, because the AI isn't listening to me. It's trust because it wasn't immediately obvious, until I googled it, that this was an AI chatbot, and it wasn't immediately obvious to me that I was getting a good answer. So a lot of organizations need to think carefully, because if you are looking to replace activities that a human would have done with AI, you need to make sure that you're hitting the same standard.
[00:52:14] Speaker A: So on trust: you said, for example, if it's a large language model pulling all these sources in, you know, the sky's blue, but maybe there are some people out there that say the sky is orange, and it comes up and says the sky's orange. Now, I'm a millennial, not a Gen Z, so I was obviously around when the Internet first came out, so I think I'm privileged. But how will people, moving forward, be able to discern and question, is that true? Or will they not question it, because that's just what AI said? And I know that sort of comes into the responsible side of it and the ethics side of it. What does that then look like? Will people have to sit there and question everything to discern whether something doesn't look right? But if it's telling me it's fine... I'm probably really more worried about the generation beneath us and moving forward.
[00:52:58] Speaker D: I think that's actually a really good point, and I think there are two facets to that. Realistically, if we think about where we are today, that is where we are: we have to use critical thinking to discern information that might be real from information that might be false. And that's not just about the large language model making things up; that's also about the use of AI to disseminate misinformation online and to steer conversations. There's a recent study using GPT-4 to curate someone's timeline on a social media network. They provided people with a default timeline and two curated versions, one relatively left wing, one relatively right wing. And they found that people's views would shift if they weren't told it was AI, and that within a day their opinions could be changed, and that it would roll back. But this was just one study. Now, going forward, there needs to be a partnership, from my perspective, between the media, the state, and private organizations to solve this. And this isn't just made-up, theoretical stuff. There's an organization called the C2PA, the Coalition for Content Provenance and Authenticity; it's an open standards organization working to certify that the information from the microphone you're using hasn't been altered between when we recorded this and when this goes out, or that this photo was taken here and you're seeing the real thing, or that a human wrote this and you're reading it. And we're starting to see those standards taken up by TV channels; if you're on Google or Bing, it now certifies whether the image you're seeing is AI generated. And I do think we need those standards to have trust. That's going to take some time to be built, and it's not.
[00:54:26] Speaker A: An easy thing to sort of solve. And I've spoken about that with people on the show historically, but also today. So I want to talk to you a little bit more about hallucinations. Going back to my point around the sky being orange: that's a hallucination, that's not true. But then who gets to decide? Well, maybe it is true, because maybe I'm colorblind. So how does that look in your eyes, Chris?
[00:54:49] Speaker D: Well, think about what hallucination is. Another way of thinking about it, which I quite like, is groundedness, because hallucination gives the impression that the AI is thinking like a human might. So an AI that's making stuff up is ungrounded; it's gone off the context it was trained on. And this happens because large language models, as the name says, are trained on large bodies of text. They've been trained on what's online, what's in encyclopedias, and what's in other private data sets, and they are grounded based upon the prompt that you give them. So if you tell an AI, pretend you've got an IQ of 180, or you're from Star Trek, you might get very factual responses, but they might be grounded in sci-fi. If you tell an AI something along the lines of, you're an eight year old, you'll get childish responses. The AI will only act based on what it's seen other people do online. That's why if you are very polite to an AI, if you say please and thank you, you get better responses, because on online forums that's how people respond to each other. So you can actually prompt an AI to act in a certain way. Now, we have to recognize that AI is super-powered predictive text. It's like typing on your phone; it's just predicting the next sequence of words. And therefore I think the organizations implementing AI are responsible for what they produce. But as consumers of these systems, we still have to engage our critical faculties in order to make sure that we are getting value from them, because no system is infallible.
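Groundedness, as Chris describes it, can be encouraged mechanically: retrieve trusted context yourself, put it in the prompt, and instruct the model to answer only from it. A minimal sketch follows; the prompt wording is illustrative, not any vendor's recommended template, and this reduces (it does not eliminate) ungrounded answers.

```python
def build_grounded_prompt(question: str, context_docs: list[str]) -> str:
    """Assemble a prompt that constrains the model to supplied context."""
    context = "\n".join(f"- {doc}" for doc in context_docs)
    return (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, reply 'I don't know.'\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

print(build_grounded_prompt(
    "When does flight QF1 depart?",
    ["Flight QF1 departs Sydney at 16:05 local time."],
))
```

The same prediction machinery is at work either way; the supplied context simply makes the "seen text" the model continues from your own trusted data rather than whatever the training corpus happened to contain.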
[00:56:10] Speaker A: As an industry, we're not quite there on how to manage the guardrails, the ethics, the responsibility of AI. We see this growing quite substantially, and there are a lot of use cases where, I mean, it's super powerful, but it's like a double-edged sword.
[00:56:25] Speaker D: Yeah.
[00:56:26] Speaker A: So how do we sort of manage that going forward, where you still need to leverage this, with all the use cases we spoke about here today at Microsoft, but it can also damage us as well? What's your view on that?
[00:56:39] Speaker D: So I think the genie is out of the bottle; we have to think about how we govern it. There may be negative impacts, and in an ideal world we would be able to mitigate them all, but realistically the technology is going to be implemented, and that means we need to think about the people aspect of this, the process aspect, and the tech aspect. So Microsoft has released a number of different responsible AI tools. You can scan the text that's coming out for forms of potentially explicit content, potentially racist content, and other forms; that's a tech mitigation. We can make our AI more deterministic, so when you ask a question you get the same response every single time; that makes it easier to govern. We can track the inputs and the outputs. We can adjust our processes so that AI is being used for areas appropriate to our level of risk, and so that a human is always in the loop. So if I am a radiologist or a cardiologist, AI might direct me to the red flags that I need to look at on a particular scan, but it isn't making the final decision. And that, I guess, goes back to Copilot: Copilot is a copilot, it isn't an autopilot. Human brains need to remain engaged. And before, you mentioned that you're a millennial. I'm a millennial too, thinking about the next generation coming up, who've had ChatGPT since day one of going through university.
For me, the people that work in tech today are craftspeople. They know what good looks like; if they're using GitHub Copilot, they can go, great answer, or, not a great answer. If I've had it from day one, I need support to identify what good looks like, so I can become a craftsperson, build my skill, and not just rely on the AI and the answers it comes up with.
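One of the mitigations mentioned above, scanning generated text before it reaches a user, can be sketched in a few lines. This is a toy illustration of where the guardrail sits, between the model and the user; real responsible-AI tooling uses trained classifiers, not a keyword blocklist, and the blocked terms here are invented.

```python
import re

# Hypothetical blocklist; real systems classify categories of harm instead.
BLOCKLIST = {"credit_card_number", "secret_project_x"}

def moderate(model_output: str) -> tuple[bool, str]:
    """Return (allowed, text); blocked outputs are withheld, not shown."""
    tokens = set(re.findall(r"[a-z_]+", model_output.lower()))
    if tokens & BLOCKLIST:
        return False, "[response withheld by content filter]"
    return True, model_output

ok, text = moderate("Here is the credit_card_number you asked for")
print(ok, text)  # -> False [response withheld by content filter]
```

The human-in-the-loop point is the complement of this: the filter catches what should never be shown, while a person still owns the final decision on what the model does surface.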
[00:58:19] Speaker A: So Chris, we're running out of time. However, do you have any closing comments or final thoughts you'd like to leave our audience with today?
[00:58:25] Speaker D: Yes. If there's one thing I think we should think about and keep an eye on, it's the sustainability of AI. I'm really pleased with the commitments that the hyperscalers are making around the sustainability of AI, but ultimately AI uses a significant amount of energy, and I think, working with organizations like the Green Software Foundation and others, we should be making sure we continue to think about the land use, the water use, and the environmental impact, so that AI can help us solve challenges and not create more.
[00:58:52] Speaker A: And there you have it. This is KB on the go. Stay tuned for more.