Episode Transcript
[00:00:14] Speaker A: This is KB on the go.
Welcome to KB on the go. And today I'm on the go at AWS Summit here in Sydney, located at the International Convention and Exhibition Centre. This event runs over two days, and we as an industry get to hear from AWS experts, customers, partners and thought leaders from a variety of industries. And right now I'm joined in person by Min Livanidis, head of digital policy for AWS, and Phil Rodriguez, commonly known as Phil Rod, global head of customer security outcomes for AWS. And today we are discussing the chain of events. So tell me, what's been going on since I arrived?
[00:00:56] Speaker B: Well, we're here at Sydney Summit, and I just want to say, it's my favorite day of the whole year at AWS. We always get a really good turnout of customers, as we saw walking around the expo hall floor before. There are a lot of partners out there who like to build and help customers in the cloud, and a lot of those partners are security partners, so it's always fun to walk around. We had SEEK up on stage talking about some advanced security threat detection they were doing, building on top of Amazon Security Lake, one of our newer security products. We've had a bunch of other talks about security best practices. And those security talks are maybe a tenth of all the different talks that are going on at a big event like Summit.
[00:01:34] Speaker A: Just before we jump over to Min, real quick, I just want to step back. Critical national infrastructure. How do people sort of take that within the cloud? Because even when I interviewed you last year, episode 200, people still kind of didn't get the cloud thing. So now we're talking about infrastructure. Are people modernizing their approach to cloud adoption, would you say, Phil?
[00:01:57] Speaker B: Yeah, absolutely. We continue to see a whole range of different kinds of motions and different customers in the market. We've always had the super advanced, I'll say, software builders of the world, the Atlassians and the Canvas, racing into new technologies like cloud as fast as they could. Australia is pretty well known globally for having a lot of big banks and different financial services, regulated industries running very important systems in the cloud. And absolutely, the federal government recognizes cloud both as critical infrastructure itself and also chooses AWS to run some of its most important workloads. The census is a really good public example.
[00:02:31] Speaker A: So let's swap over to Min now. What's been happening here on the ground, from your perspective?
[00:02:37] Speaker C: For me, from a public policy perspective, it's an incredible opportunity to hear from our customers about the key policy issues that are relevant to them. In the fields I have expertise in, that's obviously cyber security, artificial intelligence, very topical today with our big announcement of the launch of Bedrock in Sydney, and critical infrastructure. And it was a pleasure for us to have Pete Anstee, the first assistant secretary for cyber policy at Home Affairs, speaking with us on critical infrastructure resilience today. For me, that critical infrastructure resilience in the cloud talk was a really great opportunity for us to articulate to our customers: these are your potential legislative obligations, these are the things you should be thinking about, and this is how we can help you achieve that resilience. Whether you're a critical infrastructure provider or not, all of these things are equally available to all of our customers. So that's a really great story for us to tell, and hopefully it starts a lot of really productive conversations for us with both government and with our customers.
[00:03:48] Speaker A: Can you elaborate more on the Bedrock announcement for those who are not familiar?
[00:03:52] Speaker C: Yeah, absolutely. So Bedrock, as I'm sure many of your listeners know, is our generative AI and large language model platform.
It gives our customers access to the leading large language models out there, both provided by Amazon and also by our partners, such as Anthropic. It's really exciting for us to launch that in Australia, in the Sydney region, particularly for our customers that have particular latency requirements, but also for our customers that have regulatory concerns about things like data localization. This gives you the opportunity to take full advantage of our artificial intelligence suite right here in country.
[00:04:41] Speaker A: Yeah, that's an interesting one, especially on the sovereign piece. So there are customers still out there. Sorry, there are companies out there still saying, hey, we're sovereign. But when you actually go through it, not really. What's with that?
[00:04:53] Speaker C: I think a big part of that is just understanding what your regulatory requirements are. For particular industries, there are going to be those types of requirements; we know that for the financial sector and the healthcare sector. For other entities, it's not really a question of where data needs to be located. It's about the types of security that you're able to put around it and, as we discussed in that critical infrastructure session this morning, the types of resiliency that you're able to build. So for us at AWS, we have our two regions within Australia, in Sydney and Melbourne. That gives you access to multiple availability zones, and within those availability zones are multiple data centers. That's an extraordinary level of infrastructure resilience right here in country. And that is something that really should be, and we know is, a primary consideration for many of our customers with those requirements.
[00:05:48] Speaker A: Can you discuss a little bit more about what was discussed this morning in terms of the resiliency?
[00:05:53] Speaker C: So Pete Anstee, our guest speaker, covered the critical infrastructure requirements in terms of the legislation that was passed in 2021 and 2022, as well as the proposals that came through as part of the cybersecurity legislative reform proposals released earlier this year under Minister O'Neil's cyber strategy. And then one of our senior technologists, Jess Modini, gave a perspective on how to build that resiliency in the cloud. We were able to frame that alongside the hazard pillars as they're defined in the critical infrastructure reforms: personnel security, physical and natural hazard security, supply chain security, and of course, cyber security. In that presentation, Jess gave our customers some tangible things you can do using cloud on AWS to build your resiliency across those pillars and help these critical infrastructure entities, these customers, meet their regulatory requirements as part of the risk management program that exists in that legislation. So it was all about demystifying the legislative aspect and then putting it in practical terms: this is what you can do to help meet those requirements.
[00:07:13] Speaker A: So what do you do to help meet the requirements?
[00:07:15] Speaker B: Yeah, I would summarize it in two ways. We always talk about, from a cloud services perspective, what we call our shared responsibility model. AWS was the first to publish this model, I think it was 2013, and the industry has really adopted it as standard now. The service provider, AWS in this case, has a responsibility for a lot of the security in the ecosystem: the physical security of the data centers, the personnel security of our database administrators, the security of the hypervisor, and the other technology services we offer. But our customers have a responsibility as well. When they choose to consume a service, or they choose to apply a security configuration to that service, that's a choice that they make, and they have the responsibility for that. So what we said specifically to critical infrastructure was that, one, we meet the Australian federal government's requirements around critical infrastructure through the hosting certification framework, HCF I think it's called, and through the IRAP assessor program; that's our below-the-line, or service provider, responsibility. And what Jess also talked about, especially, was how customers themselves could follow our guidance to meet their responsibility across the four pillars that Min defined, so that they could meet the government's requirement. So the bottom line was: you can meet your critical infrastructure requirements on AWS. We meet them ourselves as a service provider, and we help customers with guidance so they can meet the federal government's requirements as well.
[00:08:38] Speaker A: Do you think the federal government's requirements are a little bit convoluted to understand in your experience?
[00:08:43] Speaker B: No, they're not. We had a really clear mapping across the four key areas that Home Affairs laid out and how they mapped to the three most important pieces of guidance from us, which are around resiliency, security and operational risk. We've had that guidance in the market for a long time now, many years, and we've now helped customers by mapping it against the Australian federal government's requirements. So it's a new way of thinking about it, but it's not complicated.
[00:09:08] Speaker A: So recently, Phil, you've been promoted. Congratulations. Talk to me a little bit more about your role as global head of customer security outcomes. What does that actually mean for people?
[00:09:19] Speaker C: Phil?
[00:09:20] Speaker B: Sure. Global Services is a part of the AWS business that's existed for a while. It's a combination of all of what I call the people-powered services. The different business units inside of Global Services include AWS Support, Training and Certification, and Professional Services. You could think of this as the people who are building and helping our customers on top of our platform. Inside of Global Services, we have a lot of security capabilities as well. So I'm part of a team that's looking at how we better deliver those security capabilities to customers. We do that today through a number of different areas. We train customers about security through Training and Certification, and we've got professional certifications around security. We help customers build securely through our Professional Services teams. Our managed services teams help customers operate securely, and we have a customer incident response team that helps customers respond to security issues if they need help. So we're continuing to look at these capabilities and how we can continue to iterate to help customers with security on top of our platform.
[00:10:18] Speaker A: Are you finding, by doing all of that, which is a lot and probably more than maybe other vendors out there, are you seeing the needle shift now? Because perhaps you're helping demystify people's understanding of cybersecurity, making it easy, simplifying it and giving them help and guidance. Are you seeing that now in people's response, the needle moving?
[00:10:40] Speaker B: Yeah, absolutely. Australia's a really good example of that. Five years ago, we were talking publicly about how the financial services regulator, APRA, had updated its guidance to allow highly critical core systems for financial services entities to run in the cloud. I was lucky enough to be next to nib, the insurance company, when they announced their first core system running on top of the cloud, which was one of the first highly regulated systems in the world, here in Australia. Fast forward to just the other day, when nib announced that they're done, they're out of their data centers entirely. All of their business, all of their customers' information, all of their critical business processes now run on top of AWS in the cloud. That's a really important milestone for the industry. And I think what you saw today, having Home Affairs on stage next to AWS at an AWS conference, talking to the whole industry about how cybersecurity is important, is a good benchmark for where Australia has really come. The government understands it's important, our customers understand it's important, and obviously, we've always understood it's important.
[00:11:48] Speaker A: So, a couple of things more on that. With your role now, what are you seeing as we progress into 2024? Because I spoke to you last year, and obviously things change daily in our industry, but what do you think is going to happen moving forward towards the end of 2024? Last year, there was no real massive public discussion like the one you've had today about critical infrastructure with government. So, obviously, things are moving pretty fast.
[00:12:16] Speaker B: So the focus on resiliency is a really common theme, both in Australia and globally, and we've talked about that a bit. I'd say the other theme we're seeing from a security perspective is that many businesses are looking at the role of generative AI and how to use it to really improve security outcomes. Happy to talk more about that. And then from a technical perspective, we're seeing customers ask for more help responding to incidents that they might have, using our expertise in how to get security right in the cloud, and also looking at some other forms of identity security, like authorization.
[00:12:49] Speaker A: Okay, let's go back to gen AI. Obviously, depending on who you speak to, there are multiple views on the space. So maybe we start with you, Phil, and your perspective, and then we can flip over to Min to talk more on the policy side of things. Just wherever your mind goes. It's a big topic. People out there still feel rattled by it.
[00:13:10] Speaker B: Sure. I'm going to interview Min for a second here. Do you want to start with broader AI themes? And then I've got a couple of specific security examples.
[00:13:18] Speaker C: We saw AI become a particular policy focus over the last probably 18 months because of the release of broadly available generative AI products.
But the conversation around AI has obviously been going for a lot longer than that. I was involved with the initial development of the government's ethical AI roadmap way back in 2018. So what we're talking about now isn't necessarily something new, but the emphasis brought about by generative AI, and the speed with which we're able to do these things now, has obviously created an environment of increased attention, not only from government but from industry: how do we maximize our use of these technologies and really harness their benefits? And this is a really important part of the conversation in Australia, not just in terms of productivity. If we dive into security specifically, and I'll hand over to Phil in a second, AI has the capacity to be, and I hesitate to use the term because it is so ex-intelligence-agency of me to say it, a force multiplier, where you have these tools at your disposal to help you do the things you already do, better. And that's one of the real benefits of AI. So if you're a frontline SOC analyst, you've got a ton of alerts coming through, and you can use AI, whether that's generative AI or other forms of AI. And remember, when we're talking about AI, we're not talking about a singular technology, we're talking about a constellation of technologies.
There are going to be multiple things at your disposal to help you sift through that noise, reduce the noise ultimately, and perform your everyday tasks at a higher rate of speed and at a higher rate of accuracy. And we're seeing that across multiple industries.
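The noise-reduction idea Min describes, collapsing duplicate alerts and surfacing the most severe first so an analyst sees signal rather than volume, can be sketched in a few lines. The alert fields, severity scale, and ranking rule below are invented for illustration; they are not any specific AWS or SOC product's schema.

```python
# Toy alert triage: group duplicate alerts, rank the groups by severity
# and volume, and return only the top few for a human analyst to review.
# All field names and scores here are illustrative assumptions.
from collections import defaultdict

def triage(alerts, top_n=3):
    """Collapse duplicates by (rule, resource), then rank by severity and count."""
    groups = defaultdict(list)
    for alert in alerts:
        groups[(alert["rule"], alert["resource"])].append(alert)
    ranked = sorted(
        groups.values(),
        key=lambda g: (max(a["severity"] for a in g), len(g)),
        reverse=True,
    )
    return [
        {
            "rule": g[0]["rule"],
            "resource": g[0]["resource"],
            "count": len(g),
            "severity": max(a["severity"] for a in g),
        }
        for g in ranked[:top_n]
    ]
```

For example, two repeated root-login alerts on the same account collapse into one high-severity group, so the analyst reviews one item instead of two.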
CSIRO released their report, I think it was just last week, about the benefits of AI in the healthcare sector, particularly in terms of, again, that augmentation of things that you're already doing to help drive better outcomes and increase speed. So it's a very exciting time for the technology here.
[00:15:41] Speaker B: Like much of the technology industry, I've had a lot of chances to talk about generative AI over the last, broadly speaking, 18 months. I've been talking to journalists in the US, in Japan and Singapore and Malaysia and the Philippines. And the simplest way I can describe it from our security capability perspective is that we're using generative AI to do one of two things: either help humans talk to computers, or help computers talk to humans. When we're helping humans talk to computers, we're using things like natural language query. For example, in some of our services, like Amazon Inspector, our vulnerability management service, you can just ask the technology in simple English terms, can you tell me about this thing that happened? And the generative AI's job is to go and fetch all of that information, sort it, and return it to the human in a way that's really easy to understand. This is beneficial from a security perspective because there are a lot fewer security humans than there is security data out there. So it's very helpful to be able to take people with less technical backgrounds and allow them to interface with those computers. From the other perspective, it also helps computers talk to humans, and that's where we look at summarization. In services like Amazon Detective, which looks at a number of different technology sources around the AWS environment and helps people make good, reactive security decisions, upon being asked it summarizes all of the different data inside the environment and gives it to humans in a really simple, easy-to-understand way. So the way I see it, we're using generative AI from a security perspective to help people ask simpler questions of computers and to help computers summarize lots of data simply for people.
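The "computers talking to humans" direction Phil describes ultimately means packaging lots of security data into a summarization request for a large language model. The sketch below only builds such a request; the model ID, prompt wording, and payload shape follow Bedrock's published Anthropic message format, but treat them as assumptions rather than a documented AWS security workflow, and actually sending the request (e.g. via boto3's bedrock-runtime client) is deliberately omitted.

```python
import json

def build_summary_request(findings,
                          model_id="anthropic.claude-3-sonnet-20240229-v1:0"):
    """Package raw security findings into a summarization prompt for an LLM.

    The model ID and prompt are illustrative assumptions; a real call would
    pass these values to bedrock-runtime's invoke_model.
    """
    prompt = (
        "Summarize these security findings for a non-specialist:\n"
        + "\n".join(f"- {f}" for f in findings)
    )
    return {
        "modelId": model_id,
        "body": json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }
```

The design point is the same one Phil makes on stage: the human-facing value is in the prompt and the summarized answer, while the plumbing stays ordinary API calls.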
[00:17:18] Speaker A: I have another question. I saw an interview done a while ago with Sam Altman, the CEO and co-founder of OpenAI, and he made this statement: if this technology goes wrong, it can go quite wrong. Now, this is the guy who is leading this type of organization. What do you think he means by that statement?
[00:17:41] Speaker C: I think what we need to remember when we're talking safe and responsible AI is that we at AWS, and many of the major companies involved in developing and deploying AI services, have made a series of commitments around delivering these products safely, but also helping our customers deploy them safely and responsibly. The regulatory processes that are happening around this now are an important part of that dynamic. Government has a really important role to play in setting the regulatory framework, setting the regulatory scene, to boost confidence in these technologies. What we do, and what we're doing right now, is ensuring that we're building safe and responsible AI into the technologies that we're deploying, so our customers can get in there and start using these products with the confidence that they're going to be able to meet the thresholds that will eventually be set from a legislative perspective and are being set internationally from a standards perspective. I spoke about all of these processes in the podcast that we recorded together last week: the international processes, the domestic processes, the standards, the guidance, the frameworks. They're creating, at that international level, a fairly cohesive set of arrangements for how we do this. That international cohesion is going to be a really critically important part of mitigating any risks around AI, in the same way that you need to mitigate risks around any new technology.
And this isn't something that we should be afraid of. Technology inherently has risks attached to it. We know this. That's why we put regulatory measures in place around it. And that's why we at AWS, and we as the broader technology community, are so committed to doing exactly that when it comes to the question of AI more broadly.
[00:19:47] Speaker A: And just to follow on from what you were saying, Min, what does responsible AI look like? The reason I ask is that in the media, I'm hearing a lot of this: responsible AI, we can do things ethically. Do you think people have different versions of responsible AI? And if so, what are they, or what's AWS's version?
[00:20:05] Speaker C: Well, at AWS, we consider six different dimensions of responsible AI. And for us, it's really key to follow the science around this, so our key dimensions of responsible AI may change as the science changes. But fundamentally, we're talking about things like bias, accountability, transparency, security and privacy.
We build these things into our products every step of the way, and we release tools to help our customers do the same. SageMaker Clarify is one of our key products for assisting our customers in developing safe and responsible machine learning practices, and we have our responsible machine learning guide that's available publicly for all of our customers. Within Bedrock and within our generative AI services, we've made the voluntary commitments to the White House that commit us to delivering not just a certain set of principles, but a certain set of technical specifications for what we're going to build into our products now and into the future. What we hope that does is provide our customers with a level of assurance that the products they're using are robust across all of these different dimensions. And it means that when it does come time for government to actively regulate these things, and that's happening globally, we're seeing it happen everywhere, we'll be able to demonstrate to the highest degree possible our commitment to safe and responsible AI.
[00:21:41] Speaker A: So how would you determine if something's biased or not? I'll give you an example. I'm obviously wearing a hot pink jacket. I'm more biased towards pink, perhaps, but say I say green sucks. How does someone determine that? What does that look like? Because everyone's got different views and versions and beliefs and values. How do you find that equilibrium?
[00:21:59] Speaker C: So let's take your example of "I like pink." Saying I like pink doesn't mean green sucks. That's the perspective that you have, and that is a valid perspective for you, but that's not a valid perspective for everybody. But does that mean that you're discriminating against green? Yes, you are.
[00:22:18] Speaker A: But what does that look like for me? Yes, but if you were to be objective about that?
[00:22:26] Speaker C: Well, we have laws already in place that are directly relevant to this, and this is a fundamental part of the AI conversation and something that we speak to quite a lot. AI does not operate in a regulatory vacuum. All the laws that are already out there still apply; anti-discrimination laws are a really great example for this particular conversation. So when we're talking about what bias is, we can look to things like anti-discrimination laws to guide us in terms of what bias is and what it looks like. From the Australian perspective, that is clearly bias based on race or ethnicity, sexual preference, gender, or disability. All of these are defined as areas where bias may take place and where there are protections in place.
Being able to test for that is a key part of using any data-based system, not just an AI system. If you're using data to help you make decisions, then on foundational principles of data analysis, you absolutely need to be interrogating how you're coming to certain conclusions and really interrogating the rationale of your answers.
When I was in the intelligence community, we called that the analysis of competing hypotheses, where you needed to, yes, test that a hypothesis was true, but you also needed to test that it was not true, to check your own cognitive bias, to make sure that you're not just following your own preconceived notions and thinking. And that is a central part of all data analysis.
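Min's point about testing a hypothesis both ways carries over directly to bias testing in data systems. One widely used check, borrowed from employment-law practice rather than anything AWS-specific, is the "four-fifths rule": compare selection rates between two groups and flag the outcome if the ratio drops below 0.8. The sketch below is a minimal illustration of that idea, not a complete fairness audit.

```python
def selection_rate(outcomes):
    """Fraction of positive (1) outcomes in a group's 0/1 decision list."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher (1.0 means parity).

    A common rule of thumb (the 'four-fifths rule') treats a ratio below
    0.8 as a signal to investigate possible bias. Illustrative only.
    """
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    low, high = min(rate_a, rate_b), max(rate_a, rate_b)
    return low / high if high else 1.0
```

For instance, if one group is approved 75% of the time and another 25%, the ratio is about 0.33, well below the 0.8 threshold, which is exactly the kind of interrogation of conclusions Min describes.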
[00:24:06] Speaker A: Do you think that's the part that people, and I mean just general people out there, don't understand? Because they're probably making an assumption here, again going back to the pink jacket situation, that pink is better, therefore green sucks. Do you think that maybe that hasn't been addressed as clearly as you've articulated it today, on the bias side of things? Do you think people are fearful, then, like, oh, it's going to be biased?
[00:24:28] Speaker C: Well, I think what's really critical about this is that when we're having this conversation, we're not actually talking about AI. At its core, it all starts with data governance and understanding data governance principles. So understanding what you're inputting into a system and understanding the potential outcomes as it relates to that. That is a really critical part of all of it. Always test your sources, the veracity of your sources, and test how you come to certain conclusions. That should always be part of a conversation whenever you're using any form of analytics.
[00:25:04] Speaker A: So is there anything both of you would like to wrap up with in terms of closing comments, final thoughts?
[00:25:12] Speaker B: I love to see the different technology companies that come to an event like this. Some of them are big, established technology names; down on the expo hall, you saw some of those that you recognized before.
And there are a lot of new players as well. I won't mention any specific names, but we're seeing a lot of innovation in Australia, specifically around cloud security and application security, and a lot of startup activity focused in that area. And that's really exciting for me. It's exciting because these young builders have the technology platforms that allow them to build really quickly, iterate on these new ideas, and get these products out to market faster than ever. And they're choosing to do so not only in areas of cybersecurity that are fundamentally going to help everybody, but in very advanced forms of cybersecurity that are really aligned with where we see the market going. So it's really nice, from the big service provider perspective, to sit back and see a lot of the positive messages that we've been putting out there, and a lot of the technology tools that we've been giving builders to build with, turn into new products that the twenty-somethings of the world are founding new technology companies on. So I like to come to Summit to see that innovation cycle keep happening.
[00:26:21] Speaker C: For me, the really exciting thing about days like today is that you get to see human imagination in full flight, in terms of how our customers, and how we, are taking these technologies that we're building and creating a different kind of world. And, you know, you and I went and saw the Lego set earlier today, a model tiny town and a demonstration of how smart cities are going to help drive sustainability and the distribution of energy with renewables. That's an incredibly exciting thing to see tangibly at work in front of you. And these are the kinds of conversations that are so important to have, because what we're doing here today, fundamentally, is talking about how we're going to build the world of tomorrow. And I can't think of many things much more exciting than that.
[00:27:15] Speaker B: Plus the solar car, plus the soccer shooting, plus the AI modeling, plus all the fun stuff on the floor. So Sydney Summit is fun. We hope to see everybody in 2025.
[00:27:26] Speaker A: And there you have it. This is KB on the go.
[00:27:30] Speaker C: Stay tuned for more.