Episode 289 Deep Dive: Jarrod Lucia | Securing AI-centric Infrastructures

KBKAST

Jan 22 2025 | 00:39:42


Show Notes

In this episode, we sit down with Jarrod Lucia, Evangelist Architect, AI and SP Infrastructure, APJC at F5, as he discusses securing AI-centric infrastructures, specifically the emerging AI factories powering today’s workloads. Jarrod delves into the rise of AI factories, large data centers purpose-built for AI workloads, and the advanced sustainability measures being considered in their design, such as green energy mandates and innovative water recirculation systems.

We also explore the unique security demands of AI factories, explaining the need for both traditional data center precautions and modern zero-trust models. Jarrod shares insights on the various stages of AI factory development and integration, emphasizing the importance of securing data throughout its lifecycle within these specialized environments.

As a seasoned industry professional with over 25 years of experience in Service Provider and Cloud Engineering, Jarrod has developed a unique blend of technical expertise and leadership skills. His background in Mobile and Satellite technologies has equipped him with a deep understanding of network design, implementation, and troubleshooting.

He’s passionate about staying at the forefront of emerging technologies, particularly Artificial Intelligence (AI) and its applications in Cloud, Service Providers, and IoT at scale. The convergence of Edge Networks, Cloud capabilities, and AI Factories will revolutionise industry and how applications are delivered, and he’s excited to explore the potential of AI infrastructure and its impact on the future of technology and business.


Episode Transcript

[00:00:00] Speaker A: There's a real hunger and a need from enterprise and companies to use AI for a good benefit. But how do we balance that against sustainability? And one thing we're already seeing, which I find super interesting, is a lot of places now are mandating the use of green energy for the building of these AI factories, because the power requirements and the water are so stipulated and specified. We need to start looking at how we evolve green energy. It's almost chicken and egg, but how do we use AI to potentially get some breakthroughs in energy?

[00:00:39] Speaker B: This is KBCast as a primary target.

[00:00:43] Speaker C: For ransomware campaigns, security and testing and...

[00:00:46] Speaker A: Performance, risk and compliance. We can actually automatically take that data and use it.

[00:00:54] Speaker C: Joining me today is Jarrod Lucia, Evangelist Architect, AI and SP Infrastructure, APJC, from F5. And today we're discussing securing AI-centric infrastructures, specifically the emerging AI factories powering today's workloads. So, Jarrod, thanks for joining, and welcome.

[00:01:11] Speaker A: Thank you so much, Karissa. It's really nice to be here today.

[00:01:13] Speaker C: Okay, so AI factories. Now, I've been hearing this a little bit lately. So tell us, what is an AI factory? What's your definition? What's your version?

[00:01:25] Speaker A: So I guess lots of people would be hearing lots about AI at the moment. We hear about Nvidia, we hear about what xAI is doing, and we hear about, you know, a lot of the hyperscalers and what they're building. But also in our region in particular, across APJC, we have a lot of countries building what is being termed an AI factory. And really, all that is is just really large data centers that are purpose-built for AI workloads. And there's probably a few reasons why we're starting to see this happen. In our region there's not huge access to a lot of GPUs, and so lots of countries don't always have access to AWS and Google and Azure to use those services on the scale that they want. So everybody's busy buying these GPUs and then building these factories. And the term factory is, I guess, exactly how it sounds: purpose-built machinery inside the factory, things like CPUs, GPUs, and what we call DPUs, data processing units, and all their jobs are basically to work together to process AI models as fast as possible. And being like a factory, you need essentially some raw materials to put in the front of the factory, and that's the data, the enterprise data. The factory works on that with the models and these components, and then out the other end comes basically a business outcome. So whether they're developing internal processes for things like their own chatbots inside, they might be using it for business intelligence on a lot of data they have internally, or they may be using it for things like customer service or those other types of things. So that's, I guess, the definition I think of when I think of an AI factory.

[00:03:21] Speaker C: And so would you say, in your experience, because there are so many, and I always ask, well, what's your definition? Because people have variations on the definition. So would you say that people are perhaps unsure what an AI factory is? Because obviously now AI is the main topic of conversation in a lot of my interviews, etc.
So would you still say that people are trying to figure out what that means, or that people use the terms sort of interchangeably? What does that look like?

[00:03:47] Speaker A: Yeah, I think you're right. AI's probably become so topical and ubiquitous at the moment. But I think most people, when they think of AI, are thinking of, you know, the ChatGPTs and Claudes, those kinds of interfaces that we use. Whereas an AI factory is a little bit different. It is purpose-built, and the people building these AI factories are generally companies and service providers that have access to power, because power is really important to drive all the equipment; cooling, because these things run really hot; and also connectivity, so access to large amounts of customer data or customers that are going to be able to use these factories. So I think we will see the understanding change a little bit as we see more and more of these factories starting to be built across the region.

[00:04:43] Speaker C: And when you say change a little bit, what do you think is going to change in terms of the evolution of these AI factories?

[00:04:49] Speaker A: In terms of change, where we're starting to see that change occur is countries, I guess, taking control of what they're going to use AI for. Take India, for example. We do a lot of work over there around the AI factories, and they're doing a lot of work for their languages, because, you know, not all the English models developed that work in a lot of the world will work for all the different languages they have in India. So they are building these AI factories to start to make more meaningful models that they can use for, you know, government services, someone contacting the police through an interface. You know, they can have Hindi as a native language for them, or they can have another dialect just for that person. And we see that also in, say, Indonesia with Bahasa, where they're doing the same thing. Governments especially, we see a lot where they're starting to understand that by using these factories they're able to get a lot of really interesting and good business outcomes, especially for government, where they can improve people's lives through AI by using these factories to develop language models and things like that.

[00:06:07] Speaker C: So one thing I'm curious to understand, for people who perhaps aren't aware: what would you say is the main difference between an AI factory and a traditional data center? I mean, you sort of already explained it in terms of the cooling and all of that type of stuff. Is there anything else that really stands out?

[00:06:22] Speaker A: Yeah, so data centers generally, and I'll classify them generally, house lots of different functions, typically for IT workloads. So they can host cloud functions, they can host databases, they can host a lot of different types of workloads. AI factories are very, very specific, even to the point where the AI factory, or the data center that is built for them, is very specific around the types of cooling being developed. A lot of these new data centers are actually water-cooled, which is pretty interesting. It's a bit like your car: they put a radiator into the rack, and then they have cold water and hot water, and they're passing water over all the equipment to cool it because it gets so hot. So that's probably one big difference.
The power requirements are a lot larger than you would have in your normal data center, because with the equipment and these GPUs and the servers they're running, the scale is much, much bigger. And probably the third point is that the data centers built for these purposes are purpose-built for AI factories. They generally don't have other workloads running, like cloud services or any of those types of things. So that's where we're really seeing how these are becoming quite different and actually quite specialized.

[00:07:48] Speaker C: Okay, so one of the things I'm interested in, as you were talking it came to my mind: I hosted an event for Equinix, you would obviously know who they are and what they do, and they spoke a lot about sustainability. And then I think I was at an event with NetApp and they were talking about sustainability, but then on the consumption front, and I don't want to say exactly how much, but how much water AI actually uses when you do even one prompt in ChatGPT, which is quite basic, right? So how do you think that's going to go now with these AI factories, and just the way the world is moving, are people considering this as a factor, or as a main factor, would you say?

[00:08:31] Speaker A: 100%. I think, you know, in all the customer conversations and all the conversations I've had with C-levels across the region, it's actually becoming a really important factor. And it's a couple of things you said there, around not only sustainability but, you know, how do they take this forward when there's a real hunger and a need from enterprise and companies to use AI for a good benefit? How do we balance that against sustainability? And one thing we're already seeing, which I find super interesting, is a lot of places now are mandating the use of green energy for the building of these factories, because the power requirements and the water are so stipulated and specified. They are saying, look, you know, we can't take power from potentially old methods; we need to start looking at how we evolve green energy. How do we, and it's almost chicken and egg, but how do we use AI to potentially get some breakthroughs in energy? There are some customers we work with in Malaysia, for example, that are now building really large solar farms for the specific purpose of powering these AI factories. They're also doing a lot of water recirculation: the hot water that comes back out of the AI factory goes into large radiators on the ground, where it's cooled by natural air, and then it's pumped back into the factory. So a lot of these systems, and some of the interesting developments even outside of IT, are impacting these AI factories.

[00:10:12] Speaker C: Yeah, that's interesting, because I think that's something I'm starting to see come through the conversation more now, the sustainability front, even for data centers. But, you know, around AI, and you might know this yourself, I think it may have been NetApp that mentioned it at their event, talking specifically about the APJ region: customers are very focused now on sustainability in terms of vendor selection. Are you seeing that?
[00:10:38] Speaker A: Yes, we are. You know, I've been in the industry for a long time, particularly on the service provider side, and service providers have had this in RFPs for a little while. But now it's becoming one of the top-of-the-list things. They do want to understand, you know, what is your power consumption, how many watts do you use, how energy efficient are you? Can you run on low-power systems? Can you run on ARM processors, because they can be lower-powered than x86 architectures? So yeah, I think there's a lot more innovation, and maybe pressure is the wrong word, but there's certainly a push from everyone in the industry to become more energy efficient, more green in terms of what we do. You know, some of the work we're doing with an Nvidia partnership is developing a way to deploy software on much smaller devices, but also having those smaller devices push towards one terabit in terms of speed, like you would have in a normal large chassis. But this device is a NIC card. So there are a lot of those innovations starting to happen, and I think we'll see that across the industry.

[00:11:56] Speaker C: So I want to slightly switch gears now. Now, Jarrod, you say that with AI factories functioning as the new data centers, their security demands are unique. Because this is a cybersecurity podcast, I'm really curious to get into this. How does this look, and how do you see this?

[00:12:15] Speaker A: So this is an interesting one for me, I guess because for me it feels a little bit like we're going Back to the Future. You know, 20, 25 years ago, maybe I'm showing my age, we used to have data centers where we did a lot of hosting. There were security requirements around, you know, DDoS and firewalling, and then multi-tenancy: how you divided up those hosting resources. Did you give a customer a full server, and then you have network segmentation and separation? Are you doing sharing on the same server? And, you know, you did VMs, and again you had that network separation and other things. And so, going back to the future like that, we're now building these really high-powered data centers, but some of those fundamental security considerations don't change. And interestingly, what we've found over probably the last 10 to 15 years with the rise of cloud is that a lot of people have become very good at cloud security and the cloud security models, zero trust and those types of things. And so what we're seeing is this crossover of security, where we have that traditional data center security which we know is important, DDoS security at the front door of the data center, but now those zero-trust cloud models are also being laid across, because these factories are multi-tenanted in many ways. You know, there'll be many customers, particularly government departments, using these types of factories, so things like storage and network segmentation: how do you separate and not let different customers inside of Kubernetes, for example, see each other? A lot of these are now the new security ideas that we have to put into these AI factories.
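To make the Kubernetes tenant-separation point concrete, here is a minimal sketch, assuming a reachable cluster and the official kubernetes Python client, of a NetworkPolicy that stops pods in one tenant's namespace from talking to pods in another's. The tenant namespace names are hypothetical, not from the episode:

```python
# Minimal sketch: per-tenant network segmentation in Kubernetes.
# Assumes a reachable cluster and the official "kubernetes" Python client;
# the tenant namespace names are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run in-cluster
api = client.NetworkingV1Api()

def isolate_tenant(namespace: str) -> None:
    """Allow ingress only from pods in the same namespace; deny everything else."""
    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="tenant-isolation", namespace=namespace),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(),  # empty selector = every pod in the namespace
            policy_types=["Ingress"],
            ingress=[
                client.V1NetworkPolicyIngressRule(
                    _from=[client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector()  # any pod, same namespace only
                    )]
                )
            ],
        ),
    )
    api.create_namespaced_network_policy(namespace=namespace, body=policy)

for tenant_ns in ["tenant-bank-a", "tenant-bank-b"]:  # hypothetical tenants
    isolate_tenant(tenant_ns)
```

A peer with a pod selector and no namespace selector matches only pods in the policy's own namespace, which is what gives each tenant its walled garden.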
[00:14:09] Speaker C: And so when you say put in, are they not sort of embedded in at the start? I know you said the term Back to the Future, but is it now that people are taking a step back and saying, well, okay, we need to, to use your vernacular, put these security measures in place now? Or are they being considered? How is that conversation going, would you say?

[00:14:27] Speaker A: Do you know, I think the conversation is actually in the right phase. And maybe just to clarify, where we see a lot of these AI factories, a lot of them are still in the design and build phase. Some of these factories are not up and running yet. And what's really interesting about that is we're already having the security discussion before anything goes in. It's not the bolt-on security we've potentially had in the past, which I think is a really good sign that security is being designed into these things even before they get completely built or turned on. The other interesting point, which I've seen in the service provider world but haven't seen so much in the enterprise world before, though I'm definitely seeing it now with AI factory builds, is that customers are taking a really interesting design approach. They know what the size of their end AI factory will be, how big it will be, whether they have 4,000 GPUs, 16,000, whatever it is. And what they're doing, which I think is really smart, is taking a small amount and doing a really dedicated, specific production POC, essentially, and that involves all elements. It involves storage, network, security, automation, all of those things. What they're really trying to do is figure out, on a small scale, maybe with 50 or 100 GPUs, how all the engineering and all these pieces will fit together. Have we missed anything? Do we need components in there? Are there security layers we've missed? Once they get through those POCs, then the large-scale build-out will happen. So I think that's a really interesting part that we're seeing across the region as well.

[00:16:16] Speaker C: So just going back to, you said they're in the design and build phase of these AI factories. When do you think they'll start to be more ubiquitous, where this is what everyone's doing? How long do you think it's going to be until we're in this new era of AI factories and the new sort of modern data center, if you want to put it that way? When will you start to see that shift come into the market, would you say?

[00:16:43] Speaker A: I think we'll start to see that shift next year. We've worked with a lot of customers this year, and it definitely feels like the design phase was this year, where there are active POCs, a lot of consultation, a lot of bringing together multiple parties to make sure that the design phase was good. I think next year we'll start to see companies, and particularly regions, really ramp up how quickly they build these AI factories. If you look at xAI, for example, they built the hundred-thousand-GPU Colossus cluster in 19 days or 22 days or something like that, which was a massive, massive effort. And obviously, you know, they want to be first, they want to have those types of numbers. But definitely the type of engineering that went into that build is the same type of engineering that we will see across our region in these other data centers, just at a smaller scale.

[00:17:40] Speaker C: What about adoption towards these factories?
Now, I ask this because, like, we've seen over the years, when the Internet came out, and then obviously with cloud, and then people working from home, and all of these sorts of changes. Would you say, in your experience, Jarrod, that with everything we've seen in the tech world even over the last five years, people are adopting things faster than perhaps they traditionally used to, in terms of their mindset and their approach?

[00:18:09] Speaker A: Definitely. I think AI is one of those things that almost forces the adoption, to be honest with you, and you're probably the same. I don't think I've met anyone in the last six months who doesn't want to talk about AI: what it is, how they can use it, how they can derive value, how they can protect against it, all of those types of things. So I think the adoption is definitely there. The adoption, though, is at this stage more focused on, you know, Anthropic, OpenAI and the cloud providers. We do see that for many customers there's basically a three-phase approach. They generally test concepts in the cloud because it's easy, it's fast, you can start to test the concepts. Now, AI requires a lot of data, and so the second phase is normally when they're building these POCs and testing scale and things like that. This is more a hybrid type of approach, because generally most enterprises have a lot of data. You mentioned NetApp before; their data is in NetApp storage, and so they need to connect that to where the model is running, or bring the model to that data. So there's this hybrid approach. And then we think that third phase, once these AI factories are up and running, is where that will start at scale. And that scale really depends on the type of work they're doing. Are they thinking of actually training a fundamental model, maybe a language model for a lot of the countries in our region? Or are they happy to use a model that exists, but now they want that model to do the work for them? They've got all this data and they'll start to get that work done. Actually, in talking with a lot of people security-wise, I think a lot of them know that AI is being used, but they don't necessarily know how it's being used. Is it safely being used in their organization, and, you know, what are the costs and regulation around that use? So I definitely think some of those things play into needing an AI factory locally in region, because some of those things are taken care of for you.

[00:20:26] Speaker C: So in terms of customers, do you think that some of them are still unsure of how they can leverage, or fully leverage, an AI factory? Because again, it's still relatively new, right? It's not like, oh, you know, I've got 20 years' experience in this. So are you still finding that customers are finding their feet around how to wrap their head around it? What are you seeing on that front in terms of the commentary?

[00:20:46] Speaker A: Yeah, so we definitely get a lot of questions around those pieces. You make a really good point: this is great, this AI factory or this capability is available, but now what do we do with it? What we're actually finding is that a lot of the work is on the flip side, a little bit like a telco or a service provider that has built mobile networks, built all these networks, and then has to provide essentially an end-to-end solution.
So we are working with various customers that are building the AI factories, and definitely the logical question from them is: okay, we know enterprises want to do these things, but they might not know how to get started, they might not know how to use our factory. So what they're doing is building packages to then take to enterprise, like banking and finance and others, to say, you know, we've got the factory, we have connectivity options, we even have different types of LLMs for different business capabilities; let's connect your data, and then they can start to do the work. So that's more of what we're seeing: enterprise definitely wants to use it but, exactly as you say, might not know how. Whereas the AI factory, they've built the factory, and now they have to build the capability for companies to use that factory.

[00:22:10] Speaker C: And that's another thing I wrote an article about for NetApp. They're sort of encouraging their customers to build their own AI factories. What are your thoughts on that?

[00:22:19] Speaker A: So I think this comes back to that definition of AI factory, as you say. My definition, I guess, is a little bit different, from the perspective of the customers that I engage with. Most of the companies that I work with across the region are spending upwards of 100 to 200 million dollars on a single AI factory. So the money is huge, and a lot of that investment is actually co-investment with governments and things like that. So there's a lot of value that government is obviously going to derive. On a smaller scale, and I think I get where NetApp are talking around companies building their own AI factory, the concepts still remain the same. You need that front door, you need capability around CPUs and GPUs to process, you need access to an LLM, and essentially you need to connect all that data together. So on a smaller scale, definitely, an enterprise could build their own, and that building might be renting from the cloud, might be renting from one of these AI factories, but just at a smaller scale. In my mind, that's how I would see that idea playing out.

[00:23:38] Speaker C: So you mentioned before, the AI factory front door. What does that look like?

[00:23:42] Speaker A: So that one is more like our traditional data center, and think of it exactly like our data centers from 25 years ago. We basically had, you know, big routers, big firewalls, big network pipes to funnel data or information into and out of these factories. The really large difference from an AI factory point of view is that it's essentially data coming in, being processed, and then being returned. So don't think of it like a data center or a cloud that's connected to the Internet and hosting those services. It is a very specific factory where very large amounts of data come in, get processed, and then get pushed back. The big difference we do see, just on a networking front, is that the speeds of the networked components that process these AI requests are pushing very, very fast. Most of the AI factories now, the minimum is 200-gig links, but they're more talking 400, 800, you know, 1.6-terabit links, just links between servers. So it's that kind of speed and scale that the AI factories are building towards.
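As one concrete, if toy, illustration of a front-door control of the kind Jarrod lists alongside the big routers and firewalls, here is a minimal per-client token-bucket rate limiter sketch; the rates are arbitrary, and nothing here is from F5 or the episode:

```python
# Toy sketch: per-client token-bucket rate limiting, a classic "front door"
# control next to firewalls and DDoS scrubbing. Rates are arbitrary.
import time
from collections import defaultdict

RATE = 100.0   # tokens refilled per second, per client
BURST = 200.0  # bucket capacity (maximum burst size)

# client_id -> (tokens remaining, timestamp of last update)
_buckets = defaultdict(lambda: (BURST, time.monotonic()))

def allow_request(client_id: str, cost: float = 1.0) -> bool:
    tokens, last = _buckets[client_id]
    now = time.monotonic()
    tokens = min(BURST, tokens + (now - last) * RATE)  # refill since last seen
    if tokens >= cost:
        _buckets[client_id] = (tokens - cost, now)
        return True
    _buckets[client_id] = (tokens, now)  # over the limit: caller drops or queues
    return False

# Example: the 201st back-to-back request from one client gets refused.
assert all(allow_request("tenant-a") for _ in range(200))
assert not allow_request("tenant-a")
```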
[00:25:00] Speaker C: So what do you think people don't get, perhaps, about AI factories? Like I said, there are different definitions, it's relatively new in the market, and people are still wrapping their heads around it. But is there anything that stands out, perhaps, that you want to debunk?

[00:25:14] Speaker A: That's a really good question. In my honest opinion, I think a lot of it is exposure. You know, in different geographies we're all at different stages of using AI, as well as thinking about AI factories in general. India, for example, has a mandate from their government that they want over 10,000 GPUs in country to be used for their own internal use, and that's in their public government strategy documents. And I guess that's one thing we see across the different regions: everybody's at a different stage. You know, in Australia here, we're a little bit lucky. A lot of the clouds we have access to have GPU resources, so we can access them pretty easily and start to do some of that work. So maybe the concept of an AI factory in Australia is a little bit different than it is for someone in, say, India or Thailand or Singapore, those kinds of countries. I think that's where we see the different definitions. It's a little bit like our telcos, where a lot of the telcos in different countries are at different stages of how they implement 5G, for example.

[00:26:26] Speaker C: And where do you think Australia sits in the pecking order?

[00:26:29] Speaker A: I wouldn't maybe say it's a pecking order. I think Australia, we have a bit of a lucky position, because we've got quite a lot of cloud resources and access, obviously because we have a close relationship with the U.S. So, you know, there are AI factories being built in Australia. They're more around research and a more traditional AI approach, I guess. I think enterprise will start to look around for more capacity for processing in Australia, especially as more and more enterprises get comfortable with how do I secure my data, how do I put all of these AI capabilities together, and probably as resources become a little bit scarce. As more and more enterprises use the resources we have, say from the cloud in Australia, those will become either more expensive or unavailable. And so I think there's opportunity around other companies building, and Equinix, who you mentioned before, I'm sure they have a strategy around hosting and being able to provide AI-factory kinds of services when different enterprises want to use them.

[00:27:46] Speaker C: So with all these new innovations that are coming out, there's always, in the back of my mind, the risks. What do you see, Jarrod, in terms of risks around AI factories?

[00:27:59] Speaker A: Yeah, so the biggest risk we see at the moment is probably data security, because there are several points where data becomes, I wouldn't say mixed, but where data is co-located, maybe that's a better way to say it. To get your data into the factory, you either have to move it there or you have to securely connect between the factory and your data. So those are the points where your data needs good security, where your data needs to be secured. The other thing about the processes of AI factories today is that it's a little bit like when the Internet first started. When the Internet first started it was HTTP, we didn't have HTTPS, and then people went, hang on, we can see everything in the clear. We now need to encrypt the traffic. And then we had to change the algorithms. So security becomes implemented at every stage.
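To ground the "securely connect between the factory and your data" step, here is a minimal sketch of moving a dataset to a factory ingestion endpoint over mutually authenticated TLS rather than plain HTTP, using the stock requests library. The URL and certificate paths are hypothetical placeholders:

```python
# Minimal sketch: pushing enterprise data into an AI factory over mutual TLS
# instead of plain HTTP. The URL and file paths are hypothetical placeholders.
import requests

FACTORY_INGEST_URL = "https://ingest.example-ai-factory.net/v1/datasets"  # hypothetical

with open("training_shard.bin", "rb") as payload:
    resp = requests.post(
        FACTORY_INGEST_URL,
        data=payload,                        # encrypted in flight by TLS
        cert=("client.crt", "client.key"),   # client certificate: the factory verifies us
        verify="factory_ca.pem",             # pinned CA bundle: we verify the factory
        headers={"Content-Type": "application/octet-stream"},
        timeout=60,
    )
resp.raise_for_status()
print("ingested:", resp.status_code)
```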
In the factories at the moment, customer data sitting wherever it is, whether in the cloud or on premises, is typically pretty well secured. The data is encrypted at rest, and even in flight inside the clouds and things like that, the data is encrypted. Now, when it moves into an AI factory to be processed, you can securely move it into the factory, so it's secured at rest before it leaves and it's secured as it moves. But once it's in the factory and it's being processed, that data has to be in the clear. A lot of the protocols running inside of these factories are HTTP, so they're unencrypted, because the factory is all about speed. If you put HTTPS on, at some point something has to decrypt it, process it, and re-encrypt it, and that takes away speed. So at the moment, in my mind, we have to build capability and layers around these data movements as the data goes through, particularly around the data itself, because that's the important piece. The other part that becomes important is, say a company has done a lot of work to build their own model, and they now want to use that for business purposes. They want to do inference, they want to question the model. That model needs protection, because there are things around model theft. If I can get a model, I can start to uncover how that model was built; I can potentially put something inside of it. So that's the other piece as well, and, back to the same original point, all of this is unencrypted at the moment. So to me it feels a little bit like when we started the Internet, and we kind of know what's coming. We know those things definitely need to be encrypted, but how do we do that at speed? How do we break those technology barriers? I think we will in time, but until we get there, we definitely need layers of security. We need to be able to protect the data at rest and in flight, even while we can't protect it when it's being processed. We definitely need to protect the models, and we definitely need to protect all the APIs through which everything hangs together and talks to each other, because that's the other piece of this: everything in an AI factory, exactly as the name says, is a factory. It's all machine to machine, so it's APIs everywhere. There are no people in there creating commands or telling anyone what to do.

[00:31:29] Speaker C: There's a couple of things you said which were interesting. So going back to the speed, which obviously, you know, everyone wants to do things fast, I get that. But how much slower potentially would it be if it was, okay, it's encrypted, then unencrypted, then encrypted again? Is that really going to slow it down that much, would you say?

[00:31:49] Speaker A: For a lot of the AI factory providers, yes, it can slow it down 30 to 40%. Now, if you look at how long Meta took, it took six months and I think $200 million to train Llama 3. If that took them nine months, it would cost them, say, another $100 million. So speed is money. I think that's the big key for the AI factories, and we have to work on security, but we also have to work on some technological breakthroughs to say, how do we do this encryption thing? Is there something new we can do? Is there a new capability we can develop that gives us the same security and does that encryption, but without the same speed penalty?
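The speed penalty is easy to get an intuition for on ordinary hardware. The toy benchmark sketch below times an AES-GCM encrypt-plus-decrypt round trip against a plain in-memory copy of the same buffer, using the cryptography package. Absolute numbers will vary by machine, and real factories push crypto into NICs and DPUs, so treat it purely as an illustration of why encryption overhead is measured and argued over:

```python
# Toy benchmark: encrypt/decrypt round trip versus a plain memory copy.
# Illustrative only; production factories offload crypto to NICs/DPUs.
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

payload = os.urandom(64 * 1024 * 1024)  # 64 MiB stand-in for a data shard
aesgcm = AESGCM(AESGCM.generate_key(bit_length=256))
nonce = os.urandom(12)

t0 = time.perf_counter()
copied = bytes(payload)                              # baseline: just move the bytes
t1 = time.perf_counter()
ciphertext = aesgcm.encrypt(nonce, payload, None)    # secure the data in motion
plaintext = aesgcm.decrypt(nonce, ciphertext, None)  # back in the clear for the GPUs
t2 = time.perf_counter()

print(f"plain copy:      {t1 - t0:.3f} s")
print(f"encrypt+decrypt: {t2 - t1:.3f} s")
```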
[00:32:36] Speaker C: Okay, so what's interesting, going back to the encryption side of things: at the moment, you're saying that there's lots of data in these AI factories which, when they need it, is not encrypted.

[00:32:49] Speaker A: Let me clarify that. When it's in the factory, sitting in the data store, it's encrypted. It's encrypted at rest, like we do in the cloud, on premise, any of those things. So the data is encrypted at that point. It's when we go to do something with the data that we have to decrypt it. We have to send it into the GPUs in the factory to get processed, and at that point it becomes essentially unencrypted, because the model has to look at that data, and today models don't have the capability to read encrypted data. And you look at OpenAI and Anthropic and Google and a few others, they're definitely looking in that direction as well: how do we make the models more secure, so model theft and other things don't happen, but also, how do we make it so the model can start to read encrypted things? Anthropic just released a protocol called the Model Context Protocol, and its job is to start to join together the model and all of the different data sources it needs to read to do its job. The idea of that protocol is to make it fast and to add security. Now, that's only just been released, so people are definitely thinking about it. But we've definitely got to make sure that, in the interim, before this stuff is widely adopted, we have those layered security controls: the factory front door; API security, because these are all APIs and you can see them; network security; network segmentation; zero trust; all of the good things that we should have anyway. We just have to implement them in a way that offers the best speed.
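Here is a minimal sketch of that lifecycle: encrypted at rest, in the clear only for the window in which the model works on it, then encrypted again. Fernet from the cryptography package stands in for whatever a real platform uses, and run_inference is a hypothetical placeholder for the GPU pipeline:

```python
# Minimal sketch of the data lifecycle described above: encrypted at rest,
# clear only while the model processes it, then encrypted again.
# Fernet is illustrative; run_inference() is a hypothetical stand-in.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice this lives in a KMS/HSM, not in code
fernet = Fernet(key)

def run_inference(clear_bytes: bytes) -> bytes:
    """Hypothetical stand-in: the model must see plaintext to do its work."""
    return clear_bytes.upper()

stored = fernet.encrypt(b"customer record: ...")  # at rest: encrypted

clear = fernet.decrypt(stored)   # the unavoidable clear-text window
result = run_inference(clear)

stored_result = fernet.encrypt(result)  # result back at rest: encrypted
del clear, result                       # keep the clear window as short as possible
```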
[00:34:36] Speaker C: So I just want to talk about, you mentioned before, co-located data, and, you know, there's a whole conversation around that. Where does the whole sovereignty piece come into it? Because I don't hear much about sovereign capability or, you know, data has to be here. That was probably a very hot topic maybe 12, 18 months ago; now I feel like people aren't really talking about it. So with everything you're speaking about today, Jarrod, how does that piece work its way into the conversation?

[00:35:05] Speaker A: So I think there are probably two parts to it, and it's a good example, because traveling around the region, data sovereignty means different things to different countries. In a lot of our neighboring countries, data sovereignty means they don't want the data to leave the country, so they don't want it on foreign clouds. They want it in, you know, government data centers, those types of things, and they want it processed at a sovereign AI factory, which means that factory has to exist in the country. So that's kind of what they mean. Other countries, like Australia, for example, and I think you're right, 12 to 18 months ago there was a large conversation around it. And we are a large user and a forward user of the cloud and other services. And I think the conversation came around because people were just asking the right questions, like, where is my data? Is it still in Australia? Is it not going around the world? All of those types of things. And I think we've built a lot of process around how that works for us. But data sovereignty, I would also extend it to different enterprises. Say, for example, two banks in Australia: they have their own data and they potentially use the same AI factory. That data going into the factory needs to remain sovereign to each of those enterprises. You don't want that data mixed, and you certainly don't want, when the models are being used, that unencrypted data somehow seen by the other party, as nefarious as that maybe sounds. I think those are the two definitions I would have of data sovereignty.

[00:36:47] Speaker C: So where do you think we go from here? What do you think's going to happen next? Obviously we're entering into a new era, as well as new territory with the things you've discussed today. What do you think's on the horizon as we progress into the new year?

[00:37:04] Speaker A: I think next year we'll see a larger build-out of AI factories in terms of those data centers. I think we'll also see a lot more focus on the security of AI and AI models and how people use them. This year, at least with the customers I've talked to, was a year of experimentation and understanding, POCs, those types of things. I think many organizations can now see what they want to use it for, how they're going to get there, but also what steps they need to take to secure it. So I think next year we're going to see a big build-out of these AI factories, but also more momentum in enterprises using AI, rather than just, as we've seen this year and the six months before that, you know, more ChatGPT-type things. I think we're getting down to the business of using AI for enterprise. That's what I think we're going to see next year.

[00:38:08] Speaker C: So, Jarrod, do you have any closing comments or final thoughts you'd like to leave our audience with today?

[00:38:13] Speaker A: I find the whole industry quite interesting, and I mentioned it before, it feels a bit like Back to the Future. I think there are only a few times, in tech careers at least, when there's a fundamental change. There was a fundamental change when the Internet came, there was a fundamental change when cloud started to come, and this is the next fundamental change, AI coming in. And I think it's going to surpass all of those previous to it. So that's probably what I'd leave people with: watch this space. I think it's a pretty cool time to be in IT, security and networking.

[00:38:59] Speaker B: This is KBCast, the voice of Cyber.

[00:39:03] Speaker C: Thanks for tuning in. For more industry-leading news and thought-provoking articles, visit KBI.Media to get access today.

[00:39:11] Speaker B: This episode is brought to you by MercSec, your smarter route to security talent. MercSec's executive search has helped enterprise organizations find the right people from around the world since 2012. Their on-demand talent acquisition team helps startups and mid-sized businesses scale faster and more efficiently. Find out more at mercsec.com today.
