[00:00:00] Speaker A: As an industry, I think we're not doing a stellar job of communicating what confidential computing is or what confidential AI is. I think we're getting there. Technologists are talking about it, CISOs are trying to understand, and CIOs are trying to understand what it is. Because right now it's primarily been a cloud play, you know, slowly transitioning from just being a cloud play to being an enterprise play. And there is a lot of heavy lifting that needs to be done in that environment to get it there.
[00:00:32] Speaker B: This is KBKast.
[00:00:34] Speaker C: Are they completely silent as a primary
[00:00:36] Speaker B: target for ransomware campaigns, security and testing
[00:00:39] Speaker A: and performance, risk and compliance?
[00:00:42] Speaker C: We can actually automate that, take that data and use it.
Joining me today is Anand Pashupathy, Vice President and General Manager, Product Assurance and Security, from Intel. And today we're discussing securing the future: confidential AI and cyber threats. So, Anand, thanks for joining, and welcome.
[00:01:02] Speaker A: Thank you, KB. Very excited to be on your platform and to be talking to you today.
[00:01:06] Speaker C: Okay, so maybe in your words, what is sort of the state of security and how do you see it with your background and your pedigree?
[00:01:14] Speaker A: You know, security, I look at it as a cat and mouse game, right, where the good guys are trying to ward off the threats from the bad guys or the bad people. And there are so many different kinds of threats that can happen at different intensity levels, at a company level or at a personal level. It could be a phishing attack, it could be a ransomware attack; for a company, it could be a denial of service attack. I mean, security attacks can be at so many different levels that each one has its own impact on the individual or on the company. And these are just going higher and higher. There is enough data out there if you Google for it. For example, from an organization perspective, 72% of organizations report that there is an increase in cyber risks, and 42% of organizations have seen an uptick in phishing attacks. Because at the end of the day, phishing attacks are just social engineering attacks, right? It's the human link that is the weak link when it comes to a phishing attack. So threats are continuing to increase. And CIOs and CISOs specifically are doing everything in their power to suppress the impact of security attacks on their companies and on their employees. And Intel is doing its part from a technology perspective to provide as much support as we can in our CPU, in our NPU, or, let's just call them processing units, in our client and data center platforms, to detect when something like that happens, or even prevent it if such a case presents itself. So that's what I would say when it comes to the attacks that are on the increase. And I think you're going to start seeing an explosion of that with the usage of AI, which is what, I'm sure, you're going to lead me to eventually in this conversation.
[00:03:16] Speaker C: So given your level, what do you think? So you said CISOs, CIOs, et cetera. What do you think their general consensus is in terms of what they're worried about? Because you've obviously worked across multiple different sectors and clients, I was always curious to get a bit of a synthesized view from yourself.
[00:03:33] Speaker A: See, from an enterprise level, it's really exfiltration of data or exfiltration of IP. Right. They are trying to make sure that data doesn't leave the company inadvertently, data that you didn't want to leave your company. On the other hand, state sponsored actors are attacking the enterprises through, you know, their DMZ zones or whatever the case may be, in order to stay dormant, to attack and leak information again. Right. So at the end of the day, it's all about leaking of IP or leaking of confidential data from an enterprise, which is what a CISO worries about. At a personal level, I don't want my Social Security number, or the equivalent in Australia, for example, to be out in the open market so that somebody can impersonate me. So I think it works at all levels, depending upon whether you're talking about a company or talking about it at an individual level.
[00:04:34] Speaker C: Well, I would say at a company level, because, you know, these guys, first of all, have got to hold the responsibility. Second of all, they're getting people questioning them about what's happening. So I'm always curious to understand how they sort of see it. And as you know, there are more vendors than ever; people are calling them up all the time. They're trying to run their own sort of business, keep the lights on. And then again, they're trying to also think forward about AI and looking into that. So would you say that they appear overwhelmed more than before? Because, like I said, there's so much information out there nowadays, it's hard to really understand who's sort of saying the right thing, et cetera.
[00:05:11] Speaker A: The good CISOs don't feel overwhelmed. For example, Brent Conran, Intel's CISO, is not overwhelmed. He understands what level of risk Intel can tolerate. He relies on defense in depth. Right. You cannot have a binary solution trying to protect you at the identity level, or when somebody's trying to do a denial of service attack on you, or somebody's trying to exfiltrate information from the company outbound. So he has these levels of protection across the entire span of zones that Intel has, from inside the company all the way to outside the company. And he does a good job of making sure that the more secure information is protected at the highest level, and he has gradations of protection as he moves forward. I don't think they are overwhelmed. I think they definitely see the risk of AI, because just like people are using AI for good, there are bad actors out there who are going to use AI for not so good, and they have to protect themselves against that.
[00:06:17] Speaker C: Okay, so maybe to elaborate on the overwhelm comment, the question would be what I'm hearing. I mean, in our part of the world, maybe because we're Australians and we're behind, as everyone keeps telling us, would be that a lot of people out there are talking about AI, but they're like, well, how do we use it? What are some of the use cases? That's come up twice already this week in discussions with people of that size and title. So, I know AI has been around for a while, et cetera; it's becoming more ubiquitous, more vendors and service providers are talking about it more than ever, and we're at the coal face of really understanding that. So do you think it's more like, hey, now we've got a new sort of landscape to traverse in the AI world? Would you think that's part of what's adding to the overwhelm? Because there's not necessarily, like you said, a binary sort of answer to solving this.
[00:07:08] Speaker A: Let me give you a personal anecdote and then I'll tell you my point of view on how I look at AI. So when I was going to school in the United States, KB, I did a master's thesis and it was based on AI. This was about 30 years ago, right, where we were looking at natural language processing, we were looking at expert systems and all that. When I was looking for a job, I buried it so deep in my resume, because I knew I wasn't going to get a job on the basis of my AI expertise. And look where we are 30 years later, where AI is everything. People are using AI. It's not that companies are not using AI. For example, companies have their own internal AI systems, based on data that they are able to curate, or allow their employees to curate, to drive results faster. There is AI in development, there is AI in validation, there is AI in finding security vulnerabilities faster. So there are usages of AI like that happening right now, because you see efficiencies while using AI. Now the question is, when you take AI from within a company to outside the company, or when hyperscalers like Google or Microsoft are offering AI solutions for people to use, that's when you have to really think about, okay, how is my data going to be protected when I use an external infrastructure like that? And there are different ways. And this is where I believe confidential AI plays a role in protecting data for companies or for institutions that want to protect their data, where confidential AI is really the confluence of confidential computing and AI. If you look at confidential computing, the very simplest definition of confidential computing is how to protect data in use. Because if you look at the states of data, there are three of them, right? You either have data at rest, or you have data in motion, or you have data in use. For data at rest, you have encryption algorithms that protect it. For data in transit, you have TLS algorithms that protect it end to end as you are transferring information from point A to point B. And for data in use, there has never been any protection like that, because once you decrypt your data from storage and you're operating on it, it's all in clear text. That's when Intel led the market to define confidential computing with trusted execution environments. We said, hey, you can actually do trusted operations inside a TEE, or trusted execution environment, because it's very important for you to protect that operation from prying eyes. So that was a very simplistic concept that Intel talked about, about five, maybe eight years ago. And now with the advent of AI, which is all based on data, right, there is data coming in when you are learning and data going out when you're inferencing, how do you make sure that you are protecting that entire AI pipeline, all the way from learning to when it's getting inferenced? And there are several ways in which you can actually collaborate when it comes to AI. For example, I can have my data shared with you. Let's say you are a company, I'm a company, and we want to put our data together to get the best results out. I can send my data in an encrypted form, along with your data, to a data clean room, where the processing actually happens inside this box, which is invisible to me and invisible to you. The operation happens inside this opaque box, and the results come back to me and the results come back to you.
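Backing up to the three states of data for a moment, here is a minimal Python sketch of the first two protections Anand names and the gap that confidential computing fills. It is illustrative only, not Intel's implementation, and assumes the third-party `cryptography` package for the at-rest step.

```python
# Illustrative sketch of the three states of data (not Intel's implementation).
# Requires: pip install cryptography
import ssl
from cryptography.fernet import Fernet

# 1. Data at rest: encrypt before writing to storage.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"patient record #1234")

# 2. Data in transit: wrap connections in TLS so bytes are protected end to end.
tls_context = ssl.create_default_context()  # verifies certs, negotiates TLS

# 3. Data in use: the moment we decrypt to operate on it, it is plaintext in
# memory -- visible to a privileged attacker unless the computation runs
# inside a trusted execution environment (an SGX enclave or a TDX
# confidential VM, in Intel's portfolio).
plaintext = Fernet(key).decrypt(ciphertext)
result = plaintext.upper()  # any real processing happens on clear text
```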
Several companies are offering such data clean rooms right now; Opaque Technologies has a data clean room solution, for example. So that's what we call collaborative AI in the space of confidential AI. The other way to do it would be not to send the data, but to keep the data within yourself, where the actual learning algorithm comes to you: it operates on my data, it operates on your data, and then the learnings or the inferences get put together in a central location, without giving visibility of my data to you or vice versa, to improve the inferencing algorithm. And it keeps doing that in order to learn from my data and infer from it without me having to give you access to my data. Right. So there are ways in which protecting or securing the data in the AI pipeline is starting to happen. The problem is, in my opinion, that AI is like this very fast car that people know will get you from point A to point B very quickly. The car has a steering wheel and four tires and places for people to sit in to go from point A to point B very fast. But there are no seat belts, there are no indicators, and there's no brake; there's only an accelerator. So we have to be very cautious as we use AI, especially when it's operating on enterprise class data or our IP, because you want to make sure that the results or the data don't get attacked by prying eyes or actors that you don't want accessing your data.
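The second pattern Anand describes just before the car analogy, where the algorithm travels to the data and only the learnings are aggregated, is essentially federated learning. A toy sketch under simplifying assumptions (a linear model, plain averaging; real deployments add secure aggregation and would run the training step inside a TEE):

```python
# Toy federated-averaging sketch: each party trains locally and shares only
# model updates, never raw data. Assumes a linear model for brevity.
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient step on a party's private data (X and y never leave)."""
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
# Two parties, each holding its own private dataset.
parties = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(2)]

weights = np.zeros(3)
for _ in range(100):
    # Each party computes an update on its own data...
    updates = [local_update(weights, X, y) for X, y in parties]
    # ...and only the updates are averaged centrally.
    weights = np.mean(updates, axis=0)
```

Note that the central party only ever sees `updates`, never the datasets in `parties`, which is the property Anand is describing.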
[00:12:53] Speaker C: Okay, what you said there is really interesting. Okay, I want to get into this a bit more. So you used a car example around, you know, the steering wheel and all that, and that you've got to be cautious. So do you think, with your experience, people are being cautious, or do you think they're trying to get there as fast as possible because it beats their competitor, et cetera? What are you sort of seeing on that front?
[00:13:13] Speaker A: No, see, I'm a security professional, so I am trying to increase the awareness for people: hey, if you want to use AI, make sure you're doing confidential AI. And that's why we are going out; we have an ecosystem enablement strategy where we go out and work with the ecosystem. See, I'm a hardware provider. At the end of the day, I give technologies in my hardware and I say, hey, if you want a trusted execution environment, you have two options. You have Intel SGX, which allows you to protect your operations in a more granular fashion, or, if you want VM level protection, Intel TDX, which gives you a confidential VM. So as a user who is processing data or AI data, we give you two options to use in order to protect your data and protect whatever operations you're trying to do. And on top of that, Intel gives you a third party attestation service which is not attached to the party that is providing you with the infrastructure, like a Microsoft or a Google. It's an independent third party attester that says, yep, the operation that you said was happening actually happened inside a trusted execution environment provided by Intel. Right. So that is what we are trying to tell the world: gone are the days when you can operate things without confidential computing. Again, I'll take you back historically. You've been in this industry for a while, KB. We used to transfer information between people without ever paying attention to whether that was a secure link or not a secure link. I was at least trained to look for, hey, make sure it's not an HTTP connection, that it's an HTTPS connection, which is a more secure connection, before you sent anything. We are in that stage with computing, where there is computing and there is confidential computing. And in my opinion, confidential computing is on the path to becoming ubiquitous, once all the hyperscalers and other instances in the enterprises do all of their compute inside a confidential computing environment, inside a trusted execution environment. And with that you get the safety of confidential AI: you know that any AI operation that you're trying to do, either during learning or inferencing, is being done inside a trusted execution environment.
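The attestation step is worth making concrete. In rough terms, a relying party sends a nonce, the TEE returns a signed quote over its measurement, and the verifier checks the signature chain and compares the measurement against the build it expects. In the sketch below, the `Quote` layout and `verify_signature_chain` are hypothetical placeholders, not a real SDK; Intel's hosted offering in this role is its independent attestation service, Intel Trust Authority.

```python
# Schematic relying-party check of a TEE attestation quote. The Quote layout
# and verify_signature_chain() are hypothetical stand-ins for a real
# attestation library, not an actual Intel API.
from dataclasses import dataclass

@dataclass
class Quote:
    measurement: bytes  # hash of the code/data loaded into the TEE
    report_data: bytes  # caller-supplied nonce, binds the quote to this session
    signature: bytes    # produced inside the TEE, chains to the CPU vendor

def verify_signature_chain(signature: bytes) -> bool:
    # Placeholder: a real verifier walks the certificate chain back to the
    # silicon vendor's root key.
    raise NotImplementedError("stand-in for a real attestation library")

def attest(quote: Quote, nonce: bytes, expected_measurement: bytes) -> bool:
    if not verify_signature_chain(quote.signature):
        return False  # not signed by genuine TEE hardware
    if quote.report_data != nonce:
        return False  # stale or replayed quote
    return quote.measurement == expected_measurement  # running the code we expect
```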
[00:15:37] Speaker C: Okay, so you said before that confidential computing is, like, on the path. So how do we get it more on the path then? Obviously there's some work to be done on the hyperscaler front, et cetera; it's just going to take a little bit more time, and obviously interviews like this help. But do you think it is going to become just a thing that we do now? Look at the previous waves of evolution, like virtualization, cloud computing, AI, quantum, et cetera, that are sort of coming up. Do you think this is just going to be part of it?
[00:16:03] Speaker A: No, all hyperscalers have stated that they want to get to ubiquitous computing with confidential compute. It's happening in China, it's happening with Google, it's happening with Microsoft. You listen to Mark Russinovich, who is the CTO of Azure. He talks about that: hey, there will be a time, very quickly, when all computing on Azure will be confidential. There will not be a difference between "this is a confidential computing environment" and "this is not." Everything is going to be confidential compute. We are all collectively, as an industry, on that journey. AI is helping accelerate that; it's pouring fuel on that, because it's such a huge use case that everybody wants to get there, because nobody wants their confidential data leaking out because somebody's processing it with AI algorithms.
[00:16:54] Speaker C: So would you say, though, that everything we're talking about now makes sense, but I'm assuming that not everyone out there understands this and knows it's going to be ubiquitous. Is it just going to be a matter of explaining it to them in various formats like this so they can understand? Because sometimes we have this assumption in security, but also in technology, that everyone thinks the way we think and has the same level of experience and knowledge that we're exposed to.
[00:17:19] Speaker A: Yeah, no. As an industry, I think we're not doing a stellar job of communicating what confidential computing is or what confidential AI is. I think we're getting there. Technologists are talking about it, CISOs are trying to understand, and CIOs are trying to understand what it is. Because right now it's primarily been a cloud play. It is slowly transitioning from just being a cloud play to being an enterprise play, and there is a lot of heavy lifting that needs to be done in that environment to get it there. So that's one case. The other thing I would say is, when you think about AI, you immediately think about Nvidia and think of a graphics processor, right? You think all AI happens on the GPU. Yes, it does. But at the end of the day, it's really the combination of the CPU and a GPU, like one from Nvidia, that allows you to do confidential AI. And Intel is working with the ecosystem of partners to deliver a solution where information from the CPU to the GPU is protected on the wire, and it's protected with hardware components that Intel announced just last week, called TDX Connect. The whole idea is, if I'm doing any processing inside my CPU and I need to accelerate it with a GPU, because there is AI processing that needs to be done, I can send it over this encrypted wire to the GPU. The GPU does the processing very quickly and sends the results back. All of that is encrypted, and you get independent attestation, irrespective of whether the processing happened on the CPU or the GPU. You get one attestation that says the entire pipeline has been protected. Now, this is available in the hardware, but the ecosystem is going to take a while to adopt this and deliver the ubiquity of this connection to all the enterprises. So the reason I'm telling you this is that there are lots of innovations starting to happen, and AI is accelerating that, but it's going to take time for it to be available to all the hyperscalers and all the enterprises, and to have all the operating systems enabled to support it. That's the journey we are on as an industry, and as a subsegment within that industry that focuses and pays attention to confidential computing and confidential AI.
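To picture the TDX Connect flow Anand outlines, here is a deliberately hypothetical Python sketch; none of these classes exist in any real SDK. The shape is the point: one encrypted CPU-to-GPU channel, and a single attestation covering the whole pipeline.

```python
# Hypothetical sketch of the flow described above; ConfidentialVM and
# EncryptedGpuChannel are illustrative stand-ins, not a real API.

class EncryptedGpuChannel:
    """Models the encrypted, integrity-protected CPU<->GPU link that
    TDX Connect provides in hardware."""
    def offload(self, workload: str) -> str:
        # Data would stay encrypted on the wire in both directions.
        return f"gpu_result({workload})"

class ConfidentialVM:
    """Models a TDX confidential VM that can attach a trusted GPU."""
    def attach_gpu(self) -> EncryptedGpuChannel:
        return EncryptedGpuChannel()

vm = ConfidentialVM()
channel = vm.attach_gpu()
result = channel.offload("model_activations")
# A single attestation report would then cover both the CPU TEE and the GPU,
# so a verifier sees one protected pipeline rather than two islands.
```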
[00:19:44] Speaker C: All right, so I want to explore this a little bit more, in terms of the other side, from your point of view. What do you think people just don't get about confidential AI, or get wrong? Perhaps you can correct some of the assumptions that may be out there.
[00:19:59] Speaker A: I think the biggest thing is, I don't think people even think about confidential AI. I think it's only people like me who think about confidential AI right now, which, as I said, is a problem. I think people look at AI and they go, wow, how can I use this to accelerate what I'm doing, without even paying any attention to, am I giving my personal data to the AI algorithm that I'm trying to use? I don't think the layperson is even thinking about that. Enterprises are thinking about it, because they are trying to put processes in place to prevent that from happening. At Intel, our CTO, Greg Lavender, talked a year ago about this Venn diagram of security and AI, where he said there are two things happening in the industry right now: you need security for AI, and you need AI for security. So let me elaborate on what I mean by that. When I say security for AI, it's: how am I protecting any AI algorithms that I'm running? How do I make sure that I'm doing a good job of protecting that? How am I using models, and data across models, with clear provenance? How am I enabling that to happen? Which is where technologies like Intel TDX, Intel SGX and all that come into play. The other place where companies like Intel are also focused on security for AI is to make sure that any AI products we deliver have robust security assurance done on them. We do all the processing internally to make sure that if there are any SDL threats we have to prevent, we do, and that we are not using any open source library which may be attacked if it gets included in a product. So there's a lot of governance and provenance that companies have to do to prevent the bad usage of AI in our products. This is what I would call the swim lane of securing the AI. Then you can also use AI for security. For example, Intel has a product called Threat Detection Technology, which uses AI on endpoints, like on a vPro platform, so that it's able to identify that there's a ransomware threat happening and give those indications to an EDR vendor like CrowdStrike or Microsoft Defender or anybody like that. So there are technologies like that, that Intel has and the software vendors have as well. And then also, how do you use AI to accelerate security assurance for your products? How do you make sure that you are doing AI enhanced threat modeling? Remember, I told you AI could be used for good as well. I can use it for security assurance like this, where I can look at how to use it for fuzzing, how to use AI expert systems for general purpose security assurance. So that's how we look at AI at Intel: using it for security, or securing the AI systems themselves.
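As a toy illustration of the "AI for security" swim lane (the general pattern only, not Intel's Threat Detection Technology): an unsupervised model learns what normal endpoint telemetry looks like and flags outliers, which is roughly the kind of signal an EDR would receive. This assumes scikit-learn and NumPy, and the feature choices are invented for the example.

```python
# Toy 'AI for security' example: flag anomalous endpoint telemetry.
# Not Intel TDT -- just the general pattern of learning 'normal' and
# surfacing outliers. Requires scikit-learn and numpy.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Features per time window: [file writes/sec, CPU %, pages encrypted/sec]
normal = rng.normal([20, 30, 0], [5, 10, 0.1], size=(500, 3))
ransomware_like = np.array([[400, 95, 250]])  # mass-encryption burst

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(ransomware_like))  # -1 => anomaly, raise an alert
```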
[00:23:11] Speaker C: Okay, this is interesting. So, this was coming to my mind as you were speaking. Just to go back for a second, you mentioned that no one's thinking about this. So I appreciate your honesty, but just to go down that path a little bit more, around people even thinking, hey, I'm uploading, like, sensitive information. I'll give you an example. It's a real basic one, but I was scrolling on Instagram reels and there was a doctor that was like, hey, if you go and get your test results, what you can do is upload them to ChatGPT and it'll give you some high level synthesized view of what that means. And people in the comments are like, yeah, that's such a good idea. So it's like, are people just not even aware, even at a consumer level? Like, hey, it's probably not a good idea, because we don't know about the confidentiality, how that's being managed.
[00:23:57] Speaker A: Absolutely. Because that Instagram reel that you just looked at doesn't want to talk about that, because that just causes friction for somebody using that little widget. Right. Why do you think countries are putting safe AI usage acts in place, or the EU has the AI Act for safe usage of AI? I'm sure Australia has something similar to that. Because they're trying to create that awareness at a country level: hey, be careful before you give your life away to an AI bot, and then, before you know it, your whole life is on the Internet, right? So I think that awareness is going to come slowly. Which is why, if I continue my car analogy, in countries like India, where I initially grew up, seatbelts didn't come till, I think, like 10, 15 years ago; I had left the country by then. Because nobody thought that was essential. I think we are in that same phase with AI and the usage of AI, where we look at what we believe to be a tool that helps us become more efficient, but we don't have any understanding of how to make sure that we are protecting our data, protecting how we use it, protecting the result, in a way that doesn't exfiltrate any of the data that we are putting into an AI system. And I think we as a race need to be really aware and be sensitive to that. I think people are trying different ways to increase that awareness, and it's going to come. Because in the example that you gave me about the Instagram reel, people are not thinking about who else will have access to that data. Maybe it's a bot from a state sponsored agency and you don't even know.
[00:25:43] Speaker C: Okay, so there's two things in that. So, going back to your seat belts analogy, I have spoken about this before as well. Originally, cars were made with no seat belts, and there were accidents, people started dying, and then they said, we've got to put seat belts in. So do you think that with AI, to continue your analogy, people are going to have issues, and incidents are going to occur, until maybe they start thinking a little bit more heavily about it? Sort of like when you're a kid and your mom's like, hey, don't touch the stove, it's hot, and then you touch it and you get burnt and you don't do it again. Do you think, unfortunately, that may have to be the case until people start learning and maybe thinking twice about what they're doing?
[00:26:19] Speaker A: I think security events in the past are teaching us to be careful. For example, even though there are phishing attacks out there, and I still see phishing attacks increasing, they could be even worse if nobody was doing education around phishing attacks. Right. I think people are using that learning to be proactive about this and educating where they can. You know, I go out and talk to people about it, just from the technology perspective, but also how it can be used. The hyperscalers are trying to offer this. I think there is enough awareness, at least in the technology industry, to start talking about the goodness of doing AI processing in a confidential computing environment, and not doing it just in an open environment. Right. So I have a hard time saying that something really, really bad has to happen for people's behavior to change. I hope we as a collective learn from some of the issues that we have encountered when we are not secure, or we are not paying attention to instructions that are given to us, and the bad results that come out of it, to make sure that we are being cautious and being aware before we just give our personal information to an AI chatbot.
[00:27:33] Speaker C: So then, just to extend on that more. You know how we're sort of talking about how people have to be more cautious about what information they give away. What I've been seeing online, as part of the reconnaissance that I do in my job, is people saying, oh well, who cares? My information's already out there because I've been involved in three data breaches. Or, I don't care anymore because I want to trade my privacy for convenience. So would you say perhaps people (companies are a different issue, because they've got regulation, standards and GDPR), but going back to the consumer front, do you think that people are just becoming desensitized now? It's like, oh well, it's another breach, I'm already involved in three already, who cares? I'm hearing that sort of chatter online. What are your thoughts about that?
[00:28:14] Speaker A: It's a hard question for a security professional to answer, because I constantly worry about that, right? So when I get an email that I don't recognize, I make sure that I don't click on it without understanding the provenance of what I got. But I can understand if there is a layperson who hasn't been impacted by a security breach who goes, hey, I don't think this is going to impact me, because my name is already on three other breaches. Till it hits you personally, when people are actually doing a phishing attack on you or have taken money from you, you will not become more careful. And once that happens, I've seen people become immediate believers in making sure that they don't give access to their information willingly. Because when it doesn't impact you, and you just hear, oh yeah, your name appeared in a data breach of 30,000 records, it's not real to you at that time, right? It's just something out there, and you still are risking it. For example, when you buy insurance for a car, you don't say, I don't want insurance, because what are the chances of me getting into an accident? You buy it because you want to protect yourself. And I think we need to have a safer mindset with information and with data, similar to what we have when we're driving a car, continuing the car analogy.
[00:29:33] Speaker C: So, moving on now to infrastructure that is trustworthy. Keen to understand, what do you mean by that? Maybe define trustworthy. How do you see it?
[00:29:45] Speaker A: See, I can extend the definition of trustworthy to a confidential computing environment, right? I can say a confidential computing environment is a trustworthy environment because, you know, nobody will have access to your data while it's being operated upon in the cloud, in the enterprise. And that is a trustworthy environment. Where I'm defining trustworthy as when data is being processed or in use, it is not accessible to anybody outside of the people that were intended to use it.
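One practical footnote on what "trustworthy" looks like from inside a workload: on recent Linux kernels, certain guest device nodes hint that you are running in a confidential environment. Treat the paths below as assumptions to verify against your kernel version, and the check itself as a heuristic only; the real proof is remote attestation, as discussed earlier.

```python
# Heuristic check for a confidential-computing environment on Linux.
# Device paths are as exposed by recent kernels (an assumption -- verify
# for your distro); presence is a hint, remote attestation is the proof.
from pathlib import Path

TEE_HINTS = {
    "/dev/tdx_guest": "Intel TDX confidential VM",
    "/dev/sev-guest": "AMD SEV-SNP confidential VM",
    "/dev/sgx_enclave": "Intel SGX enclave support on the host",
}

for path, label in TEE_HINTS.items():
    if Path(path).exists():
        print(f"{label} detected via {path}")
```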
[00:30:19] Speaker C: So then how do you think companies should be approaching this, in terms of maybe their procurement process, et cetera? Is there any sort of framework that people should be following or adhering to, in your experience?
[00:30:31] Speaker A: See, these technologies are showing up as default in all of Intel's processors, right? I can't go out and buy a processor without TDX or without SGX. It's just what is available. The hardware is not the issue, and will not be the issue even in the future. Maybe it comes a little later here or there; one company may deliver something faster than another, but eventually the market is moving to this. The challenge for trustworthy computing is, how does it get deployed inside an enterprise or inside a cloud? That is the challenge that this industry is seeing right now, because that ubiquity isn't there yet, and it's slowly starting to happen. And once that ubiquity is reached over the next n number of years, then it'll become clear that there will be no difference between, to just use your parlance, KB, a trustworthy environment and a non-trustworthy environment. Because everything will be trustworthy at that point.
[00:31:35] Speaker C: Okay, so then how would you sort of get it deployed in an environment?
[00:31:40] Speaker A: If I look at the cloud environment, the hyperscalers will just deploy it, and they'll say, hey, every compute that happens in my fleet is confidential compute, right? There is no difference; I'm not going to do anything that is not confidential compute. We're not there yet, but that's the path we're moving on. Similarly, in an enterprise, enterprises would say, hey, if I want to operate on some data, or I'm going to allow my customers or my employees to operate on data, I will do that in a confidential environment, even within my enterprise. Right? And in order to enable that, the hardware is just one element of it, but then you also need to deliver the entire compute stack on top of that: the VM level, the operating system, and everything else that a hyperscaler has to put in place for the functioning of their fleet. That's how they are going to start communicating it.
I'll give you a couple of real life examples of how this is getting deployed right now, to make my point. So it's starting to happen; it's just not there yet completely, in a ubiquitous manner. Microsoft announced that they deployed their entire credit card processing in the cloud, in Azure confidential computing. Protected by SGX, it processes $25 billion a year of credit card transactions, and it's a PCI DSS Level 1 compliant solution. They save money by putting it in the cloud and not having to update their systems on prem, because it's more expensive for them to do that. That's one prime example, and you can read about it; I'm happy to send that link to you offline. The other example is e-prescriptions in Germany, as part of their national health act. German citizens can get their e-prescriptions in a confidential compute environment, and that system processes about 2 million prescriptions a day.
So people are understanding the importance of protecting the data. As more and more of these examples come out, it is going to become standard operating procedure that any data that is acted upon, or data in use, is going to be protected by a confidential compute environment. Right. And it is our job as the industry to talk about this. We do that in conferences and forums, talk to CISOs, talk to CIOs about it, and that awareness is starting to get generated. And as you see more and more of these examples come out, it's going to become more relevant. It's not just some esoteric use where somebody went into a garage shed and did the work. These are mainstream applications that are getting deployed at either a company level or a country level.
[00:34:24] Speaker C: So, given everything that we've spoken about today, I'm keen now to understand: where do you think we go from here? What do you think happens now with the industry? Anything you can share about what's on the horizon? Of course, we're going to want to see confidential AI really come into the fold. But what are your thoughts?
[00:34:39] Speaker A: Looking ahead, see, the technologists will continue to deliver the technology, and I think we need to collectively talk about that more and more, and talk about the before and after: what would happen if you didn't have confidential AI, and what happens when you have confidential AI. And it will create that flywheel effect of awareness, when more and more people, more and more enterprises, more and more companies are going to be using that. Because, like I said before, AI is the fuel that is going to drive confidential computing, because there is just so much data processing that needs to be done. So there needs to be availability, as I told you; that needs to happen. The systems have to be ready in order to do confidential computing at scale and ubiquitously, and awareness needs to be generated that if you don't do it, it is going to cost you as a company, your data getting exfiltrated, potentially. So I think those are the two things that need to go hand in hand as we get to that nirvana state of ubiquity.
[00:35:39] Speaker C: So, Anand, any sort of closing comments or final thoughts you'd like to leave our audience with today?
[00:35:45] Speaker A: The biggest thing I would say is: think about how you are using AI, and make sure that you're using AI in a safe and secure manner. The industry is working very, very feverishly; companies like Intel are delivering hardware technologies in order to make sure that computing is secure and confidential, and the ecosystem at large is putting the bits together in order to deliver entire solutions which allow you to have a safe experience with AI. It's not a question of if, it's really a question of when all of that is going to be available at scale.
[00:36:28] Speaker B: This is KBKast, the voice of cyber.
[00:36:32] Speaker C: Thanks for tuning in. For more industry leading news and thought provoking articles, visit KBI.Media to get access today.
[00:36:41] Speaker B: This episode is brought to you by MercSec, your smarter route to security talent. MercSec's executive search has helped enterprise organizations find the right people from around the world since 2012. Their on demand talent acquisition team helps startups and mid sized businesses scale faster and more efficiently. Find out [email protected] today.