Episode 251 Deep Dive: Mandy Andress | Charting the Path of AI Innovation and Security

KBKAST

Mar 27, 2024 | 00:38:41

Show Notes

Mandy Andress is currently the CISO of Elastic and has a long career focused on information risk and security. Prior to Elastic, Mandy led the information security function at MassMutual and established and built information security programs at TiVo, Evant, and Privada. She worked as a security consultant with Ernst & Young and Deloitte & Touche, focusing on energy, financial services, and Internet technology clients with global operations. She also founded an information security consulting company with clients ranging from Fortune 100 companies to start-up organizations.

She is a published author; her book Surviving Security has seen two editions and is used at multiple universities around the world as the textbook for foundational information security courses. Mandy has also tested and reviewed information security products for multiple publications, as well as writing the weekly InfoWorld security column. She has been a sought-after expert in the field, speaking at signature security conferences such as Black Hat and NetWorld+Interop. In addition, she has taught a graduate-level Information Risk Management course at UMass Amherst in the College of Information and Computer Sciences.

Mandy has a JD from Western New England University, a Master's in Management Information Systems from Texas A&M University, and a B.B.A. in Accounting from Texas A&M University. Mandy is a CISSP, a CPA, and a member of the Texas Bar.


Episode Transcript

[00:00:00] Speaker A: I think there's a lot of debate right now on the speed of innovation and how we do that in a mindful way, in an ethical way. And we look at a lot of things that we're trying to address in that world. And I believe that, at least at the moment, there will be a bit more practical approach in moving forward. On the flip side of that, certainly from a security perspective, threat actors aren't taking that same approach. They are quickly researching and understanding what they can do and how they can leverage it. And so it's going to be, again, that balance of how do we move forward comfortably, safely as a society, knowing that there will be parts of society that don't follow those rules. And how do we balance that?

[00:00:53] Speaker B: This is KBKast as a primary target.

[00:00:57] Speaker A: For ransomware campaigns, security and testing and performance, risk and compliance.

[00:01:02] Speaker B: We can actually automate that, take that.

[00:01:04] Speaker A: Data and use it.

[00:01:07] Speaker B: Joining me today is Mandy Andress, CISO of Elastic, and today we're discussing how to securely integrate enterprise data with OpenAI and other LLMs. So, Mandy, thanks for joining and welcome.

[00:01:19] Speaker A: Thank you. It's great to be here.

[00:01:21] Speaker B: Now look, this is such a big topic, and I was literally just in an interview before this talking about AI. I really want to start with your view on where people are at on this from your perspective, and what you're hearing from customers and the broader community.

[00:01:37] Speaker A: Yeah, I think AI has certainly been a hot topic and everyone's been looking into it. A lot of the folks that I'm speaking with, whether it's other CISOs or folks looking at how they could utilize AI within their business, are in the "let's try to figure out where it makes sense for us to use it" stage. A lot of investigation, a lot of learning, a lot of trying. There are just a lot of different things happening in that space these days to see what customers react to, what provides value, and how they can best leverage it.

[00:02:08] Speaker B: Yeah. Okay. So a couple of things then on that point. You said best leveraging it, and maybe you have a better view on this than me: do you think that's the part people are still confused on, how to leverage it? Because it's some of the things I'm seeing in interviews, but also in content from other media publishers out there, and from people on social media and friends. Do you think that's still a question mark for a lot of organizations and a lot of people?

[00:02:33] Speaker A: I think there are still a lot of questions on what the true current capability of AI and Gen AI is, when we take into account some of the security concerns and privacy and data risks that go with it, but more specifically, how it could work within their environment. So we hear a lot of discussion on customer service and customer interactions; customer support is a significant use case where a lot of organizations are trying Gen AI technologies out. Security as well: being a CISO, there's a lot of additional, faster, and improved analysis that we're researching and trying out to see if it will be able to be a benefit to us. And beyond that, there's just a lot of creativity and a lot of interest. So I think there are still a lot of outstanding questions, as you referenced, just on how we could use it.
But I think a lot of that will become clearer over the next year, at least for some of the initial use cases. And then I'm always fascinated by the creativity of what people are able to find and use technology for.

[00:03:39] Speaker B: So let's follow the creativity comment along a little bit more. What do you envision happening over the next twelve months? And you are right, I think there are still a lot of outstanding questions, probably because it's still relatively early days, even though AI has sort of been around. Now that it's a little more ubiquitous, of course people are asking those questions, and as it rolls out to more consumer-based, everyday people, perhaps those are the questions that are coming through from an organization standpoint. But where do you see the creativity side of things panning out over the next twelve to 18 months?

[00:04:11] Speaker A: I think the creativity is going to continue to significantly increase. I listen to a lot of podcasts, and I've heard a lot of podcasters utilizing Gen AI and making it available to their listeners to search through or get information from the back catalog of previous episodes they have had. I've also heard of folks looking at it to improve how they research, or I should say start to research, to see how it could expand all of the areas they would look into and give them insight into pieces or sources they did not have any visibility into before. And that ranges from some of the most basic implementations to significant ones from larger organizations, whether it's consumer facing or business-to-business interactions. To me, it's going to go one of two ways over the next twelve to 18 months. Either there's going to be a significant amount of growth and implementation at speed; a year ago we didn't anticipate that we were going to be talking this much about Gen AI, and if that speed continues, there are a lot of things we're going to be talking about a year from now that we can't predict and just can't see coming our way. So it's either going to take that route, or we're going to hit the point where, okay, we talked about this a lot, it's really interesting, but we're just not quite finding the use cases yet. I don't think there's going to be a lot of middle ground.

[00:05:38] Speaker B: So when you say middle ground, what do you mean by that?

[00:05:40] Speaker A: More of the: we're implementing it, it's beneficial, and we're leveraging it; we have good uses for it, and this is how it's going to work in our environment. I think we're going to have folks that are continuing to try new things, continuing to have those creative ideas, or it's going to be, hey, we have a lot of ideas and the technology is just not quite there yet for what we want to do, which will drive all the further innovations that we know will be coming.

[00:06:12] Speaker B: So I'm curious, just to go back a step on your example around Gen AI and podcasting, considering I have a podcast myself. What does that look like from your perspective? How can people start leveraging that? I find that really interesting and I'm keen to hear more.

[00:06:25] Speaker A: Yeah, the ones that I've heard talking about it and utilizing it are using chatbots, whether it's ChatGPT or whatever model and framework they want, or building their own off of their library and back catalog.
They have a website related to their podcast, and they just put it there and allow people to search through and get the information, some behind subscriber walls if they have that set up with their podcast. We talk about data and search being the core of how we understand and sift through information, and what AI has provided to us is much more realistic and reasonable natural language processing: the ability to interact with a system much more similarly to how we interact human to human. So it's not having to understand syntax and rules and certain parameters, as with prior search; it's just asking a question the way you would wonder it. I have three kids, they ask me lots of questions every day, and it's now easier for them to just go type their questions in and get an initial answer.

[00:07:39] Speaker B: Yeah, absolutely. And I think that's the power of large language models, because you can ask those everyday questions rather than asking in a certain syntax, which maybe is more technical, that everyday people just won't know how to pose. Is that where we're going to see a shift now? Even on the chatbot front, that's really interesting, something I need to look into myself. But does that mean people will just do that in lieu of listening to, for example, this podcast directly, when they can just get the synopsis? Or how do you see that panning out?

[00:08:12] Speaker A: I don't see it as a replacement. I know for myself, there are podcasts that I love to listen to. I listen to all of them, but there are things I remember hearing where I don't necessarily remember exactly which episode. And so if I want to go back and share it with a colleague or someone I think would find it interesting, the ability to search for exactly where the topic was covered helps speed up that process and makes it so much easier, rather than trying to sift back through lots of episodes to figure out exactly where I heard that information.

[00:08:46] Speaker B: Let's focus more now on the company data side of things, organizations leveraging the AI component of it. What are you seeing in that space?

[00:08:58] Speaker A: I see a lot of interest. A lot of organizations these days have tremendous amounts of data and they want to make use of it. They want to understand how they can take advantage of all of that data and continue to further the success of their business. And so it's a combination of looking at: do I utilize public LLMs, things that have been trained on more broadly, publicly available data? Do I want to build my own, or do I need to build my own with my internal company data and models off of that? So a lot of it is use-case dependent, on how they want to move forward and what they're trying to achieve. It is also risk based: what is the risk appetite of the organization, and what are the concerns related to data protection or privacy and general issues like prompt injection and model poisoning, and how much control do they want? And then tied into all of that is the compute power that's sometimes necessary depending on the size of your data pool. If it's significant, there's some significant computing power that you'll need to build those models and do that analysis.
And then how do you augment all of that? That's where vector databases come in, being able to add those components and help further manage it. If you don't want to build your own model, vector databases, of which the Elasticsearch platform is one, allow you to have all of that retrieval augmentation. You can pull something from a public model, like OpenAI's ChatGPT, and then augment that with more specific, company-relevant information, so you're able to take advantage of what's public and already out there and trained, and then add into that your company-specific data. It helps you be much more precise and much more applicable to your specific environment and use case.
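To make that retrieval augmentation pattern concrete, here is a minimal Python sketch, assuming company documents have already been embedded into an Elasticsearch index with a dense_vector field; the index name, field names, and model choices are illustrative assumptions, not a description of Elastic's product.

```python
# Sketch of retrieval-augmented generation (RAG): pull company-specific
# context from a vector index, then let a public LLM answer with it.
# Index name, field names, and model names here are illustrative.
from elasticsearch import Elasticsearch
from openai import OpenAI

es = Elasticsearch("http://localhost:9200")
llm = OpenAI()  # expects OPENAI_API_KEY in the environment

def embed(text: str) -> list[float]:
    # Embed the query with the same model used when the documents were indexed.
    resp = llm.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

def answer(question: str) -> str:
    # k-nearest-neighbour search over the embedded company documents
    hits = es.search(
        index="company-docs",            # hypothetical index of internal docs
        knn={
            "field": "embedding",        # dense_vector field
            "query_vector": embed(question),
            "k": 3,
            "num_candidates": 50,
        },
    )["hits"]["hits"]
    context = "\n\n".join(hit["_source"]["text"] for hit in hits)

    # The public model answers, grounded in the retrieved private context.
    chat = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer using only the provided context."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return chat.choices[0].message.content
```

A design note on why this pattern fits the conversation: the internal documents are never used to train the public model; only the handful of retrieved passages needed to answer are sent at query time.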
[00:10:55] Speaker B: So one of the use cases I want to focus in on now for a moment is, and you've heard it yourself obviously, employees uploading sensitive data into ChatGPT in order to perform their job better and faster, which I 100% get, and that's the path we're going down: it's not a replacement, it's going to be a tool. Perhaps people didn't think about what they were doing when they uploaded sensitive information into ChatGPT, for example. So my question would be, what's your view on how to ensure people within companies are working within the realms of OpenAI, for example? Because at the end of the day, people aren't necessarily thinking like a security person, especially if they're finance folks just trying to do their job faster so they can get home on time, et cetera. And everyone's saying we should be leveraging AI to do our jobs better and more accurately, but how does that look? I know it's still relatively early days and there are frameworks being developed at the moment, but there are still a lot of unanswered questions out there. So what can people do today to make sure there are guardrails around sensitive information being uploaded into ChatGPT?

[00:12:09] Speaker A: So I'll talk about this in the context of employees, or users in general, utilizing public models like ChatGPT to help, whether it's to start a report or do some research. And a couple things tied to that. One, I don't see all the AI or Gen AI security issues as anything new; it's largely not that different from similar issues that we as security practitioners have had to deal with over the last 20 years. The area that I most directly equate it to: I was at the Black Hat conference, the big security conference that happens in Las Vegas each year, and the big topic that year, 15 years ago or more, was what was called Google hacking. Google hacking was running specific searches and using data that was indexed into Google to identify sensitive company information, system configurations, data that could be utilized to further research and plan an attack, if that was what you were looking to do. And I don't see today's Gen AI as that much different from Google hacking. With Google hacking, it was controlling the index and what Google was indexing off of your site. With Gen AI, it's looking at the ways we can manage the input; with ChatGPT, you can make the selection that the data you submit is not utilized in training, so it's making sure that that type of configuration is enabled. A big piece of it is awareness for your users of what the impact could be and how that information could be utilized. Because the key thing is, you can block things like ChatGPT on your corporate networks, but you can't very easily control what folks are doing on their personal tablets, mobile devices, home laptops, and home computers. It's very, very accessible. And the biggest component is awareness and education, helping everyone understand what could happen, why it's important, and how it could affect them personally if something happens that impacts the overall success or the ability of the company to continue with its business model.

[00:14:35] Speaker B: So just going back to the awareness piece, I hear what you're saying. Do you think there are people out there that just don't think? Well, hey, I'm just, I don't know, arbitrary, I'm not in accounting or anything, or have never planned on working in that field, so perhaps I'm speaking out of turn here. But in terms of, I've got all this data, I'm uploading it to ChatGPT because, I don't know, I want to get the median wage across our company, for example. Do you think people wouldn't think, hey, it's probably not a good idea? Or am I just looking at it purely through a security lens, because that's what's in my DNA, and so maybe I'm not the right person to ask that question? I'm just always curious, though: do you think people are aware of what they're doing, but they think, hey, this is going to increase my productivity, so I just don't care, and I'm going to forfeit, potentially, the security side of it?

[00:15:23] Speaker A: I do think there's an amount of unintended consequences. Individual pieces of data that are being searched on and utilized, and that go into training, are potentially not an issue in and of themselves. But when you're working at the scale these LLMs are looking at, there's so much information available that you can't predict, or sometimes even comprehend, what may be in there and what connections could be made that you just can't anticipate. So that's where the best practice is to avoid having any company information utilized to train models, so that some of those unintended consequences are not as readily available to someone trying to search on different components. And then it's understanding more clearly, for your organization, what those key data points are, whether it's your intellectual property, sensitive data, or personal information, and where that is stored, and making sure the controls and protections are there, using the understanding of the impact of putting that type of information into anything that is not company-specified to continue to build that education.

[00:16:48] Speaker B: Yeah. Okay. That's an interesting point. So then going back to your original point around not really being able to control what people do on their home laptops and phones and all that type of stuff. What would be your recommendation? Because you're right, but that doesn't mean we've solved the problem. Telling someone, hey, don't do that on your laptop, even explaining the repercussions, people are still going to do it regardless, for whatever reason. It could be unintentional, it could be, hey, I'm over this company, I don't care. The motivations vary, but I'm curious to know what that process then looks like.
Because it's something that I think a lot of people out there want to know: I can safeguard it from an internal perspective, but I can't control what Karissa Breen does on her laptop on Friday night.

[00:17:36] Speaker A: Yeah, and I think that's one of the most fascinating aspects of all the conversations that are happening today. We don't necessarily have specific answers for that. There's a lot of discussion on what's copyright, what's created, and what's potentially trade secrets. And I think there's going to be a significant regulatory and legal side of this, which we're just starting to see come out, that puts a little bit more accountability, or much more accountability, on the organizations building the models: to make sure of what data they're utilizing, and to have a way to remove data from the models if it's found to violate copyright or trade secrets. That's going to be very, very interesting to watch over the next few years.

[00:18:19] Speaker B: Would an example be: I'm in accounting, downloading a large file at a random time of the day, like Friday night, when no one's probably really doing that. That would then trigger something to say, oh, this Karissa Breen lady is appearing like a rogue employee, because she's doing something that's not really in business hours, Friday night, a large file being downloaded, what is she going to do with it next? In terms of proactive measures, is that probably going to be the easiest thing as of right now to decrease the chance of people getting that information onto their personal laptop and then just going to ChatGPT to try to increase their productivity, which in their mind is a good thing, but also poses a bigger security problem? Would you say that's going to be the easiest way moving forward for companies?

[00:19:09] Speaker A: Yes, and that's not any different. That specific risk with Gen AI isn't any different from what we need to worry about with the broader insider threat categories. Someone downloading that file could email it using their personal account, if they're able to download it to a personal device or access personal email on their work device; they could put it in a share somewhere, whether it's public, or share it with a competitor. So it's not just the Gen AI piece of it that is an issue, it's the broader data protection issues, and we've seen a lot of work there. There's the DSPM industry, data security posture management; we've seen a number of new technologies coming out trying to improve on what we tried before with DLP, data loss prevention, where we weren't overly successful in being able to take full advantage of that technology. But that, more broadly, is the question from a company perspective: how to understand and manage data. And for ourselves, we've only made that challenge greater. We have not put data in fewer places, we have put data in more places. We have such complex technology environments when you look across the use of hyperscalers, the use of SaaS, and the low cost of storage, which makes it so easy to collect and retain significant amounts of data. But then how do you avoid that data going into ChatGPT, going to competitors, being shared publicly accidentally? That's a broader security concern that we're still trying to tackle and find the best way to handle for ourselves.
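As a toy illustration of the trigger described above, a large download at an odd hour, here is a short Python sketch; the event shape, thresholds, and working hours are invented for the example, and real DSPM or insider-threat tooling would learn per-user baselines from historical activity rather than hard-coding them.

```python
# Toy insider-threat check: flag downloads that are unusually large for the
# user or happen outside their normal working hours.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DownloadEvent:
    user: str
    size_mb: float
    timestamp: datetime

def is_suspicious(event: DownloadEvent,
                  typical_size_mb: float,
                  work_hours: range = range(8, 19)) -> bool:
    too_large = event.size_mb > 10 * typical_size_mb    # 10x the user's norm
    off_hours = event.timestamp.hour not in work_hours  # e.g. 11pm
    weekend = event.timestamp.weekday() >= 5            # Saturday or Sunday
    return too_large and (off_hours or weekend)

# A 4.2 GB pull late on a Friday night, against a 25 MB norm, gets flagged.
event = DownloadEvent("kbreen", size_mb=4200.0,
                      timestamp=datetime(2024, 3, 22, 23, 15))
if is_suspicious(event, typical_size_mb=25.0):
    print(f"Alert: review {event.user}'s download of {event.size_mb} MB")
```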
[00:20:55] Speaker B: I think you're right, especially on the insider threat point of view. And again, I don't expect you to have all the answers; like you said, no one really knows, it's more about having these conversations. But just going back a moment to the legal side of things, what does that look like? How do you see that unfolding? Say Karissa Breen takes a bunch of accounting information, she's on ChatGPT, the company finds out, I get prosecuted. What does that look like?

[00:21:22] Speaker A: That's... I'm not sure exactly what it looks like. I know right now there's a lot of focus on the New York Times in the US, which is claiming copyright infringement over models being trained with their articles and their data without their permission. Sarah Silverman, the comic in the US, is also claiming copyright infringement. So that's going to be, I think, the initial piece of tackling what is fed into the models and what approval needs to be gained before that can happen. What I do see potentially down the road is a mechanism for when companies identify something, similar to today with Google and other websites: if we find something out there, whether it's domain impersonation, brand impersonation, or trademark infringement, we have mechanisms to request takedowns and have it removed, whether by the hosting provider or the site. And I anticipate there will be similar things in the ChatGPTs and LLMs of the world, the ability to use a type of legal process to request that data be removed. The hows and the specifics of that, I think, are the things that will be figured out over the next handful of years. But I don't necessarily see us reinventing something at the moment.

[00:22:44] Speaker B: So when you say figured out in the next few years, totally get it, makes sense. But what do we do now, in this weird time where it's like, okay, this thing is clearly here, we don't really have answers, we're trying to get the answers, and people are still going to do the wrong thing regardless? What are some, hate to say it, band-aid solutions people can just start implementing today? And I know it's not an easy answer. Do you have any insight on that front, of what people can start doing, Mandy?

[00:23:10] Speaker A: Yeah, it goes back to a lot of the data protection components that we talked about. So having those types of controls: whether that's giving your employees web browser extensions that can mask data, hide data, or disallow certain data going into websites; more broadly, controlling from a company perspective where information can be accessed from, so not allowing services or sites to be accessed from personal devices, only from company-owned devices; and making sure that any vendor you're working with that is interacting with an LLM, so if you're using some type of vector database to augment, or any other data source, is masking or somehow anonymizing that data, so you're still getting the value of the technology and the analytics, but your data is not specifically being fed into the public LLMs. Those would be the first two areas that I would really focus on.
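A minimal Python sketch of that mask-before-it-leaves idea follows, assuming a few regular expressions stand in for real PII detection; the EMP-style employee ID format is hypothetical, and production deployments would use dedicated detection tooling rather than hand-rolled patterns.

```python
# Redact obvious sensitive values from a prompt before it is sent to a
# public LLM. The patterns are simplistic placeholders for the example.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMPLOYEE_ID": re.compile(r"\bEMP-\d{6}\b"),  # hypothetical internal format
}

def mask(text: str) -> str:
    # Replace each match with a bracketed label so the LLM keeps the context
    # of the sentence without ever seeing the raw value.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise pay data for EMP-104233, contact jane.doe@example.com"
print(mask(prompt))
# -> Summarise pay data for [EMPLOYEE_ID], contact [EMAIL]
```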
[00:24:16] Speaker B: And then in terms of maybe more broadly, what do you think is going to happen in the next twelve months? Now, I know these questions are going to be answered, and like I mentioned earlier, in the EU they've put together some sort of AI framework that I think people are referring to now to get ideas, et cetera. Still, some of them are baked, some of them are half-baked. Would you say, from your role at the moment, that people are quite worried about this, or concerned? Because again, no one likes knowing that we've got a problem we don't have an answer to. No one likes feeling like that.

[00:24:49] Speaker A: Yeah, there's definitely a significant amount of concern. I was reading a book recently about the history of automobiles and was chuckling when I read about what were called the red flag rules. In at least some parts of the world, when automobiles were first coming out, not many on the road, still a lot of horses, there had to be a person either walking or on a horse holding a red flag, to let everyone know that this new automobile was coming behind and to watch out. And if you go from there to how our automobiles work today, largely computerized, up to self-driving, and the speeds at which we drive on roads today... I use that as an analogy for AI and what we're doing. Where we are today with Gen AI, we're very cautious. We're not entirely sure where it's going to go. It looks kind of interesting. We are finding some good uses for it, we think there can be more, but we're not entirely sure where it's going to go. I equate that significantly to the automobile analogy. So when we talk about concerns: yes, there are concerns today, as there are with any new technology, and we need to be mindful and careful. But I also think there's significant opportunity in what we will be able to do. We're at the point where the amount of data that we have, and how we're using that data, is at a scale far beyond what humans can comprehend and analyze for themselves. And with technologies like machine learning and Gen AI, we're going to be able to gain insights and make use of data in ways that we can't even anticipate today. Those are the things that really excite me. I've always been a lover of technology and how we can use it to improve and help ourselves and the world around us, and I think this is another iteration of that.

[00:27:00] Speaker B: I'm definitely optimistic when it comes to this. I love AI, I love what it can do. Before we get into that, I'm just really curious to hear your thoughts on privacy. There are a lot of privacy people out there saying we have to maintain our privacy, but look, it depends on who you ask. I'm definitely not a privacy expert, but I've interviewed a lot of them. How do you think they feel about all the data and everything that's going on? They're still pushing for all this privacy stuff, but as far as I'm concerned, if you're operating on the Internet, which is effectively most people, privacy is a really hard thing to maintain. Do you have a view on that?

[00:27:36] Speaker A: Yeah, similar to you, I work with a lot of privacy professionals but have not focused directly on privacy. What I find potentially most challenging is the topic we touched on before: you can have discrete data points that, by themselves or a couple of them together, don't impact privacy.
But you don't necessarily know what other data points, whether from other organizations or other places in your own organization, are being put in, to where you could create a privacy issue by accident. For me, those are some of the key things I see privacy folks really concerned about: again, those unintended consequences. How do we ensure that data in an LLM can't be used to track down someone who should not be identified? I think those are some very interesting privacy cases and concerns that will be interesting to watch.

[00:28:34] Speaker B: So I want to switch gears now and focus on the opportunities that are out there, with AI being more ubiquitous than it has ever been. I mean, I'm loving it, I think it's great. It's definitely helped me with my workload; especially being in media, just getting the summary of something is really great for me, to be able to read more articles perhaps and just get the key points. But from your perspective, what do you think the opportunities are? I'd like you to express them in detail, because this goes back to the first part of our conversation, around some of those outstanding questions that people have. So many companies out there are telling me about the bad side of AI; whether they're the right people to convey that or not is a separate matter. I definitely have a view, but I'm really keen to hear what you see in terms of opportunities that people can learn from, that can actually change, perhaps, their perception of AI and Gen AI.

[00:29:27] Speaker A: As a CISO, I spend my days in security, so my answer will be heavily biased and focused on the security world. For me, machine learning and AI technology have been used in security tools for a number of years; it first started when we moved from signature-based antivirus tools into more behavior-based anti-malware tools. That was the beginning of it, and it's been spreading across all different areas of the security field. What I'm really optimistic about with Gen AI, and where that evolves, is the broader context that you are able to gain. By context: we have a lot of capabilities to understand that this user does these things and behaves this way, or this system does these things and behaves this way. But it's much harder to understand all of the interactions between multiple users and multiple systems, and to really understand what behavior is typical and then what activities are anomalous, the things we might want to investigate. As an example, a lot of the threats today are focused on finding valid user accounts and credentials. To most of our traditional security detection measures, this looks just like a user logging in as they normally would. But what if they suddenly start attempting to access a system they've never tried to access before? We won't necessarily see that with a lot of today's security tooling and setup. With this, you might see them access a system they are allowed to access on a regular basis, but from there suddenly move elsewhere, to a production system they shouldn't even be trying to access. All of that analytics and analysis and understanding is something that's very achievable now with large language models and all of the AI capabilities.
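Below is a toy Python sketch of that never-before-accessed-system signal; the baseline is hard-coded to stand in for what a production system would learn from historical authentication logs, and the user and system names are invented.

```python
# Flag a valid account touching a system it has never accessed before.
from collections import defaultdict

# user -> systems they routinely access, learned from historical logs
baseline: dict[str, set[str]] = defaultdict(set)
baseline["mallory"] = {"email", "wiki", "crm"}

def check_access(user: str, system: str) -> None:
    if system not in baseline[user]:
        print(f"Anomaly: {user} accessed '{system}' for the first time")
    baseline[user].add(system)  # fold the event into the baseline

check_access("mallory", "crm")         # routine access, no alert
check_access("mallory", "prod-db-01")  # never seen before -> alert
```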
And that's the piece that I'm really excited about: giving defenders that broader context and understanding of what's happening in their environment, to be able to much more quickly identify activity that is anomalous or just not standard behavior.

[00:31:50] Speaker B: So, staying on the defender side of things for a moment. As you would know, alert fatigue is a massive thing now. I'm not saying people should just sit back, not look at things more closely, and let the AI do everything for them, because obviously there are anomalies with that and things are not always accurate. But do you think it gives defenders a bit of a break? Maybe it just helps, arbitrary number, 10%, so they get a bit of breathing space. Because as we know, people are burnt out, they're tired, alert fatigue is real, and perhaps their concentration for doing these types of things is going down because they're so exhausted. Do you think AI is going to give people a little bit of relief in their day-to-day jobs?

[00:32:30] Speaker A: Oh, absolutely. And I think that's probably the largest use case in security today: the focus on security operations and helping augment and support analysts, whether that is utilizing Gen AI or AI assistants. Elastic is one example, with a security AI Assistant that's able to pull context from your environment. You see a detection come through, and it pulls different contextual information from your environment, whether that's information from your asset database or your CMDB, all those things that analysts complain they have to do manually today; this is one way to start to pull all that context together automatically. It's also able to pull in, for this type of event and this type of activity, the specific steps, the mitigation actions you should consider, through a combination of open LLMs, the likes of ChatGPT and OpenAI, and your internal company technology stack and configuration. It's really able to help your SOC analysts understand what actions they need to take. Versus right now, they spend a lot of their time gathering data and gathering information to try to get to that analysis, and once they get to that analysis point, well, they have to move on to the next alert, or it moves up to a level two analyst. What I really like about the AI technology is that it will allow analysts to spend their time on the critical thinking, the more high-value, human-centric analysis and understanding: all right, what's the user impact of this? How would this work in our environment? Do we need to take immediate action? All of the data gathering and the things we spend a lot of our time on today would already be completed on our behalf.

[00:34:30] Speaker B: Yeah, totally in agreement with you on the critical thinking side of it. I think that's something of paramount importance that people should be focusing on. So in terms of everything discussed today, and I know it's always hard to go into a huge amount of depth, but we've gone deep enough to look at both sides: where do you think we go from here as an industry, as of today? If we come back and have the same conversation in a year, where do you see us as an industry and as a society moving towards?
[00:34:54] Speaker A: I think there's a lot of debate right now on the speed of innovation and how we do that in a mindful way, in an ethical way. And I'm pleased to see the conversations that are happening; they're not necessarily what we saw in the early days of the Internet and the growth of social media, where we're still trying to address a lot of things in that world. With AI, I see those conversations starting much sooner: hey, this could happen, we need to understand how to protect against that; we're talking about copyright, we're talking about privacy. And I believe that, at least at the moment, there will be a bit more practical approach in moving forward. On the flip side of that, certainly from a security perspective, threat actors aren't taking that same approach. They are quickly researching and understanding what they can do and how they can leverage it. They have their own ChatGPTs on the dark web; they have their tools to help create phishing messages that are much more bespoke, much more targeted, no longer having all of the language and grammar and punctuation errors. And so it's going to be, again, that balance of how we move forward comfortably and safely as a society, knowing that there will be parts of society that don't follow those rules, and how we balance that. It's what we talk about for the Internet, what we talk about with any new technology, and I think that will continue to be the focus of the conversation for the next twelve to 18 months, and we'll continue to see that growth and that change. On top of that, I would like to start to see much more global conversations as well. A lot of what's happening is country or region specific, and I think looking at this more globally would help too.

[00:36:52] Speaker B: So, Mandy, do you have any closing comments or final thoughts you'd like to leave our audience with today?

[00:36:57] Speaker A: One, thank you for having me. It's been a great chat. I have been a technology lover from my early, early days, and I'm always fascinated by the capabilities we have, and by the creativity of how folks see they can leverage and take advantage of technologies. I look at Elasticsearch, created when Shay started to write a tool for his wife to help manage recipes, and that's turned into a very large global organization that's used by over half of the Fortune 500 and runs from the ocean to Mars. I look at that and it's a fantastic world, and I'm really excited to see where it goes next from a use-of-technology perspective, while we balance the downside. This is KBKast, the voice of cyber.

[00:38:04] Speaker B: Thanks for tuning in. For more industry-leading news and thought-provoking articles, visit KBI.Media to get access today.

[00:38:12] Speaker A: This episode is brought to you by Mercksec, your smarter route to security talent. Mercksec's executive search has helped enterprise organizations find the right people from around the world since 2012. Their on-demand talent acquisition team helps startups and midsize businesses scale faster and more efficiently. Find out [email protected] today.
