[00:00:00] Speaker A: If an organization is taking the stance of, well, I'll just wait for a regulation to come about and then I'll do the bare minimum to become compliant with that regulation, I could not support that approach right now. It's all moving too quickly. It's moving too quickly, at a scale that we haven't really seen in technology before. And you can't wait a year or two or three to start to make a decision about this. The decisions need to be made now. It's going to have to be more than the bare minimum.
[00:00:34] Speaker B: This is KBCast. Are they completely silenced as a primary target for ransomware campaigns?
[00:00:40] Speaker A: Security and testing and performance, risk and...
[00:00:43] Speaker B: ...compliance, we can actually automate them, take...
[00:00:45] Speaker A: ...that data and use it.
[00:00:49] Speaker B: Joining me back on the show today is Nathan Wenzler, chief security strategist from Tenable. And today we're discussing the dangers of public sector employees using AI tools privately. So, Nathan, welcome back.
[00:01:00] Speaker A: Thank you for having me. So, so happy to be here.
[00:01:02] Speaker B: Okay, so I really want to talk about the dangers, because again, it depends on who you ask, who you speak to, if you read different forums, social media. I mean, I trawl a lot of this stuff to do research and to sort of understand what people are saying out there. But I want to hear your thoughts on the obvious dangers of public sector employees using AI tools.
[00:01:25] Speaker A: Yeah, for sure. And I appreciate you talking about all the sort of commentary and discussion going on out there, because I do feel that there's a lot of hype and a lot of misplaced fear when it comes to AI tools. And I think it's really important to talk about that as well, because the fear of things that aren't necessarily a problem distracts us from the actual dangers and risks involved in using these tools. So to give you an example, I hear a lot of chatter about people being very concerned that attackers can compromise the algorithms, they can compromise the engine itself so that it hallucinates more, it gives up false information, that kind of thing. Quite frankly, that's just not a real attack vector. I mean, it would be a very complicated sort of thing and a lot of effort for an attacker to break into an environment, get to an AI application, and then subtly manipulate the underlying GenAI engine to make it spit out bad results. There are far easier ways that these tools can be compromised, and I think that's really where we have to be more mindful about it. As an example, we're concerned about trust. We're concerned about the tools providing users the right answers. That comes from the back-end data. That comes from what data sources we're using, the ones the AI tools are leveraging to give us those answers. If those data sets get compromised, that's where you start to have real danger, and it's certainly much easier to compromise a database than it is to try to rewrite an application.
When we start to talk about the dangers of using these kinds of tools, what it really comes down to is: where are the risks that could compromise the trust users place in these tools to help them answer the questions they're asking or to perform the tasks they're trying to perform? Anywhere we find that that trust can be compromised, that's where we have to really focus.
That's good data security. That's understanding the source of the data we're using for these tools. Are we using a public data set that could probably be easily poisoned with a lot of bad information, or are we using a very small private data set that we can control? What data is there, what information is there, we can validate it, we can secure it, we can make sure the integrity of the data stays good. Those are the kinds of areas that we have to really focus on if we want to avoid the dangers from the applications themselves. So I think that's a big part of the equation: we get fixated on the front end of it, but it's really more of a data security kind of issue. The other obvious danger then is just misuse. And we've already seen some instances of this in the private sector, but there's a lot of concern from government agencies as well, especially when we get into military, energy, national security areas where there's very, very sensitive and classified information. There's a lot of concern that just through user error, government employees might accidentally take private information and plug it into one of the public AI tools, and now that data is out in the wild and other people could potentially access it. There's a lot of concern there as well about just user training. Are we educating our users about the proper use of these tools? When is it okay to use them? When is it not okay? And really trying to put in good controls around the safe use of approved AI tools, while ideally trying to make sure that our users aren't circumventing all of that and going out to public data sets with really, really crucial and critical information. So there's a lot of moving parts to the dangers of all these AI things. I think it's just, again, really important to focus in on the real problems and not get too caught up in the hype of these very scary sounding algorithm compromises or other sort of almost movie-level kinds of adventurous hacking that people are afraid is going to happen, when there are far more practical dangers to be concerned about.
[00:05:42] Speaker B: Okay, I want to get into this a bit more now. You raised something which is interesting around the right answers. So I'll give you some context and then I'll ask the question. So basically, I got back from Las Vegas, went to a conference, interviewed the head of AI. Guy was brilliant. And he talks a little bit more about what you were saying around the right answers. Right? So just say, hypothetically, okay, the sky is obviously blue, but what if we write all this content out there that said, you know, the sky is pink, and then people use that data set. So if I then typed into ChatGPT, well, what color is the sky? And it came up with pink, because there's all this inaccurate information that's out there, or propaganda that exists, as you know. What does that then mean? And the reason I ask that is, one, journalism's decreasing, right? Two, things are being written by AI all the time, whether it's accurate or not accurate. How do you then look at that more closely to say, hey, that's an accurate piece of content versus not, right? Like, it could be, in my crazy head, that maybe I genuinely think the sky's pink. Right? So how does that then translate into, these are the right answers? Who's going to be auditing that stuff?
[00:06:49] Speaker A: I mean, it's a great question. And that's exactly the kind of danger that I was alluding to about the data side of it. Data poisoning, or filling a data set with a lot of bad information, is, in my opinion, the greatest danger when we're leveraging these kinds of tools. Because again, it's an attack on trust. If I'm an organization, I take a lot of really good steps to put in a GenAI knowledge base kind of thing for my users. If you have a question about what to do, go to the little chatbot, type in the question, and the GenAI stuff can easily explain to you what you should do. Well, if my back-end database is compromised and filled with a bunch of bad data, think about what's gone on. I've told my employees, here's an application you can trust. When it gives you an answer, use the answer. But then when the data is compromised and those answers are wrong, what's the user left with? The user has been told to trust it. They believe that the information is good. And so you've already set people into a place where they're not necessarily going to question it or validate the information as sort of a first response. Their first response will be to just sort of trust it, even if it might seem a little bit off. So it's a real problem for organizations that have taken the step to say, we're not going to leverage public data sets because we know that information or that data is just crazy, there's lots of comments about what color the sky is and it's very difficult to understand what's real and what's not, so we're going to have an internal, controlled AI setup and we're going to have it be very useful to our users. But if that controlled data set gets poisoned, now you're essentially attacking the trust of the users, and it's so much easier then to cause them to make mistakes, to cause them to click on malicious links, to give up their usernames and passwords. There are just so many scenarios where an attacker can manipulate that trust because of the data poisoning that it becomes a much more effective sort of an attack than almost anything else. So this is the big thing that organizations have to take into account: how do we ensure whatever the back-end database is stays correct, that the data is sound, that we're not compromising the integrity of that data, so that we know that when our users ask questions from it, the GenAI tools are giving them legitimate responses. That's really the goal we have to work towards. And I think a lot of organizations are still trying to figure that out, frankly.
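To make the integrity point a bit more concrete, here is a minimal, purely illustrative sketch of one way an organization could watch for tampering in an internal knowledge base: record a cryptographic hash of every approved document when it is ingested, then compare later. The function names, file path, and data layout are hypothetical assumptions, not any product's API; a real deployment would also control who can write the baseline, log changes, and alert on mismatches.

```python
# Hypothetical sketch: detect tampering in an internal knowledge base by
# comparing current document hashes against a baseline recorded at ingest time.
import hashlib
import json


def fingerprint(text: str) -> str:
    """Return a stable SHA-256 fingerprint of a document's content."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


def build_baseline(documents: dict[str, str], baseline_path: str) -> None:
    """Record a hash for every document when the data set is first approved."""
    baseline = {doc_id: fingerprint(text) for doc_id, text in documents.items()}
    with open(baseline_path, "w") as fh:
        json.dump(baseline, fh, indent=2)


def find_tampered(documents: dict[str, str], baseline_path: str) -> list[str]:
    """Return IDs of documents whose content no longer matches the baseline."""
    with open(baseline_path) as fh:
        baseline = json.load(fh)
    suspect = []
    for doc_id, text in documents.items():
        if baseline.get(doc_id) != fingerprint(text):
            # Changed, or added outside the approved ingest process.
            suspect.append(doc_id)
    return suspect
```

The design choice being illustrated is simply that integrity monitoring of the back-end data is ordinary security engineering, not anything AI-specific.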
[00:09:35] Speaker B: Well, I was just going to ask, how do you do that effectively? And how do you know? It's going to get to a point, right? Okay, look back at before the Internet started up, before Google was around, you had to go into an encyclopedia and all these things, right? Kids today wouldn't even know what that is. Now if I want to know something, I just google it. Now, sure, there could be some, maybe some of the, you know, the answers are wrong or incorrect or whatever. But then I'm gonna get to this stage, it's like, if I can just ask, I don't know, ChatGPT or, you know, a large language model, whatever it is, a question and it gives me an answer, am I really gonna question it? Like, it's more about the mindset. Like, why would someone in the next generation sit there and question it? Because this is all that they know, right? Not back through the day where we're looking through encyclopedias and then there's all these references and all that. All that's gone, gone out the window. So how do you ensure it's accurate, though? How do you maintain that integrity of that data?
[00:10:32] Speaker A: Well, I think what we're going to start to see in a lot of cases is an acknowledgement about what we saw initially with things like ChatGPT, where the promise was that all the information that's out there on the Internet is within scope of this database, and you can ask it any question and it'll answer any question. Well, we've seen exactly what kind of chaos comes from that, the hallucinations, the crazy information, the crazy answers. It's become, I think, pretty common knowledge that you can't just trust whatever comes out of those public data sets, because people have seen firsthand that the answers are pretty wild, or can be at least. So I think what we're going to see is this trend towards not trying to leverage AI tools to be the answer to all questions, but to leverage very purpose-built, specific AI tools that can answer particular questions. When you do that, you're going to be able to create a data set that's much more focused, that's more aligned with a single purpose or a single topic, and that will become much easier to manage from both an integrity standpoint and also just a verification standpoint. An example that I often talk to people about is, if you think about just a help desk, a knowledge base for a help desk. Users always have questions. They want to know, how do I change my password? Or how do I request a new piece of software? These are all very common help desk type questions. You could build a data set to answer those questions without answering things like what color is the sky, or how many fish are in the sea. A help desk data set would be much smaller. It's going to be a much more finite set of answers and data. That becomes a lot easier for organizations to ensure that the information that's there is accurate and aligned with the purpose. And then when users are leveraging those LLMs or the equivalent ChatGPT-style interface, they're going to have a much stronger stance to be able to trust what comes out of it, because the data is very focused on just the thing they're trying to solve. And I think, especially for government organizations, that's where you're going to see the biggest shift in how they use these kinds of tools: toward these purpose-driven knowledge bases or themes or topics, so that they can manage the accuracy and the validity of the data better.
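As a rough illustration of the purpose-built approach Nathan describes, here is a hedged sketch of a help desk assistant grounded only in a small, curated internal FAQ. The FAQ entries, the naive keyword-overlap retrieval, and the stubbed model call are all assumptions for illustration; a production system would use proper retrieval and an internal LLM endpoint.

```python
# Hypothetical sketch of a purpose-built help desk assistant: retrieval is
# limited to a small, curated, internally controlled FAQ, so answers can only
# be grounded in vetted content. The LLM call itself is stubbed out.
HELP_DESK_FAQ = {
    "How do I change my password?":
        "Go to the self-service portal, choose 'Reset password', and follow the prompts.",
    "How do I request new software?":
        "Submit a software request ticket; your manager approves it before installation.",
}


def retrieve(question: str) -> str:
    """Pick the curated answer whose question shares the most words with the query."""
    q_words = set(question.lower().split())
    best = max(HELP_DESK_FAQ, key=lambda k: len(q_words & set(k.lower().split())))
    return HELP_DESK_FAQ[best]


def build_prompt(question: str) -> str:
    """Compose the grounded prompt that would be sent to an internal model."""
    context = retrieve(question)
    return (
        "Answer using ONLY the approved help desk context below.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )


print(build_prompt("how do I change my password"))
```

The point of the sketch is the scope restriction: the smaller and more curated the data set, the easier it is to validate and the harder it is to poison unnoticed.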
[00:13:06] Speaker B: Okay, so just to press on a bit more, you said purpose built, to answer particular questions. Can you give me an example? Like how particular or specific are we talking here?
[00:13:15] Speaker A: Well, it's going to be as particular or specific as the data behind it. So I mean, I can give you an example, and we do this a little bit at Tenable, quite frankly, but we look at vulnerability data. Vulnerability data can be really complex and technical. We talk about why is a particular piece of software vulnerable, or how is the vulnerability exploited, or how do you remediate the problem. There's a lot of data associated with vulnerabilities. And if you were to build a data set that just contains the information about vulnerabilities and essentially nothing else, you could build a GenAI interface on top of that that would look at all that data, it would look for patterns, look for commonalities. It's going to start to really parse out that data set about vulnerability information to understand a little bit of what's there. So that if a user needs help, say I'm a SOC analyst now and I need to figure out how do I fix this vulnerability that one of my tools told me about, you could just ask the tool, how do I fix this, and hit enter, and it's going to be able to tell you, oh, here's all the remediation advice about that. And if you're not clear about what that means, you could ask it further. As long as that data about all of the various pieces of the vulnerability is there, the user is going to have a very quick and powerful way to get that information out in an easy to digest way, a very fast way, so that they can start to address the problem. So I think there's a lot of use cases in this regard where we could build, especially for security practitioners, these kinds of purpose built interfaces to do threat research, to do asset research, to understand what's connected in our networks and be able to ask more questions about who owns this server or who's logged into this. There's just a lot of ways that can be done, but that's the kind of specificity of the data that we can leverage. It's not going to answer the broader questions we were talking about, like the color of the sky, but it can help us with that investigative need that a lot of security practitioners have. So that's the kind of thing you're going to see, I think, more and more commonly across different areas of study, across very specific security specializations. That's where we can see some wins with these kinds of tools.
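In the same spirit, a vulnerability-focused assistant could be constrained to answer only from local vulnerability records. The sketch below is hypothetical: the record fields and the CVE entry are made up, and it is not how any particular vendor's product works. It simply shows how a grounded remediation prompt might be assembled from a narrow, structured data set.

```python
# Hypothetical sketch: a narrowly scoped assistant over local vulnerability
# records only. Fields and the sample CVE entry are illustrative placeholders.
VULN_DB = {
    "CVE-2024-0001": {
        "summary": "Example: remote code execution in a web framework component.",
        "remediation": "Upgrade the affected package to the patched release and restart the service.",
        "exploited_in_wild": False,
    },
}


def remediation_prompt(cve_id: str) -> str:
    """Build the grounded prompt an analyst's 'how do I fix this?' would produce."""
    record = VULN_DB.get(cve_id)
    if record is None:
        return f"No local record for {cve_id}; do not answer from general knowledge."
    return (
        "Using only the record below, explain how to fix this vulnerability.\n"
        f"{cve_id}: {record['summary']}\n"
        f"Remediation: {record['remediation']}\n"
        f"Known exploitation in the wild: {record['exploited_in_wild']}"
    )


print(remediation_prompt("CVE-2024-0001"))
```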
[00:15:41] Speaker B: So do these tools sort of exist today? Because, like, when I was doing this sort of stuff, and I mean, this was going back like a decade ago, I didn't have any of this. And I used to pull together reports, so it would have been nice if I could have leveraged some type of, you know, AI capability, or else I was doing all this stuff manually. Right. Like, that was time consuming and hard, and you miss things. So are we going to start to see this more? And I know people are going to say, oh, AI's taking my job. Well, not really, because you're going to be able to do better things that are, like, you know, better outputs for companies. Right. Like, I would rather AI do real low level stuff and then pay that person to do more, you know, critical thinking tasks and strategic tasks. Right. So what do you think now, moving forward as of today? Like, what do you see with your role, with customers that you're talking to? Where are we going to get to with all of this?
[00:16:29] Speaker A: Well, first of all, let me just address that. I fully agree with you, by the way, that AI is not replacing anyone's job. I know there's a lot of fear and hype in the industry about that. It's just not real. It's exactly as you say. We need people who can make really sound risk decisions, get organizations moving to mitigate and remediate these problems before breaches happen. That's the whole point in cybersecurity, is to harden the environment so that we experience fewer data breaches. And if these tools can help you make that decision faster, you're actually more empowered and more valuable in the organization than when you're doing it manually and spending hours or days sifting through spreadsheets trying to figure it all out. So I think it's a really important point you've brought up that this is not a job replacement function. This is a skills augmentation. This is going to make us better at what we're actually here to do, which is mitigate risk, and not just sorting through data and spreadsheets. It's really, really critical that people understand that. And going forward, I think that's really what you're going to see more organizations start to focus on when we cut through the hype and the buzzwords that are used out there. And there are a lot, let's be fair. There's a lot of companies that have tacked on the letters AI to every single thing that they do, and so it can seem like there's just a lot of fraud, essentially, of what AI can or can't do. When we cut through that and you start to see it as this kind of analysis tool, as this way to expedite how we get information, how we absorb information, that's the power. That's where you're going to find the benefits to your organization. You do get to make those risk decisions faster and more accurately. And so as more tools integrate these kinds of functions into them, as we educate users that this isn't the all-encompassing apocalypse of security as we know it, that it's really just another tool in the toolbox that can help you do your job better, I think you're going to see a lot bigger return from the implementation of these kinds of tools. In a lot of cases, vendors, and again, my own company, we leverage some of these kinds of capabilities already in our tools, specifically to help people ask those kinds of questions or to query the data set. So you're definitely seeing already that more and more companies are incorporating it in a very practical way. And as that gets adopted through more organizations, the benefits are going to become much more obvious.
[00:19:11] Speaker B: Okay, I want to switch gears now and talk about regulation and governments sort of taking that more into their realm. Now, I'm going to ask a harder question, because, like, governments in the past are not great at doing, like, the best of things, right? So now we're asking them to do something that's pretty complex, to be like, okay, you guys should just regulate it. So how does that work? Because, again, I get it. Like, the onus shouldn't be on, you know, each organization; you can't regulate things unless it's, you know, backed by the government, et cetera, or independent bodies. I get that. But that's probably not going to happen anytime soon, and I would challenge the capability and the competency of that being a thing. So I'm really curious here, and I mean, I'm only looking at, you know, the research that I do, the reconnaissance that I see from what people are saying on Twitter or X or whatever you want to call it. I'm looking at what people in the market are sort of saying, and to our earlier, you know, thoughts around the commentary that's out there. So how does this work?
[00:20:13] Speaker A: It's a really good question, and I think the honest short answer is we don't really know yet. Regulation, when it comes to the use of software or security tools, anything like this, look, there's been a lot over the last 20 years, right? A number of audit requirements, compliance requirements in healthcare and finance. And government has stepped in in a number of places to establish requirements for basic cyber hygiene, for basic security matters, to ensure that these industries or these areas are essentially safe. But I think one of the challenges and the tricks here is that we have to sometimes think about why government steps in in these cases. What I see in a lot of cases in the public is people say, well, you know, we don't need government oversight, we don't need all these regulations, just let the market decide. Like, companies should recognize that cybersecurity is a good business practice, and if they don't invest in that, then they get data breaches, they get compromised users, and customers will move on to other companies and they'll lose their business and they'll be out of business, because nobody wants to do business with someone who's not secure. Well, in practice, has that actually happened? I mean, in any one of the major breaches we've seen over the last couple of decades, those companies are still in business, and in many cases their stock prices are higher than ever. So relying on organizations to just do the right thing because it's the right thing to do hasn't really panned out. We just haven't seen that broadly. And it has forced the issue in a lot of cases for governments to have to step in and say, well, listen, people's information is being compromised. Individuals are losing money. They're being attacked by these folks. They're having their money stolen. We have to step in and protect individuals because the market hasn't really gone as far as we had hoped. So it's a two-sided problem, and I think each side likes to blame the other a little bit. Companies don't like being told what to do or having mandates. And government is, as you mentioned, not always the greatest when it comes to defining those things. But neither side is really getting it right, so we have a lot of work to do. I think it's part of the reality of this sort of thing. We're going to need some amount of regulation when it comes to AI usage, and especially when it comes to trust and validation for public data sets. And we're already seeing, with some of the major players, Google, Microsoft, OpenAI, all these things, there's starting to be a lot of questions about how are you protecting people's personal information? How are you ensuring that data can't be stolen or manipulated? Those are the right questions to be asking, and it may require regulation.
Maybe this is also a place where these organizations just essentially need to step up and do the right thing and implement really, really strong controls to protect their users and their customers. So I think you're going to see both. I think in the AI space, you're going to see regulations come about. I know a number of countries are discussing that right now and trying to figure out what they can legislate and what they can require internally. A lot of government agencies are, of course, creating policies saying that their users can't or should not ever use public-facing GenAI services like ChatGPT and all the rest. There's a lot happening all at once. But I don't think there's a single right answer in this particular case. And how it evolves in a lot of ways is going to depend on the companies behind these big GenAI platforms. How they take the next steps in their security practices, I think, is going to dictate a lot of how much the government's going to have to regulate and how much they're going to have to require, what depth they're going to have to require, to ensure the trust of these systems. It's really complicated, so it's a hard question to answer.
[00:24:19] Speaker B: But how long do you think this will take until it's implemented? And this is a thing, because you can't really, it's not so easy to just ring-fence this problem, right? Because it's everywhere. So how do you do that effectively? There's no rulebook, like, oh, this is how we're going to do it. So this could take years.
[00:24:35] Speaker A: Yes, it absolutely could. Especially if we're going to rely on regulation to be the answer for that. So this is where it becomes imperative that essentially individual organizations need to step up and do more for themselves. They need to look at the real risks that these platforms can cause. They need to look at the way that it could affect their businesses. If users are shipping off intellectual property into these databases, if users are making corporate decisions based on bad information or misinformation, or government agencies are compromising secrets because we've copied and pasted them into an interface somewhere, there's a lot of places where this can go wrong. And if an organization is taking the stance of, well, I'll just wait for a regulation to come about and then I'll do the bare minimum to become compliant with that regulation, well, I could not support that approach right now. It's all moving too quickly, as you've pointed out correctly. It's moving too quickly, at a scale that we haven't really seen in technology before. And you can't wait a year or two or three to start to make a decision about this. The decisions need to be made now. It's going to have to be more than the bare minimum. We really have to get better about driving strong policy about what can or can't be used within organizations, ensuring proper security controls around the things we do use. So that's application access, that's data access, all of your standard access control kinds of things. You need validation processes for the data to ensure that what your users are getting is real. There's a lot of work to be done here, and I don't think it's necessarily unknown. It's just that we have to start doing it. That's, I think, the challenge for a lot of organizations: they're just not yet doing what they need to do to protect themselves.
[00:26:33] Speaker B: So you mentioned before, companies need to be doing more. What do you think they should be doing though, in terms of like, no one wants to have more work on their plate, right? Like it's like we got enough things to do, Nathan. We're trying to keep our head above the water. We're trying to do real basic stuff like patch management, right. And now we got to think about these other things.
[00:26:50] Speaker A: This is the core of risk management. How much risk is introduced by allowing unfettered use of GenAI systems? That's not a question that's going to be answered the same way by any two organizations, but that is the question you have to answer. Yes, we all have a lot of work to do, but from a security standpoint, we're here to help advise the business about where to focus, on the areas that put us most at risk. And if you're an organization that deals with a lot of critical data, protected personal information or healthcare information, intellectual property, classified information, whatever it happens to be, this can introduce a lot of risk to your organization, maybe more than some of the other things you're already working on. It is something that has to be answered. It's going to be answered a little bit differently by everyone. And I think the other thing to remember, much like I said a moment ago, is that these tools are essentially just applications. Like most any other application, there's a front-end user interface, there's a back-end database.
In terms of what to do, we already know good application security is really your first major step here. Make sure the user interface can only be accessed by authorized people. Make sure the database can only be accessed by authorized people. These kinds of controls are fundamental to any application, including an AI-based application. There might be some extra steps for the data validation. There might be some extra steps in terms of moving away from a public-based data set and leveraging your own internal, well-controlled set of data. That's a little bit of work there, too. But I don't see this as really all that different than any other application that we secure in our organizations. So, yeah, it's got to be factored in like any other risk factor. And if it puts your organization at risk, you've got to find the resources to put good controls in place.
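A minimal sketch of the "treat it like any other application" point: authorization checks on the chat front end, and a separate, restricted identity for the back-end knowledge base. The role names and functions here are illustrative assumptions, not a specific framework's API.

```python
# Hypothetical sketch: only authorized users reach the chat front end, and the
# service identity that reads the knowledge base is separate and limited.
ALLOWED_CHAT_ROLES = {"analyst", "helpdesk_agent"}   # end users of the assistant
ALLOWED_DB_ROLES = {"kb_reader"}                     # back-end service identity only


class AccessDenied(Exception):
    """Raised when a principal lacks a permitted role for a resource."""


def require_role(roles: set[str], allowed: set[str], resource: str) -> None:
    if not roles & allowed:
        raise AccessDenied(f"no permitted role for {resource}")


def handle_chat_request(user_roles: set[str], service_roles: set[str], question: str) -> str:
    # Standard access control, applied to both tiers of the AI application.
    require_role(user_roles, ALLOWED_CHAT_ROLES, "chat front end")
    require_role(service_roles, ALLOWED_DB_ROLES, "knowledge base")
    # ...retrieve context and call the internal model here...
    return f"(answered within policy) {question}"


print(handle_chat_request({"analyst"}, {"kb_reader"}, "How do I reset my token?"))
```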
[00:28:50] Speaker B: So just following that train of thought, what do you sort of see as the largest risk to government agencies more specifically?
[00:28:57] Speaker A: I think the real risk is the use of sort of the public data sets, the ChatGPTs and Bards and all of the other sort of public-facing things out there. Government agencies have protected and classified kinds of information, and a user who makes an error, who copies and pastes information out into ChatGPT that they shouldn't, can put a lot of people at risk, depending on the agency and the services. So I see, for government, the bigger concern is that human error. If you want to call it insider threat, it's a little bit of that. It's not necessarily malicious insider threat, but it is the use of those public data sets to either have information from your secured areas get out, or bad information coming in that feeds the users and causes them to make really bad decisions, which, depending on what level of government you're in, could be very, very harmful to a large group of constituents. So it's having good policies in place around that, it's training and education, it's a lot of monitoring to make sure that your users aren't circumventing what controls you have and using these public things. And frankly, you have to be prepared for it to happen anyway. If there's anything we've learned over the last 20 or 30 years or so, it's that when there's some kind of new, shiny technology out there, users will find a way to use it. The reality is we can put a lot of controls in place, and we have to, but we still have to be prepared for people to use ChatGPT from their phones or leverage it from their own personal systems at home or whatever the case might be. So there's a lot more work that has to be done in terms of monitoring and ensuring that everyone understands the risks, especially around protected data. But that's going to have to be the focus for government agencies, I think, as a first and foremost kind of thing at this point.
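One practical guardrail against the accidental-exposure scenario described above is a coarse pre-submission check that flags prompts containing protected markings or credentials before they can leave for a public GenAI service. The patterns below are illustrative assumptions only; real data loss prevention tooling is far more thorough, and this sketch is not a substitute for it.

```python
# Hypothetical sketch of a coarse outbound-prompt check: block or flag text
# that looks like it contains classification markings or embedded credentials.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"\b(TOP SECRET|SECRET|CONFIDENTIAL)\b", re.IGNORECASE),  # markings
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                                # SSN-like numbers
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),                         # embedded credentials
]


def looks_sensitive(prompt: str) -> bool:
    """Return True if the outbound prompt matches any coarse sensitive-data pattern."""
    return any(pattern.search(prompt) for pattern in SENSITIVE_PATTERNS)


if looks_sensitive("Please summarize this TOP SECRET briefing for me"):
    print("Blocked: route this request to the internal, approved assistant instead.")
```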
[00:30:54] Speaker B: So even if people understand the risk, right? Like, do you think people care? Now I say that, and what I mean by that is, if I could reduce the time, like, if I'm putting myself, obviously, I'm an entrepreneur, it's a little different. But if I'm putting myself in someone else's shoes, if I could reduce the time of doing my work, because I can use ChatGPT to do it for me, and maybe, I don't know, it reduces 20% of my workload, for example, aren't I more inclined to do that, irrespective of the risk? Because I think that's people's mindset.
[00:31:20] Speaker A: It very much is. And in a lot of cases, it might be appropriate to do that. But if we're talking about government, especially, there's so many areas where you're dealing with very sensitive data sets. You're dealing with, again, military, classified information, financial information, healthcare. There's so many areas of government that deal with really, really sensitive information.
So that does become a question for the organization, essentially, of, is the efficiency gain that you might see for your users worth the risk of exposing all that data if somebody makes a mistake? And I suspect in those areas where the data is more secretive or classified, that answer is going to be no, because the risk is too great. So this is going to have to be an exercise that a lot of organizations go through. And this is part of the education process. You may have to help your users understand that. Like, look, you work for defense. We just cannot risk any of these things getting out. And so, no, these tools won't be used, or they will only be used in this context or whatever the policy decision is. That may have to be the decision in some of these organizations in order to ensure that the data isn't compromised or escapes the organization and is able to be sort of stolen, if you will, from these public data sets. So it's gonna, there's no single answer here. I mean, every organization is gonna have to make sort of a different call about this and be mindful of the fact that users, they do make mistakes, or they sometimes have good intentions. Those good intentions can lead to really problematic security incidents, and we've got to be prepared to deal with that as well. But that's, that's part of the whole process we're wrestling with right now.
[00:33:06] Speaker B: And you're absolutely right. Makes sense, right? If you're dealing with sensitive information, you'll think twice about it. But maybe that's because of just the pedigree that you and I have come from, and other security people out there. But you think about the average person, like Mildred, who's doing a job and maybe doesn't come from our sort of background. Maybe it wasn't their intention, they just didn't really think it through, right? Didn't think this could be a problem, until it is. Right? Like, again, maybe the intention was to do the right thing and it didn't quite turn out that way, and then there's a problem. That's often where you and I have both seen things go wrong. It's not like someone's intentionally being like, haha, I'm going to purposely, you know, ruin it for everyone. It just may have been a mistake. They didn't think it through.
[00:33:48] Speaker A: Well, that's exactly what I said, is that, you know, it's still a form of insider threat. It's just not a malicious insider threat. Accidents are legitimate risks; accidents can cause legitimate security incidents. And I mean, this is something that's not new to AI tools. Let's take a step back here for a bit. Let's think about when wireless access was pretty new and such a wow sort of thing. A lot of organizations did not implement any form of wireless network because they were concerned about the security risks and the harm that it could cause. And what ended up happening? Users wanted to use wireless. It was cool, right? So they'd bring their own access points from home and plug them into corporate networks and configure them to just broadcast wireless access in a totally unsecured way directly into a corporate network. Call it shadow IT, call it whatever term we want to use. But this is a problem that's been going on for a long time with people.
Even in that example of wireless, it was done with good intention. Right? I can use my laptop from a different office because it's wireless. I could use it somewhere down the hall. I don't have to be tied to my desk. I can do more work in more places. Yeah, that's all good intention, but it was a big security risk. We're essentially talking about the same thing here. People have really good intentions about, hey, it can write this email for me, I'll just copy and paste all this really secret data into it, and it'll write the email that I have to send internally back. Yeah, your intention is good, it saves you a lot of time, but you're putting the organization at risk. So what's happening, or the concern around this, isn't new. What is new is the scale and speed and scope of all of this, because AI has become kind of ubiquitous. It's accessible everywhere. I can get it from my phone, I can get it from home, I can get it from anywhere. So it is a little bit of a more complicated problem to try to manage, just because of the accessibility. Fundamentally, we're talking about security incident response and how you're prepared for when those accidents happen. What do you do about it? How do you recover? How do you isolate the damage? How do you deal with the user in question? Do you have a way to handle that side of it from an HR perspective? A lot of different factors, but no different than any other accidental insider threat that we've been dealing with for decades now.
[00:36:21] Speaker B: So that leads to my next point. Obviously we want to encourage innovation and AI, and we've sort of discussed the benefits of it, within reason, to the point where you're not jeopardizing the company and putting things at risk. So how do you find the balance between that, though? Because of course we want people to do more meaningful tasks and critical thinking, strategic tasks, if we can eliminate or automate, using AI, et cetera, some of the more menial tasks. But then obviously you don't want to push it to the point where it's like, oh, I just exposed a whole bunch of trade secrets, I shouldn't have done that. How do you find the equilibrium?
[00:36:56] Speaker A: Well, I think it's, again, going back to what we talked about a little bit earlier. This is the place where moving away from leveraging public, broad data sets like a ChatGPT inside a corporate environment, moving away from that and moving to smaller, controlled, internal data sets only, and having that purpose-built tooling for a particular task or for a particular function, that's going to be the way we can balance the security needs. You can ensure that if I have an internal tool and the database is all internal to me, and it's not available or accessible to the public Internet or any other user out there, it's not great if my user copies and pastes classified information into it, but at least I know that data and everything in it is still contained within my environment. I still have some amount of control there. So the movement, I think what we're going to see in the next couple of years, roughly, is multiple smaller implementations of focused AI usage that is well controlled, like we control any other application. We'll move away from leveraging the public data sets and the public tools for corporate benefits. We'll use these private, controlled versions of it for our corporate work or for our government work so that we can ensure that all the security controls we need in place can be there. And I think that's going to be the way that everyone's going to have to manage this going forward.
[00:38:25] Speaker B: So where do we go from here, Nathan? How do you think we move forward? Because as we've sort of spoken about a lot, the undertone of this interview is that it's hard, we don't have all the answers, we're trying, it could take some time. But what do you see, like, practically, as next steps?
[00:38:40] Speaker A: I think that's already happening to some extent. I think that people are realizing the dangers and they're starting to ask the questions about how to best secure these things. And the way to do that, again, is these smaller, sort of focused, purpose-driven tools. So some of that may come from your vendors and partners that you're already using; they may be leveraging it in that way. Some of it may be internally developed. If an organization is large enough and they have a very particular need and a very specialized data set, they might leverage a GenAI engine and set of tooling to build something internally. But I think that's really the next steps here. The next steps are understanding where it can help in that very practical kind of way and then looking at the best ways to implement that, whether that's through a third party, a trusted vendor or a partner, or building it yourself. Those are sort of the next steps here in terms of how you're going to leverage it going forward. Like you would for any other business tool or security product, look at the need, look at the place where it can help you most, and then implement accordingly from there.
And like I said, from organizations I talk to all over the world, that's already happening. We're already seeing folks training and educating their users away from the public-facing things, or putting policies in place that say, within this organization, we don't use those tools. And they're starting to leverage more things internally because of the security controls that they can put around them. So we're already heading down the right road. It's just going to have to continue to play out for the next bit of time while everyone implements and embraces that.
[00:40:22] Speaker B: So, Nathan, do you have any sort of closing comments or final thoughts you'd like to leave our audience with today?
[00:40:27] Speaker A: Yeah, I honestly think the most important thing is that it's really important right now for everyone to cut through the hype. We've heard so many horror stories about how AI is going to allow attackers to be unstoppable. We've heard AI is going to replace every security practitioner on the planet and it'll just do our jobs for us. There's a lot of buzzwords and rhetoric and hype around it, and it's really, really important that people cut through that and start to see that this is a tool like any other tool. It can be very powerful when it's implemented correctly. And let's talk about the best ways we can leverage this particular tool in our toolbox to be more efficient, to better understand risk, to understand the inventory of my asset space better, whatever the need is. I think the more that people get to a practical place about this, the easier it'll be to understand the problem, the easier it will be to put the right security controls in place, and the better position they're going to be in to actually get the benefits from it. Without panicking and just being in this sort of constant state of putting out fires around it, we can get to a place where it actually is beneficial and does a lot of good for our organizations.
[00:41:52] Speaker B: This is KBCast, the voice of cyber. Thanks for tuning in. For more industry leading news and thought provoking articles, visit KBI Media to get access today.
This episode is brought to you by Mercsec, your smarter route to security talent. Mercsec's executive search has helped enterprise organizations find the right people from around the world since 2012. Their on-demand talent acquisition team helps startups and mid-sized businesses scale faster and more efficiently. Find out [email protected] today.