[00:00:00] Speaker A: We've got to start getting frameworks in place. We've got to start getting the right policies in place. We have to start getting the right control structures in place. Right now, there's no audit framework in place that says, hey, this is how you assess the security and the privacy of an artificial intelligence system. So getting governance is going to be absolutely key in this case. It absolutely starts from the top.
[00:00:27] Speaker A: This is KBKast. As a primary target for ransomware campaigns, security and testing and performance, risk and compliance.
[00:00:36] Speaker B: We can actually automate that, take that.
[00:00:38] Speaker A: Data and use it.
[00:00:41] Speaker B: Joining me today is Jenai Marinkovic, executive director and chairman of the board at GRCIE. Jenai is also a member of ISACA's Emerging Trends Working Group, and today we're discussing ISACA's latest report, The Promise and Peril of the AI Revolution. So, Jenai, lovely to have you back on the show after so long. I think you were episode 70, and there's 220 out there now in the wild, so you were kind of a while ago, but of course I remembered you. So I'm excited to have you back on the show. So welcome.
[00:01:10] Speaker A: Thank you so much. I'm really honored to be here and excited.
[00:01:14] Speaker B: Okay, so let's start with the report. Now, even the title sounds interesting: The Promise and the Peril of the AI Revolution. And this is really interesting because there's a lot of conversation going on about AI, so it's wonderful to have you talk through it. So let's start maybe with your view on the potential risks that enterprises could face with generative AI.
[00:01:40] Speaker A: Sure. So when we start looking at the report, and I really do invite everybody to read this white paper, it's a good solid read, it summarizes the different risks in about ten different areas. You have everything from societal risk, and that's understanding how things like generative AI operate in social contexts, and how those social contexts can lead toward things like amplification of bias or skewed data representations and so forth. And we've actually seen this manifest itself in areas like public discourse, as an example, a lot of the AI-driven echo chambers that are created around political dynamics, and deepfakes are a great example. Another big risk that is highlighted is around IP leakage and invalidation. Essentially, and we've all experienced this, right, or at least seen it, a lot of data and intellectual property is making its way into these systems, and so they're leaking a lot of trade secrets, patented methodologies and so forth, which also leads to things like invalid ownership. And that's who owns the data, who ultimately, at the end of the day, owns a lot of these AI-generated outputs, and we're seeing it change and challenge things like existing legal frameworks. So IP is a really big issue. Then cybersecurity and resilience, and that is that we're seeing attackers and threat agents adopt AI into their new attack vectors, so the models themselves are susceptible to adversarial attacks, and AI is also being used to execute attacks. We have weak internal permission structures, and that's, how do you ensure, beyond just traditional identity and access management, that only authorized people are ultimately able to access the key parts of the model, the key parts of the system? We're almost there. Skill gaps, and there we've certainly seen a big disparity around AI literacy within organizations, especially inside cybersecurity and GRC organizations. We see things like overreactions, because public misunderstanding can lead to things like disproportionate regulatory and societal responses. And so what ends up happening is you may end up putting overly stringent requirements into AI governance frameworks, and those reactionary policies can do things like stymie innovation. Intended and unintended use: there's a wide variety of behaviors that machine learning and artificially intelligent systems can exhibit, and depending on not just the training data but the way the data itself was collected, you can end up seeing that the models themselves tend toward bias, as an example. Data integrity, making sure that the data that underpins these models is accurate. And then finally there's liability: when you're dealing with these systems, who actually is responsible legally when it comes to things like operator error or algorithmic failure or shortcomings in the model itself? And so those are the ten major areas that are covered in the report.
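On the weak-internal-permissions risk, here is a minimal, deny-by-default sketch of role-based access control around the sensitive parts of a model pipeline. The role names, resources, and permission strings are illustrative assumptions rather than anything prescribed in the report.

```python
# Minimal sketch of role-based access control around AI system assets:
# only roles with an explicit grant may touch training data, model weights,
# or the serving endpoint. Names and permissions are illustrative assumptions.
from dataclasses import dataclass

PERMISSIONS = {
    "ml_engineer": {"training_data:read", "model_weights:write", "model_endpoint:invoke"},
    "analyst":     {"model_endpoint:invoke"},
    "auditor":     {"model_card:read", "training_data:read"},
}

@dataclass
class User:
    name: str
    role: str

def is_authorized(user: User, action: str) -> bool:
    """Return True only if the user's role explicitly grants the action."""
    return action in PERMISSIONS.get(user.role, set())

alice = User("alice", "analyst")
print(is_authorized(alice, "model_endpoint:invoke"))  # True
print(is_authorized(alice, "model_weights:write"))    # False: deny by default
```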
[00:05:01] Speaker B: Wow. Really appreciate your comprehensive response. I want to ask a tough question, just relating to the societal response. Would you say, Jenai, that people are dramatizing generative AI?
[00:05:16] Speaker A: And so when you say dramatizing, I want to make sure I fully answer the question. Meaning that we are exaggerating the issues with it? Most definitely.
[00:05:27] Speaker B: Like we're going to turn into Terminator land, that type of thing. You know, there are a lot of people going around with these statements.
[00:05:33] Speaker A: Sure. And I think that's a fair question. I think that AI has the potential to be abused in a very significant way. Do I believe that we're going to end up with Terminators? No, although you do certainly need to look at modern warfare. But the deal is that these algorithms underpin life safety systems, these algorithms underpin parts of our critical infrastructure, and they certainly can be abused. And so I think that, I know that there are waves of attacks for which we're not prepared. So whether that ends with the end of the human species, versus us having to learn how to adapt and deal with some pretty significant outages and issues, that part is certainly going to happen.
[00:06:16] Speaker B: Yeah, I hear your point. Wouldn't you say, though, that anything in life, if you look at it, could absolutely be abused? So, for example, going out in the sun is great, but if you go out in the sun for too long, you could get skin cancer, et cetera. So, of course, if you're abusing going out in the sun, there are ramifications. So do you think that people have jumped on this too quickly? Because again, you mentioned before that this has now been the call to put regulation and governance frameworks in place. But then, to your point before, you mentioned that those things perhaps could stunt innovation and things like that. So do you think people have reacted too suddenly to this? And I know that you and I are more technologists at heart, so maybe our view is a bit skewed in favor of the technology side of things. But if you were to zoom out, do you think people have just responded with a knee-jerk reaction?
[00:07:07] Speaker A: If the question is, did we adopt the technologies too quickly? There was no way to slow this down. Once this, especially large language models as an example, became a tool that was usable by the masses, once this began to be able to automate a lot of the knowledge acquisition, no, there was no way to slow this down. I think the slowness is on corporate environments, companies, businesses, not necessarily very rapidly adopting some of the processes and practices that they should put in place around AI, such as an AI acceptable use policy. There are key parts that you need to have in place around the AI when you bring it into the environment so that it's more controlled. To me, that's where you start to see the issues, not necessarily that people adopted it too quickly; that was going to happen.
[00:08:00] Speaker B: Yeah. So probably my question was more around the response. So, again, from my understanding, even when ChatGPT, for example, was launched, I'm pretty sure Italy banned it pretty rapidly. And of course, companies now say, you know, we're going to put these policies in place. Do you think people have reacted negatively towards ChatGPT, generative AI, et cetera, or do you think that's fair in terms of what was released? And, of course, people are reacting in accordance with where our world's moving towards.
[00:08:32] Speaker A: Well, that's a fair question. I want to delineate between what happened when it first came out versus where we are today. When it first came out, there was that overreaction of shutting it down, preventing access. And in some of those cases, it wasn't necessarily an overreaction, because those companies that did shut it down experienced intellectual property loss.
But now I'm seeing it open up more and more. As an example, I'm seeing more organizations allowing their users to use Bard. I'm seeing more and more organizations encouraging and incentivizing people to use ChatGPT, you know. So I would say at the beginning, the response may have been one way, but I'm certainly seeing that open up now.
[00:09:12] Speaker B: Yeah, sure. And it's like with anything, things take a little bit of time for people to get used to. So let's move on now. And for the record, I didn't mention this at the start, so I apologize: we will be linking a copy of the report in the show notes. So if you want to do a little bit more digging, we'll link it in our show notes. So there is a statement in the report that I read, which I'll read out loud, which was: "AI can quickly amplify a controlled risk to a chaotic level, potentially derailing an unprepared business." So talk me through this. What is a chaotic level?
[00:09:50] Speaker A: Sure. So when the report refers to amplifying a controlled risk to a chaotic level, it's basically pointing towards the inherent characteristics of AI algorithms, specifically those based on machine learning and deep learning. When you're relying on those, it can lead to unpredictable, nonlinear, and at times exponential escalation in risk scenarios. So there are a couple of key examples of where you might see this happening. One is the scale and speed of AI operations. There are so many processes happening concurrently, and there's so much computational power going into these systems, that it's far beyond any type of human capability. And so once an error proliferates in that environment, it can spread extensively and rapidly.
A great example of this would be an AI-driven trading algorithm, where one small error can cause it to go rogue. And then next thing you know, you end up with a significant market disruption, and this all happens before a human operator can actually intervene. Another area where you can see a controlled risk rise to a chaotic level is around black box systems, so complexity and opacity is what it is. When you start looking at certain types of AI models, they're oftentimes considered, in quotations, black boxes, and it's because they're incredibly complex. In some of these models, they've got millions of interconnected nodes. And so when a minor bias makes its way into the model itself, it can end up manifesting in these large-scale and unforeseen ways. And one of the problems, especially when you start looking at bias, is that no matter what type of controls you put in place to reduce bias, the model always tends back towards that initial bias. So where that goes is that when you're looking at these uncontrolled risks, the bias may become uncontrolled at any point in the lifecycle of that model. Another place is data and feedback loops. These AI systems are constantly learning and constantly evolving, and they create these feedback loops. And if you have a bias or an error that makes its way into the model, it can magnify. A great example would be a recommendation system. When you build these recommendation systems, the initial bias might go towards a specific kind of content, which means that there's a potential that the model may skew towards a specific type of recommendation, which is actually bad, because it ends up overexposing the user to a very narrow slice of content. And when you see these echo chambers, it can end up leading to things like radicalization. Then there are interconnected dependencies. These are incredibly large interconnected ecosystems, and so a risk in one system can quickly cascade into the risk of a bigger system. As an example, a data quality issue or a security flaw can rapidly transmit itself across networks and ultimately lead to these systemic failures. And then finally, there are adversarial attacks and model exploitation.
So we've talked about this a little bit before. Attackers can go after the models themselves, or they can use these systems as part of their attacks. And so if the attacker has any type of knowledge of the inner workings of that system, they may be able to exploit it by putting in deceptive input, causing the algorithm itself to misreport or leak information as well. And so those are five areas that really walk you through how a controlled risk rises to a chaotic level.
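To illustrate the feedback-loop example above, here is a toy simulation, a sketch under assumed numbers rather than anything from the report, of a recommender that retrains only on its own click logs. A slight initial preference for one content category quickly becomes a dominant share of what the system recommends.

```python
# Toy simulation of a recommendation feedback loop: the model recommends the
# category it currently believes users prefer, users mostly click what they
# are shown (exposure bias), and the model "retrains" on those clicks.
# All category names and probabilities are illustrative assumptions.
import random

random.seed(0)

CATEGORIES = ["news", "sports", "politics"]
# Initial learned preferences: a slight skew towards "politics".
belief = {"news": 0.33, "sports": 0.33, "politics": 0.34}

def recommend() -> str:
    # Always surface the category the model believes is most preferred.
    return max(belief, key=belief.get)

def simulate_click(shown: str) -> str:
    # Users click what they are shown 80% of the time, otherwise something random.
    return shown if random.random() < 0.8 else random.choice(CATEGORIES)

for step in range(1, 6):
    clicks = [simulate_click(recommend()) for _ in range(100)]
    # Retrain on the click log alone: the skew feeds back into the model.
    for category in CATEGORIES:
        belief[category] = clicks.count(category) / len(clicks)
    print(f"step {step}: learned preference for 'politics' = {belief['politics']:.2f}")
```

Running it shows the learned preference for "politics" jumping from roughly a third to well over 80% within a step or two and staying there, which is the narrowing-of-content effect described above.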
[00:13:40] Speaker B: Wow. Really appreciate the detailed examples. I always love a detailed example. So going back, I want to dig into a few of your examples specifically. You mentioned before, bias getting into the model. So how can these models, these algorithms, remove the bias? And how do you know if there's bias?
[00:13:58] Speaker A: Wow, those are amazing questions. So there are so many different places that bias makes its way into the model, starting with the construction of the team. If you're designing an application, and the application is going to service a particular group of people, and you do not have any of those people represented in the entire design, implementation, and execution of that model, bias is going to make its way into the system. So first and foremost, you start by looking at the team construction. The next is the way that you label data. There's bias in labeling, and that's pretty important. So making sure that you have a set of labeling standards that help people understand their own biases when they're labeling is a very key thing as well. So, yeah, there are a lot of different areas that you can kind of walk your way through.
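As one hedged, simplified illustration of catching labeling bias before it reaches a model, the sketch below compares positive-label rates across labeling groups (for example, per annotator or per demographic slice) and flags a large gap for review. The field names and the ten-point threshold are assumptions for the example, not a standard from the report.

```python
# Simplified labeling-bias check: compare the positive-label rate per group
# and flag large gaps for human review before the data is used for training.
from collections import defaultdict

labeled_examples = [
    # (group, label) pairs; in practice these come from your labeling tool's export.
    ("annotator_a", 1), ("annotator_a", 1), ("annotator_a", 0), ("annotator_a", 1),
    ("annotator_b", 0), ("annotator_b", 0), ("annotator_b", 1), ("annotator_b", 0),
]

def positive_rate_by_group(rows):
    counts = defaultdict(lambda: [0, 0])  # group -> [positive labels, total labels]
    for group, label in rows:
        counts[group][0] += label
        counts[group][1] += 1
    return {group: pos / total for group, (pos, total) in counts.items()}

rates = positive_rate_by_group(labeled_examples)
gap = max(rates.values()) - min(rates.values())
print(rates)
if gap > 0.10:  # more than a ten-point spread between groups
    print(f"Possible labeling bias: positive-rate gap of {gap:.0%} between groups")
```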
[00:14:51] Speaker B: So what happens if someone doesn't label something correctly, then? Does that mean we have this algorithm that is 100% biased?
[00:14:57] Speaker A: No. And I want to make sure that I hear the question. So if the team itself is not diverse, was that the question, are you saying that there would be bias?
[00:15:06] Speaker B: Yes.
[00:15:06] Speaker A: The answer is yes.
[00:15:07] Speaker B: Okay, so then what about your first example, on the trading algorithm that could lead to a chaotic level? Is this something that you think is definitely going to happen? All these examples, do you think they potentially could become a thing if they're not already a thing? I don't know of every single case, but I'm just drawing conclusions. So surely, with the example that you gave, there has to be something that goes out of control here.
[00:15:34] Speaker A: In all five of the areas that I gave, things can and will go out of control. These are specific risks that were highlighted through the report and through the analysis. So 100%, you would expect these five different areas to manifest themselves as chaotic risks.
[00:15:54] Speaker B: So how do we put the genie back in the bottle? I guess it's out now, but how do we control these risks? I think the problem with this is, and I don't expect you to have every answer, it's just that there are a lot of uncharted waters, a lot of things that the industry across the globe doesn't really know. I think people are still analyzing it, et cetera. So again, how do we control this genie?
[00:16:21] Speaker A: Yeah, well, there are a couple of things, and again, I'm going to keep speaking from a corporate perspective, what companies themselves can do, but there are a couple of key things. One is that you've got to look at robust AI governance structures; you've got to look at establishing an AI accountability framework that talks about things like transparency and ethical usage. This is a key one. Enhance model explainability. There's this really amazing concept in AI called explainability, where at its most basic, every single part of the entire lifecycle of an artificially intelligent ecosystem should be explainable to the layperson, every single piece, and explainable in multiple dimensions. I should be able to explain it from a roles and responsibilities perspective. I should be able to explain it from an accountability perspective. So making sure that you design your systems in a way that they're explainable is going to be absolutely key. Rigorous testing and validation: making sure that you're testing systems for vulnerabilities, biases, and performance is going to be really key. And in fact, I highly recommend people look at MITRE and their ATLAS framework. Very much like the MITRE ATT&CK framework, it breaks down all of the different techniques and tactics that are involved in attacking machine learning systems.
You also have, what's it called, the OWASP Machine Learning Top Ten. So they've got their top ten machine learning vulnerabilities, and they also identify how you can remediate and mitigate those. So making sure that you understand vulnerabilities inside artificially intelligent systems, and that you test for them, is going to be very key. Building in redundancies and failsafes is going to be very important, so that if you start to see these anomalies, you can include a manual, human override and make sure that those mechanisms are ultimately in place. And then continuous monitoring of these systems: because they're constantly learning, you have to constantly monitor them, both before deployment and after deployment, while they're learning and so forth, to see whether or not you've identified any unexpected behavior and results. And so those are just a couple of key things that you can do to mitigate the areas that we covered before.
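As a hedged sketch of what continuous monitoring with a manual override could look like in practice, the snippet below tracks a rolling accuracy window against a pre-deployment baseline and routes decisions to human review when performance degrades. The thresholds, window size, and function names are assumptions for illustration, not recommendations from the report or from MITRE ATLAS.

```python
# Sketch of continuous monitoring plus a human-override failsafe: compare a
# rolling accuracy window against the validation baseline and fall back to
# manual review when the model degrades. All thresholds are illustrative.
from collections import deque

BASELINE_ACCURACY = 0.92    # measured during pre-deployment validation
MAX_DEGRADATION = 0.05      # tolerate at most a five-point drop
window = deque(maxlen=500)  # rolling record of whether predictions were correct

def record_outcome(correct: bool) -> None:
    window.append(correct)

def model_is_healthy() -> bool:
    if len(window) < 100:  # not enough recent evidence yet
        return True
    rolling_accuracy = sum(window) / len(window)
    return rolling_accuracy >= BASELINE_ACCURACY - MAX_DEGRADATION

def handle_request(features):
    if model_is_healthy():
        return automated_decision(features)   # normal automated path
    return send_to_human_review(features)     # failsafe: manual override

# Placeholders standing in for your own model-serving and escalation logic.
def automated_decision(features):
    return "automated_decision"

def send_to_human_review(features):
    return "queued_for_human_review"
```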
[00:18:46] Speaker B: So what happens if a company can't explain it in all of the dimensions? Are there ramifications? Are there repercussions? I'm assuming that there are.
[00:18:54] Speaker A: It's not so much that there are repercussions. It means that you're not necessarily going to understand, say, for instance, a risk. You're not necessarily going to understand how to apply a specific control to a risk. If one of the controls that I have is that I have to eliminate bias, but I don't have a way of explaining that, then that's a problem. But there's a bigger problem when it comes to explainability and AI, and that's that the EU just recently came out with its AI Act. It is an impressive and comprehensive set of documentation around everything that you can think of around protection, defense, and privacy when dealing with artificially intelligent systems. And there are a lot of requirements in that law that speak to explainability. So understanding explainability, and making sure that you design your systems from beginning to end such that they're explainable, is going to be required by law if you're dealing with a specific type of information.
[00:19:53] Speaker B: So let's maybe dive into the report a little bit more now. We're obviously not going to cover all of it, because there's a lot of stuff in there, but maybe one of the insights that I saw, and the title was "diving in even without policies." So what does that mean? And then what does that look like?
[00:20:11] Speaker A: Yeah, we were talking about that a little bit earlier. That is just investing in this technology without necessarily having the training or the capabilities in place to deal with the risks, without having actually defined the risks, without having identified what appropriate, acceptable use of the technology looks like. And there are a couple of key statistics that underpin your question. One is that 54% of organizations report that there's no AI training at all, and then 41% identified inadequate attention to ethical AI standards, which ultimately suggests that there's a gap in policy development concerning ethical implications. So almost half of the organizations investing don't have any ethical standards, and over half of the organizations offered absolutely no training at all on this incredibly powerful technology. And so there were a couple of key areas, when you start diving in and really understanding what is meant by "without the policies." One was this rapid integration of tech without having any type of governance framework. Look, in order to maintain competitive advantages, if you had to do very complex analysis very quickly, you were oftentimes pushed into using these technologies. But the problem is that you didn't have any formal governance in place. And so that means everything from potential inconsistencies in model performance, to untracked changes to the algorithms themselves, to failure to diagnose significant issues or biases, all of that gets lost because you don't necessarily have a governance framework in place. Another one is standard operating procedures. Oftentimes organizations are not actually documenting this. We don't have formal mechanisms in place that show what the models access, how they deal with specific types of information, or oftentimes even classify the type of information the system is handling. So there are really no SOPs in place. As well, we have unchecked model deployment, so organizations will dive into AI without any policies around model validation or the deployment strategy for the model itself. And so what ends up happening is that the models themselves aren't necessarily fully tested. And when you start looking at models that are making decisions for you, financial decisions, health care decisions, like I said before, life safety decisions, this becomes incredibly significant. And then ethics: because there is no real governance in place, because we just dove in, oftentimes you're ignoring the ethical and the privacy considerations. You're not necessarily looking at the data that you're handling, and from an ethical perspective, you're not looking at what the societal impact of the application you're putting in place actually is. Is it actually helping the people that it needs to help, or is it causing issues? We're not necessarily tracking that or making sure that we're using it in an ethical way. And then there's just an overall lack of lifecycle management. Because we don't have full lifecycle management deployed, from the management of the platforms to the ongoing maintenance of the models themselves, at the end of the day the models start to decay and you end up with significant issues with your data.
So those are just a couple of key areas that tie into understanding that when we say diving in without policies, it really means that we didn't think about how we were going to govern this technology as it came into our environment.
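To show how even a lightweight governance artifact can catch the unchecked-deployment problem described above, here is a sketch of a minimal model record plus a pre-deployment gate. Every field name and rule is an assumption made for illustration; it is not a template from the ISACA report.

```python
# Minimal governance gate: a model must carry basic lifecycle metadata
# (owner, data classification, validation and bias-review status) before it
# may be deployed. Field names and rules are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    owner: str
    data_classification: str            # e.g. "public", "internal", "restricted"
    validated: bool = False             # passed documented performance testing
    bias_reviewed: bool = False         # passed a documented bias review
    approved_uses: list = field(default_factory=list)

def deployment_gate(record: ModelRecord) -> list:
    """Return a list of blocking issues; an empty list means the model may ship."""
    issues = []
    if not record.validated:
        issues.append("model has not passed validation testing")
    if not record.bias_reviewed:
        issues.append("no documented bias review")
    if record.data_classification == "restricted" and not record.approved_uses:
        issues.append("restricted data but no approved uses documented")
    return issues

candidate = ModelRecord(name="credit_scoring_v2", owner="risk-team",
                        data_classification="restricted")
print(deployment_gate(candidate))  # three blockers -> do not deploy yet
```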
[00:23:59] Speaker B: So going back to the ethical standards side of things, you said, is it actually helping? But isn't that sort of subjective? Anyone's going to say, oh, well, of course it's helping, when someone looking at it objectively might say, well, not really. So how does that look? And can you define that a little bit more, like with an example of an ethical standard?
[00:24:18] Speaker A: Sure.
First and foremost, you have to be able to define whether or not it's actually helping the population that you're addressing and targeting. So by having a standard in place that says, hey, this is how we look at the societal impact, this is how we prove, from a quantitative perspective, that it's actually helping. For the organization, doing an impact analysis and a risk assessment, those are all things that you can use to determine whether or not, at the end of the day, it's causing harm. So there's actually a process that you can build around that, an ethical standard. And I'll give you a good example of where you'll see ethical standards: you'll see them in areas like your acceptable use policies. Ethical standards are going to talk about your usage guidelines, so what are the permissible and prohibited uses of AI? And I'm going to go right back to the EU AI Act: it specifically identifies certain things that you are and are not allowed to do with, say, for instance, biometric data, as an example. So in an ethical standard, you have to be very clear in terms of what is allowed, what isn't allowed, what's prohibited in terms of company policy, and what's prohibited in terms of law. You need to be able to define that. An ethical policy would also define what the acceptable data sources are, and the data handling practices for those data sources. So as an example, are we using data that has racial implications in terms of the way that it was acquired or collected?
Are we 100% sure about the lineage of the data that we're acquiring? So understanding the history of the data, the way that it was collected, the way that it was handled, and then identifying the acceptable policies around that is absolutely key. Making sure that you define what's illegal, what's unethical, what's unauthorized in terms of the use of AI. There's one thing in terms of developing it in your organization; there's another thing in terms of the way that you use it as a tool in the environment as well. Bias and fairness: you put guidelines in place so that you protect against, prevent, detect, and address bias in the models. And then I'll just end on this one, because there's a lot, but transparency and accountability: you make sure that, as part of the policy, you've defined what the expectations are for transparency across your entire AI operations, how you intend to document transparency, and how you make sure that you've identified very clear lines of accountability for your algorithm, so that at the end of the day, like I mentioned before, the algorithm does what it's supposed to do and does not cause harm to either the group that you're targeting or the greater society.
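As one hedged example of how a "prevent, detect, and address bias" guideline gets operationalized, the sketch below computes a disparate-impact style ratio between a protected group and a reference group in model decisions. The 0.8 threshold borrows the common four-fifths heuristic; the group labels and data are made up for illustration.

```python
# Sketch of a disparate-impact check on model decisions: compare favorable-
# outcome rates between groups. The 0.8 cutoff echoes the common "four-fifths"
# heuristic; all data and group labels here are illustrative assumptions.

def favorable_rate(decisions, groups, target_group):
    selected = [d for d, g in zip(decisions, groups) if g == target_group]
    return sum(selected) / len(selected) if selected else 0.0

def disparate_impact(decisions, groups, protected, reference):
    """Ratio of favorable-outcome rates: protected group versus reference group."""
    reference_rate = favorable_rate(decisions, groups, reference)
    protected_rate = favorable_rate(decisions, groups, protected)
    return protected_rate / reference_rate if reference_rate else float("nan")

# 1 = favorable decision (e.g. loan approved), 0 = unfavorable.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

ratio = disparate_impact(decisions, groups, protected="b", reference="a")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Flag for review: outcomes for group 'b' fall below the four-fifths threshold")
```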
[00:27:08] Speaker B: So then I also just want to go back a moment, Jenai, and talk about the training side of things, which is a great point. But then would you also argue that, and of course, AI was around before ChatGPT became more mainstream and ubiquitous in, what, November, December last year. But if we sort of go from that time frame, wouldn't it be hard to develop training because it's still relatively new, right? Like, of course AI was around before this, but let's focus on this being more mainstream now and more ubiquitous. So I'm not surprised that there's no, you know, rigorous training already.
[00:27:43] Speaker A: So I think I was surprised that there was no training in place once the technology came in. I mean, think about it. When you're inside of a company, they train you if a new social collaboration platform comes in. You brought an artificially intelligent ecosystem into your environment, and there's no training?
I was definitely surprised by that metric. But look, at the end of the day, there's opportunity there, and so companies can start now by talking about the basics. And it really just depends on your role. If you are responsible for at all the secure development and design of these algorithms, there's going to be one set of role based training that you receive. But if you're a general user that's using a large language model as part of your day to day responsibility, there's a separate set of training that you would get, and then I would take that even a step further, that people need to understand when they are being targeted by an artificially intelligent system. So when I'm being attacked, we're not going to be able to look at the attacks of old and see misspellings and things that were, as an example, a trigger for a phishing email. They're going to be far more sophisticated. So understanding when you are being targeted by a system that's being backed by an artificially intelligent system is another form of training that people are going to have to get, as well as data handling. So how do I handle the data going into and coming out of these systems? So, no, I would say that this is a really unique opportunity to get AI based training across all of the different layers of your organization based on their roles.
[00:29:20] Speaker B: So why do you think there was no training at all? You're sort of saying, do you think it's just that people didn't think about it, they didn't have time, they forgot about it. Was there any sort of conclusions that you would draw upon that insight?
[00:29:33] Speaker A: I think it's a combination of all of the above. I think people weren't sure how to train and what to train on. I think there's potentially a fear that the training had to be very technical, that in order to train the workforce, we needed to get into the mathematics, which obviously isn't necessarily the case. I believe that everybody's still trying to figure all of this out. There's one law; there aren't really any AI-based frameworks out there. You've got some risk playbooks, you've got some really good guidance and recommendations, but not necessarily an end-to-end framework for how you design and operate a secure, compliant AI system. So because the industry itself hadn't necessarily come up with a framework for it, companies weren't able to keep up. At a minimum, I believe it comes down to data classification. It comes down to that, because we struggle with the way that we classify the importance of data. It's oftentimes difficult for us to train on things like artificial intelligence, because you have to be able to tell people what they can and cannot do with different types of information, which means that you have to have categorized and classified that information as well. So I think that data governance problems inside of a company are also triggering these problems.
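On the data-classification point, here is a small sketch of classification tiers mapped to what staff may do with each tier in generative AI tools. The tier names and rules are assumptions made for the example; the point is simply that training on "what can go into a public LLM" only works once data has been classified.

```python
# Sketch tying data classification to permitted AI use. Tier names and rules
# are illustrative assumptions, not an organizational standard.
CLASSIFICATION_RULES = {
    "public":       {"external_llm": True,  "internal_llm": True},
    "internal":     {"external_llm": False, "internal_llm": True},
    "confidential": {"external_llm": False, "internal_llm": True},
    "restricted":   {"external_llm": False, "internal_llm": False},
}

def may_submit(classification: str, destination: str) -> bool:
    """Check whether data of a given classification may be sent to a given AI tool."""
    rules = CLASSIFICATION_RULES.get(classification)
    if rules is None:
        return False  # unclassified data: deny by default
    return rules.get(destination, False)

print(may_submit("public", "external_llm"))        # True
print(may_submit("confidential", "external_llm"))  # False
print(may_submit("mystery_data", "internal_llm"))  # False: not yet classified
```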
[00:30:54] Speaker B: There's also another insight in terms of the headline, which was risk and exploitation concerns, and from my understanding, 41% are saying that there's not enough attention being paid to the ethical standards, the thing that I asked you about before. I mean, 41% is pretty high.
[00:31:12] Speaker A: Yeah, it is. And it's because, again, when you see something go so viral, when the adoption rate, especially for the large language models, was so rapid, seeing such a powerful technology come out without clear guidelines in terms of what is ethical and what is not, you have a bigger problem. And that means that, again, these are learning systems, so at the very beginning, they learn those biases. And it's really hard to retrain a system so that it can operate ethically. If I've already trained an algorithm on data that's tinged with, say, for instance, again, racial disparity in terms of the way that it was collected, it's too late. Trying to untrain the model so that it no longer learns based on that, it's serious; it's not going to be.
[00:32:09] Speaker B: Able to do it.
[00:32:10] Speaker A: So over time, it has learned. Now this bias has essentially hard-coded itself into the model. So, yeah, not having ethical standards in place means that all of this data that's being ingested, all these learnings and insights, are based on problematic information labeled incorrectly.
[00:32:30] Speaker B: So there was another insight then, just off this point: job displacement, and then sort of the skills gap. Now, look, I understand. I interviewed someone, I don't know, two or so years ago, about a book that I read around AI coming into our society and how it would displace people, but not in the way that people think about AI, sort of in inverted commas, taking people's jobs or whatnot. So talk me through this, because, look, the way I sort of see it is, I do understand, but it's also going to let people who are maybe doing more manual-related jobs, if AI can do those, move into more strategic jobs, for example. But if you go back even hundreds of years, we had to evolve. Blacksmiths, for example, are basically not a job anymore. So of course we've evolved since then. But do you think people's fear comes from, well, we don't know what we don't know? We don't know what the future looks like, because when we look back, even when the Internet came out, people were worried about that. But look how many jobs it's generated for our society. So do you think this whole worry about job displacement is a myth, or do you think it's a fact?
[00:33:38] Speaker A: So there are jobs right now that are being automated away. I don't want to understate that; there are people whose positions were to do a lot of the manual work that can be done by these algorithms. However, what I've also noticed, and this actually goes across every single job spectrum there is, including, say, for instance, people who are working on shop floors or people who are operating in blue- or pink-collar jobs and so forth, is that these systems can augment the jobs that they do. If I'm a field service installer and I have an artificially intelligent system next to me, I can start asking it questions about the installation I'm about to do, about the wire that I see, about the design or the problem that I'm having.
[00:34:28] Speaker B: Right.
[00:34:28] Speaker A: So I'm able to actually problem solve with these systems faster by having, in this case, a large language model alongside me. So from what I'm seeing right now, it is transforming people's jobs. And the key is that people need to be able to adapt very rapidly in terms of their skills. And one of those skills is understanding how to work alongside an artificially intelligent system.
[00:34:53] Speaker B: There's a skill to it.
[00:34:54] Speaker A: It's not very easy to all of a sudden, as part of your job, work alongside an AI. And so understanding how to talk to it, how to phrase the questions, how to actually critically think to get to the answer that you're targeting, that's actually a taught skill. And so those are the types of skills that I'm seeing kind of missing. But at the end of the day, no, I would argue that AI is transforming a lot of the jobs. Those jobs are still going to be here. They're just transforming those jobs. But there is a class of position that is absolutely getting automated away.
[00:35:28] Speaker B: So where do you think we go from here, Jenai? Obviously, if you conduct this report next year, those stats will probably change, because again, this is still relatively new territory for people. I understand that. But what's your view on AI, generative AI, how people are working with it, interacting with it, training, all of the things? Just interpret that question however you want. I'm just curious, because again, we don't have all the answers, but it's about having conversations like this that perhaps can get us closer to figuring out the best way forward.
[00:36:01] Speaker A: Yeah, and thank you for allowing me to answer that question. So for me, I'm incredibly excited, because I see this incredible opportunity for people to transform their lives because they're able to get to their answers a lot faster and ultimately solve problems a lot quicker. So to me, this is a very exciting time. But when I started looking at the report, there were three big gotchas that kind of scared me. One was, again, the high usage of AI without having adequate governance in place. That's dangerous. So we've got to start getting frameworks in place. We've got to start getting the right policies in place. We have to start getting the right control structures in place. Right now, there's no audit framework in place that says, hey, this is how you assess the security and the privacy of an artificial intelligence system. So getting governance in place is going to be absolutely key, and it absolutely starts from the top. Underestimation of the amplification of risks, I think, is a very key one. You dove in on the controlled risk becoming a chaotic one. Understanding that these things happen at velocity and scale requires us to actually train our people to respond in a different way, very similar, on the cybersecurity side, to the way we trained people to deal with worms, which is all predicated on speed. The way that you train your teams to be able to detect and stop these types of errors.
[00:37:34] Speaker B: Right.
[00:37:34] Speaker A: These uncontrolled errors, is going to be key. And then the ethical training, which I would say at this point is neglected. The fact that we have ethical training neglect around these systems is concerning, because again, I'm taking those ethical problems and I'm imbuing them into the models themselves. But I'm going to end with this: to me, there are some really great opportunities here when it comes to training. All organizations have this really good opportunity to start looking at, how do I retrain my existing people so that they can operate with artificially intelligent systems? How can I make sure that my people are capable of transitioning from one position to the next, and understand how to use AI to do that? If one day I'm in one position, and another day I need to be able to take on another position, which is our future, how am I able to use these systems to rapidly upskill myself and transition into these new roles?
And then on the cybersecurity side, we're just scratching the surface in terms of the way these attacks are executed. And so really diving in and understanding what an attack looks like against an artificially intelligent system, on the defense team side, making sure that they understand how to build the patterns, to be able to detect that. And then from a defense and a forensics perspective, how do I respond to these types of attacks? How do I contain them? And then how do you do a forensics investigation on an artificially intelligent system that's been breached? All of those are opportunities for us to learn. And then on the GRC side, understanding control structures and how to analyze risk with AI systems are all opportunities for us to be able to learn and transform our jobs.
[00:39:22] Speaker B: So, Jenai, if there's one thing super quickly you'd like to leave our audience with today, what would that be?
[00:39:28] Speaker A: You know what I would do? I would read the EU AI Act. We translated it to about 2,000 lines. There are a lot of documents out there that kind of describe it, but I absolutely would sit down and read the EU AI Act, because it is the most comprehensive document around everything that you would need to do in terms of protection, defense, and privacy around AI systems. So that would be my recommendation: start there.
[00:40:01] Speaker B: Thanks for tuning in. For more industry leading news and thought provoking articles, visit KBI Media to get access today.
[00:40:10] Speaker A: This episode is brought to you by.
[00:40:12] Speaker B: Mercksec, your smarter route to security talent.
[00:40:15] Speaker A: Mercksec's executive search has helped enterprise organizations find the right people from around the world since 2012. Their on-demand talent acquisition team
[00:40:25] Speaker B: Helps startups and midsize businesses scale faster and more efficiently.
[00:40:30] Speaker A: Find out [email protected] today.