Episode 254 Deep Dive: Bob Huber | Deep Fakes and Election Interference: Tackling the Threat of Manipulated Content

KBKast

May 03, 2024 | 00:42:06

Show Notes

In this episode, we’re joined by Bob Huber (Chief Security Officer and Head of Research – Tenable) as he delves into the pressing issue of misinformation on social media. From the impact on critical situations like elections and natural disasters to the proliferation of deepfake technology, we explore the difficulty of discerning authentic content. Bob shares insights on the challenges of identifying and combating misinformation, emphasizing the need for international norms and proactive measures.

Robert Huber, Tenable’s chief security officer, head of research and president of Tenable Public Sector, LLC, oversees the company’s global security and research teams, working cross-functionally to reduce risk to the organization, its customers and the broader industry. He has more than 25 years of cyber security experience across the financial, defense, critical infrastructure and technology sectors. Prior to joining Tenable, Robert was a chief security and strategy officer at Eastwind Networks. He was previously co-founder and president of Critical Intelligence, an OT threat intelligence and solutions provider, which cyber threat intelligence leader iSIGHT Partners acquired in 2015. He also served as a member of the Lockheed Martin CIRT, an OT security researcher at Idaho National Laboratory and was a chief security architect for JP Morgan Chase. Robert is a board member and advisor to several security startups and served in the U.S. Air Force and Air National Guard for more than 22 years. Before retiring in 2021, he provided offensive and defensive cyber capabilities supporting the National Security Agency (NSA), United States Cyber Command and state missions.


Episode Transcript

[00:00:00] Speaker A: At first glance, it's a simple question of, like, what's the threat of AI and deepfakes? And when I thought about that, I was really thinking about the ability to spread misinformation or interfere or influence. Right? And then the more I spent thinking about the question, I was like, you know, it's deeper than that. It actually affects my ability to trust the information I'm receiving. The misinformation, that's just the effect of what these things could have in general. But now I find myself questioning almost everything I see now because I'm trying to figure out myself, like, is this real or is this generated? Is it synthetic, manipulated? And I think that's a deeper issue for us. [00:00:39] Speaker B: This is KBKast. [00:00:41] Speaker A: Are they completely science as a primary. [00:00:43] Speaker B: Target for ransomware campaigns, security and testing. [00:00:46] Speaker A: And performance, sustainability, risk and compliance? [00:00:49] Speaker B: We can actually automatically take that data and use it. Joining me back on the show is Bob Huber, CSO and head of research from Tenable. And today we're discussing the threat of AI-generated deepfakes in general elections. So, Bob, thanks for joining and welcome again. [00:01:07] Speaker A: Thank you so much for having me. [00:01:09] Speaker B: Now, this is a really big topic, and I was super excited to do this interview. And I'm not just saying that because, you know, AI-generated deepfakes are emerging pretty rapidly. So let's sort of maybe start with that first. What is the sort of threat? And I ask that, and I know it sounds basic, but maybe people aren't as acutely aware of the threat as someone like you. [00:01:31] Speaker A: Yeah, at first glance, it's a simple question of, like, what's the threat of AI and deepfakes? And, you know, when I thought about that, I was really thinking about the ability to spread misinformation or interfere or influence. Right. You're moving someone possibly to action. And I really thought that was it. And then the more I spent thinking about the question, I was like, you know, it's deeper than that. It actually affects my ability to trust the information I'm receiving. That's what it really comes down to. The misinformation, that's just the effect of what these things could have in general. But now I find myself questioning almost everything I see now because I'm trying to figure out myself, like, is this real or is this generated? Is it synthetic, manipulated? And I think that's a deeper issue for us. [00:02:14] Speaker B: Yeah, you're right. Now, I'm going to start with something super basic. So Instagram, there's conversations floating around, you know, like, don't believe everything we see on the Internet, which is true. There's certain applications out there where you could make yourself look like a Victoria's Secret model, et cetera, even if you didn't look like that in reality. So that's, to some degree, the start of deepfakes starting to sort of come into social media, for example. But then yesterday someone sent me an AI-generated sort of video of Joe Biden saying something. And again, it was obviously not real. It definitely looked fake. It didn't look legit or real. But that's just the start of where it's going to get to. So obviously there's a spectrum to this and we've seen it creep in over the years, but now it's getting to the stage where, to your point, we don't know what's real, what's fake. How does that sort of look? Are we going to be questioning everything? 
Are people going to be questioning this interview? Is Bob Huber really Bob Huber? Is that really Karissa Breen? Who knows? [00:03:15] Speaker A: KB, I was just wondering if it was you myself. So I get it. I think, you know, when it comes to deepfakes in general, if it's something that, like, you can see, there's a little bit higher chance that you can actually discern something's not right. Like a glitch here, a glitch there. But I think you nailed it. The technology for generating this, unfortunately, has improved so much over the past few years that even that's gotten harder to discern. That's my concern. Even as somebody who's a security and risk management professional, it's difficult for me to discern that. Right. Unless it's just blatantly obvious. But if you're taking real clips and slightly manipulating them, that becomes, I think, a challenge for most folks in the audience. And, you know, I would hope technology companies would actually start building something in that makes it easy to identify this stuff. But that only goes so far. So, for instance, if it's a robocall related to an election, you know, and that's not going through a social media platform or any type of filtering software, that's coming straight to you, and now you don't even have anything to see. So I think it would be even harder if you had a robocall coming in that was some type of deepfake. It'd be really hard to discern that, in my opinion. [00:04:22] Speaker B: I want to get into the elections in just a moment. But before we do that now, I actually had your senior researcher from Tenable, Satnam Narang, on the show. I think we were talking about scammers back in the day and all of these romance scammers that exist. One of the things we were saying was around identifying the glitch. So, for example, I think the exact example Satnam gave was, you know, if you're trying to scam someone out of their whole life savings in some romance scam, and they're saying, like, I'm having dinner tonight, but in the background it was, you know, obviously daylight. It's a very big glitch in sort of the story, but those things are more obvious and more apparent. But as you sort of were saying, it's going to get harder. So then what does that sort of mean? Are we going to be questioning everything? Doesn't it become exhausting? Like, how does that look, though, long term? What does that mean for sort of social media giants then as well, Facebook and friends? Like, are they going to be accountable? But then they say, no, we're not accountable. I know there's a lot of questions in there. I just think it's a big topic and I don't really know that people have the right answers for this. Hoping you do. [00:05:24] Speaker A: Yeah, I would agree with that, I think. Well, first of all, anything you come into as far as information, you bring your own bias. And unfortunately, now there's enough sources of information out there that I can pretty much filter down to only the things I'm interested in. So, you know, if something that's manipulated or synthetic or a deepfake aligns to my bias or my interests, I'm probably not going to pay attention as closely. Right. Because I already consider that probably a trusted source of information for myself. It aligns to the messaging that I would hope to see or expect to see. So I think I'm not going to question it as much. 
Now, when it comes to the larger tech companies, there's certainly some coalitions out there, you know, trying to determine provenance and authenticity, with Microsoft and Adobe and Google and others. But I will tell you, as somebody who feels pretty well informed about this, I don't know what that means to me. Like, what am I looking for that's going to tip me off to tell me that, hey, this might not be real, right? And I consider myself to be pretty tech savvy, but how would I explain that to my mom and dad and my grandparents? I have no idea how I would go about doing that. And they're all on a computer just like me, but me trying to explain, like, look for the watermark in the background or some other type of content protection or notice that thing might not be real, I don't know how I would even try to explain that to somebody. [00:06:37] Speaker B: What are we looking for at the moment? So if you're scrolling through social media, how would you delineate between, oh, that's clearly fabricated, and, no, that's real? Or are you saying you really can't at this stage? [00:06:48] Speaker A: Yeah, I think that's the issue. You know, like, I know Meta, they had some technology out there, CrowdTangle, at one point that was, you know, attempting to identify this type of content and there'd be some notification. They've since switched the technology they're using. And to be honest with you, I don't know what that new technology looks like or how it's going to appear in front of me so I can discern whether this is legitimate content or not. And now really, that's like what I call the coalition of the willing, like people who are actually attempting to identify this and relay that to their viewership and their users. That's not the majority of organizations. Right. Because of the cost to do that. That's a cost to business, to try and identify this content, tip off users or try some notification or watermarks. So I think that becomes difficult. And then certainly on the social media side, I think where it becomes difficult is if you have, and I'm air quoting here, trusted sources of information, whatever that means to you, you know, there's some assumption, at least on my part, that, hey, they've corroborated the information. Right. It's, you know, verifiable. I can follow sources. I think, by and large, there's a vast number of people that don't do that anymore. As the information comes, there's a certain implicit assumption of trust of the information, and they don't go and look for the links to click on to verify the reporting and the information or where it came from. So I think it gets harder, certainly in the social media realm, and certainly they're all snackable or bite-size, so they're not really detailed long articles that you would expect to see significant research into. So it's like I'm scrolling the screen and boom, one story, two stories, three stories in a matter of minutes. I think the general population isn't going to try and verify whether that's legitimate content or not, unfortunately. [00:08:26] Speaker B: Yeah. And that's where information warfare, for example, can get really dangerous, because if you're saying, oh, the sky's red, and then all of a sudden you keep seeing, you know, a million people on X or wherever saying the sky is red, you then start to believe it because other people are then saying it and you already sort of believe it, and then it reinforces it. So it's sort of a slippery slope. Wouldn't you agree? 
[00:08:46] Speaker A: Oh, absolutely. So what you're doing for corroboration now is like, is this true? You're waiting for, like, the likes on the social media platform, like the thumbs up of, yes, I agree, yes, I agree, from a bunch of other people who actually don't know whether the content's real or not. But, you know, it's a snowball effect, right. If you have enough people doing that, then you're more likely to believe, like, hey, you know, there's a million comments, they all seem favorable to whatever the content is or they've given the thumbs up. That in itself becomes the story. [00:09:12] Speaker B: Absolutely. I think I've spoken to this on the show before. So there was a study done, I think it was in New York, where just one person stood up and started looking at nothing, and then two people, and then three, and then ten and 50 people just looking at nothing because, like, they just followed what the other person was doing. We are herd sort of people. We like to do what other people do. Makes sense, right. So the part that gets me the most is you are the CSO and head of research for Tenable, which is a leading, you know, security company. And even you're saying that it's hard for someone of your caliber to detect. So imagine the average person. [00:09:43] Speaker A: Absolutely. And I think that's what's critical is when we talk about this technology in general and whether it's incorporated into the technology platforms or social media platforms, we're talking about training the entire population of the world that has access to this type of content. Right. And that's an intractable problem in my mind. It's like having a license to be on the Internet, which is never going to happen, and I don't promote that, but, like, having the ability to understand, like, what's real and what's not. What would the alerts look like if they actually came in? That would be difficult to do. So if you extrapolate that to things that are actually critically important. So if you're looking at, like, responses to natural disasters and reaching out to the population in certain areas to incite some type of action, like, you know, evacuations or take shelter or things like that, that has a massive impact, and some of those impacts can lead to harm, unfortunately. So it's one thing to have some, what I'll call influence, and I know it gets defined differently, some type of interference. But when those things incite some type of action, and certainly action which could be harmful, I think that's where it's dangerous. [00:10:47] Speaker B: Yeah, most definitely. I totally agree from that front. And then going to what's important to people, which is, of course, elections, especially in your part of the world. Now, as we know, there is an upcoming election. So what does this sort of mean, everything we've just sort of spoken about, deepfakes, people, you know, not understanding whether it's real or fake? Like, what does that then mean for this upcoming election? [00:11:08] Speaker A: Yeah, I think this is great for the adversary or whoever's using the technology to create the deepfakes or misinformation, because now there's a plethora of information. Having just recently been down there, and here in the States, as we come up with election cycles, there is no shortage of information about candidates and elections and all kinds of information on just a nonstop basis. Right. So I'm faced with it constantly. 
That material is fodder for adversaries or people who do want to influence or interfere, right. So now they have information that can be manipulated. And given everything I've seen in Australia and here in the States, I've seen nothing in all the interviews and sound bites and everything else that would indicate to me, like, this is legitimate. So by the same token, the adversary is not going to say it's not legitimate. So the question is, if you have all this information being streamed at you nonstop, and it just ramps up as we get closer to the elections, how do you discern? And I think that's a slippery slope. Like I said, most people get information from a source they consider their trusted source, or at least representative of their interests. So you're already coming with a bias. Right. So I think for anybody who creates this information, you can take something that was in the news today, tomorrow spin out a story to whatever audience you're targeting, using the information that supposedly is legitimate information, slightly manipulate it the next day and achieve a different result. And the one thing I think is fantastic in this space overall is people actually discrediting stuff that's actually real. Now, that just blows my mind. It's almost like an out: something can be out there, it's actually true, it might not put that person, organization, or interest in a good light, and now they can discredit it by saying it's, you know, synthetic or it's a deepfake of some type. [00:13:01] Speaker B: And then do people believe them when they say it's fabricated? Obviously they do. [00:13:05] Speaker A: Yeah, you know, it's a concept called the liar's dividend, and there's enough of it out there already that I think it holds some value there. It at least certainly brings the content into question, whether it's legitimate content or not. So if it puts you in a poor light, I think it certainly gives you an option to discredit the information. [00:13:26] Speaker B: So from your experience, what would sort of indicate something's real? [00:13:31] Speaker A: Yeah, there's lots of different things you consider, and you kind of touched on it a little bit. But, you know, based on background noise, certainly if it's video, does the background noise match what you're actually seeing in the video? Does it seem to flow smoothly? Can you take any cues in the information being presented that would relate to time of day or even date? So if there's something in the information that allows you to ascertain, like, hey, is this, you know, a day old, two days old, ten days old, what have you, nighttime, daytime. And even looking into the crowd, you know, if there's crowds of people behind, does that appear to match what you're actually hearing in the video or the audio? So, you know, when you're watching this stuff, short of, and I use some old school terms here, pausing it and rewinding it (I'm going back to my VHS days, you know, pausing and rewinding it), I don't know anybody who does that. You know, every now and then I read some stories of people who, you know, that's how they identify these things. But I think, writ large, most people aren't doing that. You're taking the information as it comes, at the time it comes, and you're not pausing, looking at the screen, questioning, looking for dates, looking for time of day or anything like that. So I think it gets very difficult to try to discern that. 
Now, if you come in questioning immediately yourself, you may do that. Right. But for me, I will tell you in general, there's definitely some news sources I go to regularly. My news sources, whether people consider them trusted or not, doesn't matter. They're my news sources. I have a level of trust and comfort with the information presented there. Now, if the information I receive from that source starts varying to some extent, then I would probably question it because it just doesn't match what I would expect from that organization. But those are all really fine lines. I think that's just really difficult. And I know certainly in the States we see this: even the people who question some of the information that does come out, then they get questioned. So that's why I say, if you go back to the original question you asked me, like, what's the threat of this type of technology and capabilities? It's trust. That's exactly what it is. It's trust. Because now I have to think really hard on what I'm trusting, who I'm trusting, why I'm trusting the information, and ask a lot of questions. And I think, you know, that's a luxury for most folks. [00:15:40] Speaker B: Seems exhausting. Like, no one would get anything done all day because we're trying to trust and then questioning and saying it's fake when it's real. But just go back a step. You said something about rewinding it. What do you ascertain from doing that again? Sorry, Bob. [00:15:52] Speaker A: Yeah, so, you know, if you're getting information in some form, it's the ability to go back and listen and look, you know, replay the message, look at the screen, pause it, make sure things make sense. Looking for any tips inside, certainly if it's a video, that would relay additional information of, like I said, you know, time of day, time of year, nighttime, daytime, crowds, does the crowd match, you know, where somebody's pretending to be, as far as, you know, the region that they claim they're in. So like I said, I will do that on occasion, but admittedly, even for myself, I will usually only do that if somebody else has tipped me off. Right. So somebody else questions it, like, hey, did you see that? And then I might go back and look at it. But I'm just like everybody else, you know, I'm taking my information as it comes and I'm not questioning a ton of my information. And I guess what I'm getting to is now I think I have to. So, you know, I'm a paranoid guy anyway. I'm in security, so that's kind of part and parcel with the job. It's probably going to lead me to question more information coming my way. [00:16:49] Speaker B: I mean, that's a lot of effort to go backwards and forwards and question things. Like, that's almost digital forensics level for one video. [00:16:56] Speaker A: Yeah, that's exactly it. And that's why I wonder, obviously, the masses not only don't have the expertise to actually do technical analysis of it, but by and large, people aren't going to have the time to go back and try to figure this type of stuff out. That's where I think those larger organizations and some of those coalitions, like I said, the coalitions of the willing, that's where I hope they create technologies in their platforms that do make it easier to identify. We all need that tip-off of, hey, this is questionable content, or whatever it might be that they come out with as far as messaging regarding synthetic content. 
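[Editor's note: to make the provenance and watermarking idea Bob keeps returning to a little more concrete, here is a minimal, hypothetical sketch of how a platform-side authenticity check could work. The manifest format, the shared key and the function names are all made up for illustration; real efforts such as the C2PA / Content Credentials work involving Adobe, Microsoft and Google use standardized manifests and certificate-based signatures, so treat this as a sketch of the general idea rather than anyone's actual implementation.

import hashlib
import hmac
import json

# Toy scheme: a publisher ships a media file plus a small JSON "manifest"
# holding the file's SHA-256 hash and an HMAC signature over that manifest,
# computed with a key the platform already trusts. Real provenance schemes
# embed a signed manifest in the file itself and use certificate chains.

PUBLISHER_KEY = b"demo-shared-secret"  # stand-in for real key material

def sign_media(media_bytes: bytes, publisher: str) -> dict:
    """Create a toy provenance manifest for a piece of media."""
    manifest = {
        "publisher": publisher,
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_media(media_bytes: bytes, manifest: dict) -> bool:
    """Return True only if the signature and the content hash both check out."""
    claimed_sig = manifest.get("signature", "")
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected_sig = hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(claimed_sig, expected_sig):
        return False  # manifest was forged or altered
    return hashlib.sha256(media_bytes).hexdigest() == manifest["sha256"]

if __name__ == "__main__":
    original = b"raw video bytes ..."
    manifest = sign_media(original, publisher="Example Newsroom")
    print(verify_media(original, manifest))              # True: untouched content
    print(verify_media(original + b" edited", manifest)) # False: content was altered

The point of the sketch is the design choice Bob is hoping for: the verification runs inside the platform, and the user only ever sees a simple verified or unverified signal instead of having to pause, rewind and play forensic analyst themselves.]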
[00:17:32] Speaker B: In terms of tip-off, would you say the way in which people are phrasing things as well, like their words, perhaps? What about even the voice? Because I've used an AI-generated voice and it didn't really sound like me, unless I'm tone deaf, but it just didn't sound like me. But again, like, this is just now. What about in five years? [00:17:50] Speaker A: Yeah. So I've heard some, in all honesty, and this is from a person I know who did it. You know, they used their own voice and then they manipulated it and it was pretty close. The longer the message got, the more you were able to think, hey, this doesn't quite sound right. Like things like pauses in their conversation. Right. Or transitions in conversation just didn't sound quite smooth enough. But I think the technology, like you said, is advancing rapidly enough that it's going to become harder and harder to detect that type of activity. [00:18:20] Speaker B: So with all these sort of deepfakes hard launching really into the market, especially around now, and I know, you know, Americans take their elections quite seriously, probably a lot more seriously than other parts of the world, from my understanding, I just need to ask the big question. With all these deepfakes sort of, you know, waffling around in this space, will this have a major influence from a campaign perspective, to sabotage, to influence? What are your thoughts on that? [00:18:46] Speaker A: Yeah, I mean, if you're familiar with US politics, I mean, we already have our parties pretty well defined and, you know, we have extremes on both ends of the political spectrum. So I don't think it's actually going to change the results of elections so much as it might solidify people's positions even more. Right. So if you have a certain belief, founded or unfounded, you're more likely to find information that's going to support your belief. And I just think that makes it hard to dislodge folks from whatever their current belief is. So I guess what I'm saying is those extremes on both ends are probably more empowered and more emboldened. But really, do I think it changes overall from an election perspective? Not really. There may be some people that would be considered more moderate, where it may have some influence. And it'll be interesting to see whether the moderates actually do make a difference in the US elections that are coming up. And that's usually what candidates in the States play to, whether we call them purple states or swing states, targeting moderates, where you think you can move them just a little bit one way or the other. It'll be interesting to see if that actually works. [00:19:49] Speaker B: Yes, that's a great point. Now, I'm aware of the extremity of both sides of the coin, but what about, as to your point, the people in the middle? Do you think it would influence them, though, for example, if someone were to sabotage and say, oh, well, the other party sucks or whatever, and had all these deepfakes floating around, would that influence maybe the people in the middle or people a bit on the fence? [00:20:10] Speaker A: Yeah, I don't think we're going to know until after the elections come up. I think it's going to be really too hard to tell, just given all the information that's available out there, whether that actually does have an effect in significant enough numbers to make a difference. 
So unfortunately, I think we're only going to have some hindsight come after November of this year; I think it's too hard predicting that prior. And of course, there's always polls and surveys that have been going on for quite some time already. I haven't heard as of yet any significant movements of those voters at this point. [00:20:38] Speaker B: And do you think there will be more sabotaging going on in terms of, okay, let's create these deepfakes to make it look like the other party is a lot worse, or we're trying to bolster what we're doing, to influence these middle-of-the-road people in terms of how they sit? So do you envision that will be part of maybe a more strategic plan that's maybe underlying all of this? [00:21:01] Speaker A: Yeah, it's possible. You know, something like that, were it unearthed, the connotations would be received pretty poorly. So now you're talking about, like, serious, you know, influence or interference in elections. I think the problem, though, is anybody can create this content, so it doesn't have to be done by a particular party per se. It can be done by anybody. So even small groups of ideologues or others who have a certain belief, that aren't even mainstream organizations, you know, from a numbers perspective, I think they'll have the ability to create some of that misinformation that may affect some outcomes. And certainly, and I do believe probably even more so, the more local it becomes, the more likely that is. Right. Because, I think, those people have more skin in the game as far as the information they want to present. Because I'll tell you right now, like, personally, a lot of the information here in my local community actually does come through Facebook. Right. There's a lot of groups organized there. And then you have to discern, like, you know, whether you believe that content or not. And trying to prove, you know, whether it's accurate information, I think at the local level, that becomes very difficult. [00:22:02] Speaker B: So how do we sort of tell people, hey, what you're looking at online may be fake, may be fabricated, may not be real? Like, how do we even get to that stage where we're telling people that? And even going back to before, you're saying, like, you know, rewinding stuff and looking at if it's true and looking at the setting, all that takes a lot of computational power to even do that part. We've got to get people there first before you can even get to that part. [00:22:27] Speaker A: Yeah. Yeah. So I think, you know, as I mentioned before, you know, when big tech and some of the coalitions out there are trying to figure out how to address this, like, even for me, I couldn't tell you across all the different platforms what their notifications or warnings look like. So I couldn't even tell friends, like, look for the following things. Like, I don't have that yet. So, you know, I always come back to, you know, a pretty popular saying here: if it's too good to be true, it probably is. Right? So if it aligns way too much to your beliefs, or seems like, there's no way, that's great, I love that, it's probably not true. Right? 
So it's just that common sense approach, because I think, like you said, even if you had something that's fairly, you know, poorly done from a deepfake capability, like glitchy or weird pauses or the atmosphere doesn't match or the weather doesn't match or something like that, I think most people aren't going to pay attention to that. Right. So I think it really just comes down to common sense. But like I said, everybody who receives information comes with their own bias. And if my bias is towards a certain belief, I'm more likely to believe that, whether it's true or not. And that's, you know, that's the problem we have. Now, forget deepfakes in general. That's just a general problem we have. And that's what leads to such polarization. [00:23:36] Speaker B: Cognitive bias for sure. Now, you are right. The only thing is that people want to believe what they want to believe, and they'll maybe overlook things. So what I mean by that is on a popular radio show here in Sydney, in Australia, they had this lady who had fallen in love with some dude on the other side of the planet. It was clearly a deepfake video, because she's like, no, he sent me a video, but it was obviously fake. And I'm thinking she may have overlooked maybe certain glitches or characteristics of the video. Anyone could see that it wasn't a great deepfake, but she had the belief that, no, the guy loves me and all that. So do you think, even to your earlier point, even if people believe something, and maybe it does look a bit suspect, people are going to overlook those things, perhaps because they want to believe what they want to believe? [00:24:23] Speaker A: Absolutely. You referenced your conversation with Satnam previously regarding all the scams he's covered, and that's the primary motivator for most of those, like, you know, I want to believe this is true, so I'm going to send money, or whatever I do as a part of the scam. And I think that will always continue, deepfakes aside. [00:24:40] Speaker B: So what do you envision sort of happening now post this election? What do you sort of think is going to happen in terms of outcomes, or hypotheses that you hope don't come true, but in fact do? [00:24:53] Speaker A: Yeah, you know, we're in reactive mode and that's never where we want to be. Right. So when it comes to, you know, influence or interference or deepfakes or what have you, we're in reactive mode. You know, I've used this before: the genie's out of the bottle for this election cycle, for all those having election cycles this year. The hope is that, you know, between some of the coalitions at the G7, the big tech coalitions, they start implementing regulations in different regions of the world or different countries that would hopefully either introduce penalties or minimize the ability for these to exist without some type of notification. You hope tech builds that into their platforms. But, you know, my guess, too, unfortunately, is I don't think that's, like, in 2025. It might even be a couple of years out. So I'm hopeful for the next election cycle that we would see some of those things actually be implemented from a controls perspective. I'm a security guy, so I'd say policy, process, regulation, technology, those are my controls. So I would hope to see some of those introduced. I have a feeling it might take a little longer than we anticipate in the aftermath of all these elections. 
There are no doubt going to be a lot of studies and analytics regarding whether this type of technology and misinformation have changed results around the globe. And I think, you know, the outcome of that is critical to understand, like, how do we tackle the problem? Because I'm sure they're going to go to great lengths to try to figure out, like, where was the content being provided? How could you tell? Because right now, like, I would love to say, hey, here's the checklist of all the things you need to look for. That would be great and easy. It still requires people to do something, which I don't think is going to happen. So that's why I say, like, somebody needs to figure those things out and then, as much as possible, build it in. You know, so hopefully we have some good intentions out there building this in lots of different areas. And then hopefully there's a stick on the other side where there's some type of penalty for, you know, those who are knowingly producing misinformation. [00:26:50] Speaker B: Well, who's this somebody who should be figuring this out, in your opinion? [00:26:54] Speaker A: I do believe in the approach that we're seeing now, you know, whether that's the G7 or some other consortium of organizations around the world; you have to establish, like, some type of international norms. So whether it's that group of countries or someone else, I don't know. But it's almost like, you know, we have, I'm air quoting here, you know, international cyber norms, if you will. And I guess that's arguable, but I think we're going to have to have norms develop. So that's going to come in the form of regulations and policies around the globe. So I think government does have a role to play in this. I think technology providers do as well, just like they do now for privacy and security. Right. It's not wholly different from privacy and security around the world, right? Governments and regions have stepped in and created policies and regulations and acts to address it. And big tech has also stepped up and done some of that as well. So, as much as I hate to say it, that's the best hope I think we have. [00:27:46] Speaker B: So just going back to social media platforms for a moment now, I remember Zuckerberg coming out and saying, well, we can't control every piece of content on the platform, because I think someone said there's a lot of violence and things like that that people were seeing, that their kids were being exposed to or something. And obviously they've got these content curators, but look how many people are on the platform. Like, billions. And there's so many pieces of content a day. It's very hard; even for AI, things slip through, or it's hard for people to manually review it. They can't get to everything. Right. So how do you sort of police the deepfakes? Because how do you know whether it's real or not? Like, how do you get to that level of understanding of where it should sit? [00:28:22] Speaker A: Yeah. So, I mean, the only way that can scale is via technology, right? So, you know, there's lots of technology companies that have organizations that do attempt to go and police this type of information, but just given the volume of it, it has to be implemented in technology to address the bulk of the issues. And I think that's the only way we're going to be successful. But, you know, that's a tit-for-tat capability, right? It's just like security. You know, we find a new threat, we build defenses up for it, and they figure out a way to go around it.
And I have a feeling we'll be in that cat and mouse game for the foreseeable future at this point. Like, I don't know that there is an end stage where you win, right. It's just going to become commonly understood that it's out there. And hopefully we have better indicators to identify where this information is. And certainly for the layperson, understanding how we would identify, you know, information that may not be fully accurate, some education will have to take place for folks to understand, like, hey, this may or may not be, you know, factual information, or may be synthetic or manipulated. [00:29:22] Speaker B: That does make sense. The only part of it that I'm looking at now is, you know, people are going to come out saying, oh, we need user awareness again; we're going to go back to that headspace around, oh, you know, it's on the user. But if we look at user, you know, security awareness, like, people can't even do the basics, like not clicking on a link. And now we're asking people to become digital forensics experts and analyze a video to the nth degree to decide whether it's true or not. Like, I just don't think that's going to happen. [00:29:46] Speaker A: No, it's not. You know, it's just like any other awareness messaging campaign that comes out regarding anything, and certainly stuff that comes from the government as well. You're going to have that percentage that heed whatever the awareness is, and those that don't. And you just made a great point. So, you know, when it comes to security awareness and training, if I had to conduct a cyber phishing exercise, I know ten to 12% is the average click rate, so ten to 12% will click it. That means they're probably going to get compromised. And that's an organization that actually specifically trained for it, probably on a regular basis. So now you're talking about something where the training is probably going to be a lot less and there's no testing of it. I would expect the rate of folks who are able to do stuff like that to be a very small number. And that's why I say, like, just for scaling, we've got to look for governments and technology companies to step into the fray. [00:30:35] Speaker B: Absolutely. And look, very soon we'll see that campaign coming out around all the user awareness. I just don't see that happening. You're talking about everyday people; they're not working in organizations. So it's going to be very hard for people to understand perhaps the techniques and to have the, you know, knowledge to be able to do that. [00:30:54] Speaker A: Basically what I was saying is, like, you know, government, big tech, I don't know about you, but I know quite a few people that don't trust either of those organizations. I'm not sure how successful that will be. [00:31:03] Speaker B: Well, you're talking about deploying a potential deepfake video on a potential social media platform that people don't trust anyway. So what is this sort of, like, where do we go from here? So, obviously, I know there's not a silver bullet to this, but what can people sort of do in the interim? Like, how can we start to understand that this is here? You know, we've already been informed about other people that have had these deepfake scams. They've been scammed out of a lot of money because of it. So obviously this is not the end, this is the start. It's going to get worse. What would you sort of recommend with your experience, really looking at this from a research perspective? 
[00:31:41] Speaker A: Yeah, well, first of all, I think, you know, just having it in the media is a good thing. So it's no different than all the scam stories that come out and make mainstream media all the time, of somebody who was scammed out of their life savings and what have you. I think just being in the media works, because I know folks who aren't cyber experts, like, they hear that, right? Because they'll ask me questions like, oh, you're a cyber expert. Did you hear about this? What do you think about that? So there is some value in that, being in mainstream media, right? I think that's extremely useful. [00:32:09] Speaker B: Well, then what about the regulation side? How do we get to a stage where you come back in two years, the elections are done, and it's, okay, we've got regulation? How do you enforce that, though? Because the part that is interesting is with the Internet, it's not like, okay, I'm in Australia, you are in the United States of America, and if I do a crime, there's, like, this treaty in place, then I get extradited back to Australia and I get prosecuted here. It doesn't really work like that; there's no real sort of rules on the Internet. So how do we police this whole thing that's going on? And I know there's no real serious answer to this, but what do we do? Like, regulation on the Internet? Very hard to enforce. [00:32:45] Speaker A: Absolutely. So I think the approach is going to be very specific. So, you know, certainly in Australia, you have the electoral commission working on regulations very specific to elections. Right. So, like, I think that can make sense and that's very targeted. But to your point, the Internet is quite large and there's every topic in the world out there. Yeah, I just don't think you can police them all. So, you know, as I said, hopefully establishing, you know, international norms for synthetic information or deepfakes or what have you, I think that's extremely useful. And sometime down the road, it may be possible that, you know, as part of those international norms, there are some agreements between regions of the world or countries that allow action to be taken. So, you know, a lot of things develop that way when it's new to people. You know, we don't have all the regulations and laws in place to start with, and they get developed over time. And I see this falling in line with that, which is also to say, I don't think there's any answers anytime soon. [00:33:42] Speaker B: What about Zuckerberg and friends? Do you think they feel a level of responsibility? Because, you know, if I'm being honest, the majority of the stuff is going to be deployed on social media like X and Facebook and Instagram. [00:33:53] Speaker A: Yeah, that's a great question. And, you know, he's in front of US Congress frequently. And, you know, this is my opinion and only my opinion, I don't think that's incited a whole lot of change for how they operate. Not to say they're doing a good or a bad job, but he certainly gets called on the carpet enough regarding misinformation, influence and all kinds of other topics. And, you know, they have efforts to try and address some of these things. But at the end of the day, it's a business. And, you know, unless there's somehow an incentive for the business to do so, and there's no stick, to use a metaphor, on the other side, you know, they won't really do more than they have to. And like I said, that's my personal opinion. [00:34:35] Speaker B: Yes, I have seen that over time. 
In terms of the stick side of it, what do you think is reasonable? Because even here in Australia, there's, like, you know, there are fines and penalties, but as I've said before, like, you know, people just pull that out of the back of their Maserati. A fine, schmine. That doesn't bother them. Right? Like, who cares? You know, they've got more money than sense, some of these people. So I guess for us, it's like, well, I don't think the penalty in monetary terms is even going to do anything. [00:35:01] Speaker A: Yeah, that's a great question, because even from a privacy perspective, there's been some pretty significant penalties out there. I'm not sure that it's actually had the impact that they had hoped for as far as monetarily being enough to make a real difference. It certainly got attention and certainly made the media. Whether the sticks are enough to really move the needle or not, I don't know. One thing I found in preparing for this: I went out and read some policies regarding synthetic media and manipulated media and deepfakes, and X, you know, formerly known as Twitter, some of the clauses they have in there really are more specific to misinformation or disinformation that leads to actual harm. And then they list out cases of what could be considered harm in most of those. At that point, now you're talking about criminal activity, right? And for criminal activity, there's already, you know, social norms in place and laws in place. So there's a line there of, like, what can happen that doesn't cross the line, that's, you know, influence or interference, versus what crosses the line and leads to harm or safety issues. So there are, you know, some things in place that capture that. But the problem is, I think a lot of these things are playing out in court, too. Right? So if you have misinformation that leads to some type of loss of life, safety or harm issue, many of those are still playing out. [00:36:12] Speaker B: So can you give an example of what that looks like in terms of their terms and conditions? Like, what does that look like? [00:36:18] Speaker A: Yeah, so, I mean, essentially, you know, they claim to be monitoring the activity on the platform, and, you know, they're specifically calling out, like, incitement of abusive behavior toward a person or a group, risk of mass violence or, you know, civil unrest. Like, those are the things they call out that they're looking for. Which led me to question, like, how well can they identify that? And I don't know the answer to that question. But, you know, I think these are great clauses they call out, like, here's the type of things we look for and we consider to be harmful. But how do you actually identify, like, who's doing that? Is that a machine doing it? Is technology doing it? Is it people doing it? Is it people reporting it? Like, I don't know the answer to those questions, but I'd be curious to know, because, as we discussed earlier, whatever that system is to do that, it has to be able to scale. Right? It's just the sheer volume of information. It has to be able to scale. [00:37:04] Speaker B: So do you think they are genuinely doing that, or do you think they're saying they're doing it, or their intention is there, but they're not really doing it? [00:37:10] Speaker A: You know, again, my opinion, they say they're doing it. I think they do have some intention of doing it. I think it's proved much more difficult than what you would read in a policy. 
I rarely run across stuff in any form that I'm a part of that says synthetic content, manipulated content, or anything like that. And on the flip side, I've almost never seen anything that had some type of watermark or anything else that I could say was authentic. [00:37:32] Speaker B: But I feel like it's always defensible, though, for these guys. They say, oh, well, we can't control it. We've got billions of people on there. It's hard. And I get that it's hard. I'm not saying it's easy, I get that. But then I always feel like, well, a lot of accountability gets passed over, perhaps, and these terms and conditions are written in a way which is a bit convoluted. There's a bit of gray area. [00:37:51] Speaker A: Yeah, that's it. And I don't think most people would even be aware of the terms and conditions that actually exist out there. I went and searched for them very specifically, and I understood that they had something similar to this, but the average person would not understand that, probably wouldn't understand how to report it or what to do if they receive something that might violate one of these terms or conditions. My guess is if they did, it would probably come through a law enforcement avenue. Right. So somehow it would lead down that path versus going back to a social media platform of some type and trying to report it that way. [00:38:21] Speaker B: So in light of all of this, what do you think sort of happens now? Obviously, we're coming up to the election, there's going to be a lot of things happening. What do you sort of envision over the next sort of six to twelve months? [00:38:29] Speaker A: You know what, I think we're going to see a lot more of it, certainly in forums where I think their ability to curate information is less, you know, whatever you want to consider that. But if they don't have capabilities to go out and curate information, determine whether it's factual or not, I think we're going to see a lot more of it. I think it's going to ramp up leading into the elections around the globe, no matter where you're at. And I think it's going to make it very hard for voters to be truly informed voters. So, like I said, the genie's out of the bottle on this go-round of elections. And then the hope is, you know, hindsight: sometime in the future, in 2025, people can look back and start to identify trends related to identifying misinformation, synthetic information, deepfakes, and interference or influence. And then we can formulate a plan of how to address that stuff. Because I think even now, with some of the governments considering, you know, how they regulate this, I'm not sure that all the tools and capabilities exist for them to say specifically, we expect organizations to do the following things. Like, I don't know that there's norms yet that they could identify very easily, other than to say, you shouldn't do this. [00:39:35] Speaker B: Do you think as well that, because of all the stuff you just listed out, we're going to see more polarization? Because it's like, hey, I saw this video about you guys complaining about our party, and vice versa. And will we see more of that then sort of translate into more physical crime and people, you know, getting really outraged, like I've seen in, you know, even the last election? Like, are we going to see more of that? [00:39:59] Speaker A: I hate to say yeah, but I think so. Right. 
Because you're trying to instill beliefs, and there's this concept of, if your views are far one way or the other, you may not bring everybody with you, but you'll bring them closer to you. Right. Your views may be so outlandish that they may not believe everything you have in there, but it may move them in your direction just enough to make a difference. And I do think it's likely that it's more polarized and it's easier to create those environments. [00:40:25] Speaker B: So, Bob, is there anything specific you'd like to leave our audience with today? Any closing comments or final thoughts? [00:40:32] Speaker A: Yeah, I think, you know, for the general population, you just have to, you know, keep your eye on this topic, right. It's an emerging topic for all of us; even folks who follow security or are in security, like myself, it's an emerging topic. The technologies to detect and identify this type of information are not fully mature yet, so we have to go into everything with a little bit of caution. And it's like anything else, right? It goes back to what I said earlier. If it's too good to be true, it probably is. So keep that in mind and just be aware, when you're receiving information, of where it's coming from, and those small things like looking for glitches and all those other things. I don't expect most people to do that, and honestly, I rarely do it unless I think whatever I'm watching just seems hard to believe. But it's just a space that I hope gets a lot of coverage in media, just for that general awareness. [00:41:23] Speaker B: This is KBKast, the voice of cyber. Thanks for tuning in. For more industry-leading news and thought-provoking articles, visit KBI Media to get access today. This episode is brought to you by Mercsec, your smarter route to security talent. Mercsec's executive search has helped enterprise organizations find the right people from around the world since 2012. Their on-demand talent acquisition team helps startups and mid-sized businesses scale faster and more efficiently. Find out [email protected] today.
