April 02, 2025

00:40:32

Episode 301 Deep Dive: Ginny Badanes | Threats, AI and Influence Operations Around Elections

KBKAST
Show Notes

In this episode, we sit down with Ginny Badanes, General Manager of Democracy Forward at Microsoft, as she discusses the multifaceted threats posed by nation-state actors around elections, particularly the use of AI in influence operations. Ginny highlights the critical need for society to adopt a healthy skepticism toward information, scrutinizing the trustworthiness of sources and the potential for AI manipulation. We delve into the activities of significant nation-state actors like China, Russia, and Iran in recent elections, and the emergence of AI-driven fake news sites used for propaganda. Additionally, Ginny provides insights into the deceptive use of AI beyond political contexts, including its impact on women and financial fraud schemes.

Ginny Badanes is the General Manager of Microsoft’s Democracy Forward program, an initiative within Microsoft’s Technology for Fundamental Rights organisation. At Microsoft, protecting fundamental rights means promoting responsible business practices, expanding accessibility and connectivity, and advancing fair and inclusive societies. Ginny’s team is focused on addressing challenges to global democratic stability, with efforts aimed at safeguarding open and secure elections, promoting a healthy information ecosystem, and advocating for corporate civic responsibility. In 2024, a key focus of her team’s work was raising awareness about the deceptive uses of AI in elections and combating these cyber- and AI-enabled threats.

Ginny has spent her career at the intersection of politics and technology, advising presidential and senate campaigns on leveraging data and technology. She was named among Washingtonian’s 2021 & 2022 “Most Influential People” list for national security and defense.


Episode Transcript

[00:00:00] Speaker A: What we really need is to all develop that sort of critical skill where we stop for a moment and ask ourselves a few questions. Do we trust the source that this came from? Do we know where this originally came from? How possible is it that this has been manipulated in some way, whether with AI or just with deceptive editing? If we get to a point where as a society we are trusting but also skeptical, I think a lot of what sort of goes viral and a lot of the chaos that comes out of these kinds of campaigns, it'll lose its sting, it won't be as effective. [00:00:35] Speaker B: This is KBKast. As a primary target for ransomware campaigns, security and testing and performance and scalability. [00:00:44] Speaker C: And we can actually automate that, take. [00:00:46] Speaker A: That data and use it. [00:00:51] Speaker C: Joining me today is Ginny Badanes, General Manager, Democracy Forward for Microsoft. And today we're discussing threats, AI and influence operations around elections. So, Ginny, thanks for joining and welcome. [00:01:03] Speaker A: Thanks so much for having me. [00:01:04] Speaker C: So Ginny, as you know, there was an election in your part of the world last year, and this week in particular that we were recording this podcast interview. So I'm really curious to maybe zoom out: talk me through how nation-state threat actors typically behave around elections. [00:01:25] Speaker A: Sure. Well, as you know, the US election was sort of the last of a massive global election year. And so that means that there were billions of people around the world voting in these consequential elections for prime ministers, presidents, for congresses, parliaments, those kinds of elections. And so we had an opportunity to both work with election authorities and political campaigns and party committees around the world over the course of a year and also track the behavior of these nation-state actors.
So here's kind of what we saw, and it really culminated in what we saw in the US elections. There are really four main nation-state actors that we tend to track in this space: North Korea, China, Russia, and of course, Iran. Now, I'll start by saying we didn't actually see much behavior from North Korea in this space this whole past year from an interference perspective. Now, to be clear, that doesn't mean there wasn't any. We don't have perfect information, but we didn't see a lot of disruption from them. It might be that they were distracted with some cryptocurrency efforts they have underway. That seems to be their priority right now. So it's really more about those other three, and they each were active in different ways. And when we talk about active and disruption within the context of elections, what we really mean is everything from influence operations to cyber campaigns, you know, attacks, and then, of course, hybrid, where they do a little bit of both at the same time. And all three were active. So, for example, we could get into more detail and questions if you have them, but at a high level, what we saw Iran doing was a lot of cyber activity, particularly in the US, and they were pretty focused on going after President Trump and his campaign. There's some publicly available information about the fact that they were successful in breaking into some email accounts associated with that campaign. They tried some influence activities as well. They weren't quite as effective. They have an interesting new thing they've been doing where they stand up websites that are partly AI generated, fake news sites used to sort of send out some of their propaganda and messaging. We also saw Russia to be quite active. You know, to be clear, Russia continues to be very focused on Ukraine, and that's where a lot of their efforts are going. But they can do two things at once.
So in addition to the efforts around Ukraine, they also were looking to interfere in several elections over the last year. And of course, the US is one where we saw that as well, both from the influence perspective as well as some cyber activity. They did use some AI videos. Most of the videos they did did not contain AI, because those old tricks that they've been doing remain effective. And so there wasn't really, I don't think, a need for them to use some of the new tools. And then finally, China, also active this cycle. They tend to be more focused on espionage in these spaces, but they have started to get more involved on the influence operations side. One thing I would note that would be particularly of interest, I think, to those who are looking ahead to future elections is they did set up some accounts attempting to attack Republican House members. So these are not, you know, presidential-level races. These are folks running for smaller offices, lower-ballot races, and they weren't very effective, but it was interesting to see them sort of dip the toe in the water on trying to influence around local races. And we do believe that those were targeted around races where candidates had very specific policies that were not in line with what the Chinese government would want. [00:04:42] Speaker C: Okay, so all of what you were saying was so interesting. So one of the things I want to know first is everything that you're saying specifically around, you know, China, Russia, Iran. So would you say, from the 2020 election to the most recent election in 2024, in terms of the campaigning, do you think this last election had a lot more impact in terms of, you know, nation-state influence, et cetera? You spoke about influence ops. Would you say that's increased significantly from the previous election? [00:05:11] Speaker A: You know, you use the word impact.
So I do want to make a comment that's maybe not totally obvious: when we talk about what we saw from an activity perspective, we're not sure what kind of impact this activity had. And so I always want to be cautious not to say that just because actors were involved in influence operations, they had any influence. That often is not the case. Sometimes chaos and disruption is the point, more than actually trying to influence outcomes or how people choose to vote. But from an activity perspective, we did continue to see in some cases a sort of steady state of activity and in some cases a few spikes. And so again, from the China perspective, being involved in influence operations is somewhat new from what we have observed of them. And so that is new activity and more of it. It's not necessarily saying that that's what they will do in future elections. It was a very small operation that we observed, but it was worthy of noting just because that was not a typical activity. So there was some steady-state, status quo kind of work that these actors were doing. Iran has had similar campaigns in the past, a combination of hacking into accounts and then sending emails from those accounts, one of those hybrid campaigns, and that was pretty consistent with what we saw them do this cycle. But it was a different kind of activity, I would say, from China. [00:06:27] Speaker C: And so you said before, Ginny, hacking into emails, accounts, etc. So is that what you mean by chaos and disruption? By those sorts of, you know, hacking into email, sending emails, all those types of things? [00:06:38] Speaker A: Yeah. A lot of times we can't always know what these actors are trying to accomplish. We can tell when they target certain campaigns. So I mentioned Iran was targeting Trump, and, you know, it did appear to us that Russia was targeting the Harris campaign. First the Biden campaign and then subsequently the Harris campaign.
And it's hard to know if their intention is to persuade people or, to the point I made before, if it's just to create some kind of chaos and disruption in the system. I do think that is often the objective. And through hacking into accounts and then, you know, doing some kind of influence operation, a lot of times they are trying to distract people and to change the conversation. [00:07:15] Speaker C: So I want to go back for a moment, you said fake news sites. So I've been in the space a bit over a decade and I've always thought, imagine if, as you see on Facebook, like you can do sponsored ads, for example, sponsored content. Imagine just setting up a fake news site, I don't know, saying that the sky is rainbow and then having all these fake people saying, like, yes, it's rainbow. It sort of gets into that information warfare that we see in terms of, you know, the whole debacle with Twitter a fair few years ago and why Elon Musk took it over, et cetera. But is this something we are starting to see more of? And this is really interesting in terms of people discerning between quote-unquote real information and misinformation. Going back to your example before around the fake news sites, what does that then look like? Do you think that actually drives people's influence, would you say, in your experience? [00:08:05] Speaker A: What I think is so compelling about the fake news sites is that it's a clever approach that, to your point, has been around for a while. This is not a new thing. In fact, it's not always just nation states that do this. It has actually been a political activity, particularly one we've seen in the US, where people create sites and they pick a local town and then they add the name Herald at the end, right? So it sounds like it's an authentic news source to people who aren't from that area, maybe, and they don't realize that the Savannah Times is not an actual news site.
So to begin with, it's a really clever way of making someone think that they are reading trustworthy news. And that creates a mental space where they think that they're consuming journalistic content. And then from there, whether it's a political operative or a nation-state operative, they will often put out some real articles, and then they'll mix in some fake articles, or they'll insert different information into existing, real articles. So again, this is a practice we've seen for years from a variety of actors. Some people refer to these as pink slime sites. And sometimes the purpose of them is actually just to get advertising and clicks, essentially to make money. So as much as we look at the ways that people are trying to change minds and to change narratives, a lot of times it's also got a financial motivation. So you always have to sort of keep in mind that you're not always sure what the motive is of the groups who spin them up. What's different about what we saw this cycle in particular is, we believe, the use of AI: that allows these sites to spin up faster, that allows foreign actors who probably don't speak the native language very well to write articles that come across as more believable, or allows them to take real articles and edit them through AI to make them not easily caught by scanning systems. And therefore they can create more content quickly. In the end, I think for a lot of the actors, yes, this is one of the laundering mechanisms for propaganda and narratives that they want people to believe. And that's usually why I believe that these actors in particular have been doing this recently. But there's always a money motivation to consider, because what we found is that as these sites spin up, automated ads get placed on them. The more volume that they attract, the more clicks on the ads. And then the people who are running those sites actually do make money.
[00:10:15] Speaker C: Yeah, okay, so that's interesting, because obviously people want to read stuff around election time, etc. So it may not be to influence. It may just be, hey, it's a quick way to get really high web traffic to be able to run ads to make money. Wow. And would you say, and I mean, this is just what you're speaking about, would you say that any of these sites would have influence, or not necessarily? It may have done so as a byproduct, but that wasn't necessarily the main driver? [00:10:39] Speaker A: I mean, I always want to be careful not to assume intent, because that's not the kind of intelligence we get. But I imagine that a lot of times it is to launder stories and narratives. And I'm sure it has had the effect of helping build a narrative around something that an organization or a country is trying to get folks to believe. When we think about how a bad actor can get a story that is completely false or mostly false into the mainstream, where people who are not spending their time on the dark web will actually come across that information in a way that is believable, there's this sort of cycle that they go through to create the laundering effect of the fake news, of the propaganda. And it usually starts, I'll give an example of how we've seen Russia do this in particular. It usually starts with a video, a fake video. Again, sometimes AI edited, sometimes just regular video they filmed, often posted to Telegram, sometimes on X. Obviously there are other platforms as well. They post that, and then they intentionally go and find Russian state media to run a story about the video. So again, not a lot of people trust Russian state media, so that's probably not sufficient for getting it into the bloodstream.
So the next thing that they will do is go and find social media influencers who are connected in some way to the Russian government or to this media organization and have them start to post on their social media about it. Through that, they start to get unwitting, unconnected people who will see one of their posts or will read the article, who will also start to amplify that content themselves. At that point, there will be some slightly more normal, not nation-state-aligned media sites who will pick up on some of that social media chatter, and they will write a real article about this story. And that's what sort of gives permission then for the broader media ecosystem to start commenting on, writing about and responding to that narrative. So it's this sort of laundering effect of a fake video that goes through this very specific process. And not all of them do, by the way, but the ones that really resonate with people will often end up on a fairly mainstream news site that is really reporting on what they saw on social media, which was reporting on what they saw on Russian state media, which was intentionally reporting on a fake video. [00:12:53] Speaker C: Before we jump into the Australian side of things, because we have an election coming up this year, as you know, I want to just hear your thoughts, Ginny. A lot of people I've spoken to on the show recently have, you know, discussed the rise of AI and how real some of these videos currently are, but how real they will become in the not-too-distant future, and how that has an impact on discerning whether it's fabricated or if it's real, etc. So then what are your thoughts on elections generally and how we move forward, even, you know, the elections in the next couple of years for the U.S., for example, with the rise of AI and deepfakes, et cetera?
[00:13:32] Speaker A: So at the beginning of last year there was a lot of concern about this issue because of all of the big elections that were happening. There was a lot of chatter and conversations about, are we going to see this as the big AI apocalypse? You know, are there going to be candidate deepfakes that are going to persuade people and how they vote? Is it going to be a catastrophic election cycle? In the end, that's not what we saw. We did see the use of AI in ways that we hadn't anticipated, and in some ways that we had. It wasn't used at a scale, or it wasn't as effective to the point, that it influenced or impacted the outcome of any elections, as far as we know and can tell. That's not necessarily how it will always stay, though, because the technology is moving quickly and adversaries are finding ways to weaponize it. So some of the things that we did collectively across industry and with governments when we were focusing on this: first, we were part of a group of 20 companies in February of last year at the Munich Security Conference who signed an agreement that did a few things. First, we came together and said we see this as a risk that is emerging and could affect democracies around the world. And second, we are the technology companies who are building the products and innovating and distributing content, depending on the company, and therefore we have a responsibility around this to make sure that we are being thoughtful. And then we put together a series of commitments that each of the companies followed through on in their own way based on what their products were. And that was a start to sort of acknowledge that just because we hadn't seen it yet does not mean that this might not have a massive disruptive effect on the population. What we actually saw from an AI intervention perspective was mostly deepfake audio files. So I do think most of us were expecting that we would see video.
But what we really saw was that the most effective kind of deepfake out there is audio. And that's in part because there's just a lot more content to build models on; it's a lot less complicated to replicate a voice. The other is it's really hard to detect. And detection of deepfakes is already quite hard. And I'm happy to talk about that more if you have questions there. But audio deepfakes are really hard to detect. So that's the one trend that we did see start to emerge that we're continuing to track. We're concerned about a couple examples of how we saw adversaries actually use audio deepfakes. There was this sort of prominent example of an audio call, purporting to be from President Biden, made to voters in New Hampshire telling them not to vote. That one was a domestic actor in the US who's since been caught and, I believe, has gone to jail for that. But the actual full text of the audio was created with AI. But how we've actually seen nation states deploy it, for example: one thing that we observed was Russia had a video of Kamala Harris. It was a real video of a rally. She was really talking. But what they did was they spliced in a very small piece of an audio deepfake, just one line that they put into the video in a way that was so subtle that you couldn't tell, without running some analysis, that she hadn't actually said it. And it was a derogatory comment about President Trump and the assassination attempt against him. Now, that video didn't end up taking off, and people caught it fairly quickly and debunked it, in a way that I don't believe people believed it. But it was quite well done, and it was, we thought, a sign of things to come. [00:16:46] Speaker C: That's really interesting. Okay, so going back to your comment around, it's hard to, you know, detect deepfakes. Talk to me a little bit more about that.
[00:16:55] Speaker A: Well, I mean, going back even 5, 6 years at Microsoft with our Microsoft Research team, we saw this trend coming, right? We saw what AI was going to be able to do. And we had some really smart people in the company who were saying, hey, this might be a problem for democracies and elections. That was an area that people pretty quickly thought would be an issue. So we started working on a detection system, because that seemed like the most obvious way to sort of solve this. Can't we just use AI to identify when AI has been used? And it sounds fairly simple. We found a few things, as did others who were doing this work. To be clear, this was a lot of folks across industry who were trying to work on this challenge. We found that we built some pretty good detectors, and we still have them today, as do other companies, where you could run a video or an audio or an image through and it would give you some percentage of certainty as to whether that had been generated by or edited by AI. And that is an important component as we think about the challenges and what we can do about them. However, a couple things we identified kind of early on in this process. One is that if you get an 85% accuracy rating, that's still not a hundred percent. Most of these classifiers and detection systems will not be able to give you that level of certainty. And so in and of itself, that is already a bit of a challenge. The other is, as you put detection systems out there, people who want to get around them will figure out how to get around them, right? There are really smart people on both sides of this. And there are ways that engineers will figure out, okay, this is how this classifier works, so I'm going to change my deepfake so it's not going to get caught by that classifier. And so there could be sort of an arms race of creating a better detector and then creating a way to get around it.
And it's hard to know where you are in that cycle. Are we at the part where we're better, or are we at the part where they're better? And so that makes this all a challenge as well: you just can't have full confidence, I don't think, in whether these detection systems are 100% there. And then finally, there's just nuance to how people are using AI. It's not actually quite so black and white. What these detectors are very good at is if you have a wholly generated AI image and you run it through, they're going to give you a pretty strong indicator in most cases that that was AI. So there are some applications where it can be pretty accurate. Where the nuance comes in is, say, you edit a picture with AI in the slightest way. For example, there was a picture of President Trump, sort of an iconic picture after the assassination attempt, with his fist in the air and blood on his face, and he's surrounded by these Secret Service agents. A version of that photo began making the rounds really quickly, where the only difference was that the Secret Service agents were smiling, and that was trying to serve a narrative that the government had tried to kill Trump, and so the Secret Service agents, who work for the government, were smiling. And that, of course, isn't real. And we all knew what the other picture looked like, so you could pretty quickly debunk it just with your own eyes. But if you had been dependent on a detector to tell you whether or not that was a real picture or AI generated, you're going to get kind of a mixed response from those detectors, because most of the picture was, in fact, real. It was only a small portion of it that was AI edited. So these are the kinds of complications that we're all grappling with as we deal with the detection side of it. So, again, it's an important component.
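[Editor's note: the "85% accuracy is still not a hundred percent" caveat above can be made concrete with a quick base-rate calculation. The sketch below is purely illustrative; the accuracy and prevalence numbers are assumptions, not figures from any real detector.]

```python
# Hypothetical illustration: why an "85% accurate" deepfake detector still
# gives weak evidence when genuine deepfakes are a small share of all media.
def p_ai_given_flag(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Posterior probability an item is AI-generated, given the detector flags it."""
    true_flags = sensitivity * prevalence                # AI items correctly flagged
    false_flags = (1 - specificity) * (1 - prevalence)   # real items wrongly flagged
    return true_flags / (true_flags + false_flags)

# If only 1% of circulating media is AI-generated, a flag from an 85%/85%
# detector means the item is actually AI-generated only about 5% of the time.
print(round(p_ai_given_flag(0.85, 0.85, 0.01), 3))  # → 0.054
```

In other words, most flags on ordinary media would be false alarms, which is one reason detection alone cannot settle what is real.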
There's a lot of good work being done, a lot of good companies creating classifiers and detection systems, but it has to be part of a broader strategy, because in and of themselves, these detection systems are just not sufficient. [00:20:20] Speaker C: Wow, that was really interesting, everything that you were saying, and thorough as well. I really appreciate that. Okay, so now I want to slightly change gears for a moment. As I touched on before, Ginny, as you know, there's an election coming up here in Australia, so is there anything that you can sort of share, from the intelligence that you've just spoken about today and also based on the US election, that Australians may need to know? [00:20:46] Speaker A: Yeah, I think the main point I would make for, you know, everyday Australians when it comes to this is we really think that there's an important component of society in combating both nation-state propaganda as well as deepfakes. And a lot of that is this healthy skepticism. If you see something online that either doesn't seem right or seems too right, you know, like, man, this really feeds right into a core belief I have about this politician or about this organization. What we really need is to all develop that sort of critical skill where we stop for a moment and ask ourselves a few questions. Do we trust the source that this came from? Do we know where this originally came from? How possible is it that this has been manipulated in some way, whether with AI or just with deceptive editing? If we get to a point where as a society we are trusting but also skeptical, I think a lot of what sort of goes viral and a lot of the chaos that comes out of these kinds of campaigns will lose its sting; it won't be as effective. And here's an example where we've seen this go pretty well: in Taiwan, they had big elections last year as well.
And we do know that China targeted them with some influence operations. We heard that there were, in fact, some pretty compelling deepfakes. There was a candidate who dropped out and allegedly made a video endorsing another candidate. That seemed odd to people; it didn't seem like someone he would have endorsed, and in fact he had not endorsed him. It was a deepfake. But what we heard from the government in Taiwan, from NGOs there, and from people as well is that people for the most part just didn't believe the influence operations that were being thrown their way. They didn't believe those deepfakes that they came across online, even if they were technically quite good. And a large part of that is because they kind of knew it was coming. Their government had spent a lot of time talking to them, doing PSAs. Again, the NGO and civil society community was quite active there, making sure that people knew: hey, people are going to try and get you to think things that aren't necessarily true. You know, it's up to you how you want to vote, but give thought to what you're seeing. Be a little bit skeptical. And from what we can tell, it does seem to have worked. [00:22:57] Speaker C: So what was coming to my mind as you were speaking is, and based on that example, would you say that people generally are getting a bit more skeptical, like, oh, that's definitely fake, that's AI generated? I'm seeing that a lot more in, like, Instagram reels where it's obviously fake, and then I go to the comments, just curious to know, and people are already calling that out. Would you say that people's discernment is getting a little bit better than perhaps in previous years? And I say that probably in contrast to, you know, people clicking on really terrible phishing emails, for example. [00:23:30] Speaker A: Sure. Yeah.
Well, that's a great analogy, in fact, because, you know, there was a time when we would all click on those links, and we didn't know that the Nigerian prince wasn't real and that someone was trying to scam us. Right. We got to that place of skepticism and awareness through, frankly, trainings and a whole lot of effort across society, both governments as well as employers and companies. And so we as a society have gotten more skeptical about phishing emails, and we've learned the things that we should do. Hover your mouse over the link and look at the URL. Never put your information into a site that you didn't go to directly. Right. That kind of thing. Similarly, I do think we're starting to see people get more skeptical of what they're seeing online, questioning things, because they're starting to become more aware of what the technology can do, and they realize that it's not something that they can necessarily spot with their own eyes anymore. You know, four years ago, if you were looking at AI generated content, you could usually tell, because the skin was really glassy or they had six fingers, or, you know, the lettering wasn't quite right. These were all cues we were given several years ago that are just not accurate anymore, because the technology's advanced so much. And the extent to which it's advanced in the last year, I mean, I expect we should see that compound in the next six months. And so it isn't as easy anymore to spot it with your own eyes, but people are more skeptical. What I'm frankly concerned about, actually, is the other side of that skepticism, which is why I was trying to use terms like healthy skepticism. I worry a little bit that we get to a place where people just don't believe anything anymore. They don't trust anything they see online. Everything is AI. And that's not a healthy place for us to be either. So we really need to work through trust signals and understanding.
How do we get to a place where we have skepticism, but we still have sources that we trust and places we go to where we can get real content and information without just assuming everything we see online is fake? [00:25:23] Speaker C: Yeah, that's an interesting point. Okay, so just on that note, then, and your comment there, you said trust signals. What I'm starting to see on, I think, Facebook and Instagram is actually tagging of AI generated content. Do you think people are just exhausted, though, to be like, is this real? Is this fake? Like, for the, you know, the everyday sort of person? And to your point, as a result of being exhausted, is that why they're just like, well, I don't believe anything on the Internet anymore? [00:25:46] Speaker A: Yeah, I mean, there's good news and bad news here. The good news is the companies realize that we need to figure out this labeling question of when do you label something as AI, and how do you know what it is. Where we are right now is not the right place with that, because in some cases, you know, if I use a filter on my Instagram that's AI, does it need to be labeled that I used that? Right. So I think we still have some nuance around what it means for something to be AI generated or AI edited, and I don't think we have that quite figured out yet. However, industry and governments are really working through these challenges and trying to find what are indicators of trust, what are consistent labels, what are the standards for how we do this. One really promising technology is this concept called content provenance. And it's an open standard that's run by this nonprofit group called C2PA. And what it is about is actually tagging, with cryptographic metadata, the origin and authenticity of an image, a video or an audio file.
And what that means is, if it's built into your camera at the point of click, it attaches information about where the photo was taken and who took it, and it signs it, so that when you then load that picture onto LinkedIn, for example, which does read this standard and will give you a label, it'll tell you what you want to know about that image: it gives you the context, and whether it was AI generated. So if you were to use, just for example, Bing Image Creator to create an image, we apply this standard to that file. If you load that to LinkedIn, then when you scroll over, you'll see "generated by AI" as one of the indicators. So what we're all moving towards is this idea of not "is it AI or not," but more context about where it came from and who it's from. Who stands behind this image? From a brand perspective, it also keeps people from stealing each other's content, so there are some real applications in the real world for why this could be used. But I think it'll be really helpful from a trust perspective if we can get to a place where, when we create something, we are tagging it as belonging to us in a way that we stand behind, and people will know that it authentically came from this company or this individual or this political candidate. [00:27:52] Speaker C: So going back to C2PA, is that what you meant before around generating trust signals? Is that an example of one? [00:27:59] Speaker A: One example. The concept of trust is a really tricky one, and I think there are other ways we can talk about trust. But yes, I view C2PA content provenance as sort of a trust signal that we should all be able to get behind as a society and start asking for, frankly, and then hopefully get to a place where it's just like the lock icon next to the URL at the top of your browser. Hopefully we get to a place where we're all looking for that seal, so we know more about the image and where it came from.
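The provenance mechanism described here can be sketched in miniature. This is a conceptual illustration only, not the actual C2PA implementation: the real standard embeds a signed manifest inside the media file itself using X.509 certificates, whereas this sketch uses a stand-in HMAC key, and all field names are hypothetical.

```python
import hashlib
import hmac
import json

# Conceptual sketch of signed content provenance. The real C2PA standard
# uses X.509 certificate chains and embeds the manifest in the media file;
# here a shared HMAC key stands in for a signing key, and the field names
# ("content_hash", "claims", etc.) are made up for illustration.

SIGNING_KEY = b"demo-signing-key"  # stand-in for a real private key

def attach_provenance(image_bytes: bytes, claims: dict) -> dict:
    """Bind provenance claims (who made it, how) to this exact image content."""
    manifest = {
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
        "claims": claims,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(image_bytes: bytes, manifest: dict) -> bool:
    """A platform that reads the standard checks two things: the signature
    is valid, and the hash still matches the image bytes (i.e. the file
    wasn't swapped or edited after signing)."""
    sig = manifest.get("signature")
    if not isinstance(sig, str):
        return False
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and unsigned["content_hash"] == hashlib.sha256(image_bytes).hexdigest())

image = b"\x89PNG...fake image bytes"
m = attach_provenance(image, {"generator": "AI image creator", "ai_generated": True})
print(verify_provenance(image, m))         # untampered image: True
print(verify_provenance(image + b"x", m))  # edited image: False
```

The key property, as in the interview: the label travels with the content cryptographically, so a platform can surface "generated by AI" or "taken by this creator" without having to guess from the pixels.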
[00:28:26] Speaker C: I guess, to your point before, it does protect creators from a copyright perspective, because you often hear someone in the comments saying, hey, that was my video that you just used, and now you've generated all these views. So, okay, that's really interesting. Now I want to flip over and maybe speak a little more broadly around deceptive use of AI. I know we touched on it before, but I wanted to get your thoughts a little deeper here. [00:28:50] Speaker A: Yeah, I mean, there are a couple of examples we've seen it being used in, and it's not always in the political context, which I think is also helpful for people to consider. So here are a couple of examples where it's not political necessarily, but we've seen it be quite effective, or at least we've seen adversaries really trying to use it effectively. One is with the Paris Olympics. Not surprisingly, Russia has an issue with the Olympic Committee, and they have been working to undermine trust in that organization for quite a while. One of the things they did leading up to the Paris Olympics just last year was a series of videos. One video in particular was supposed to be like a Netflix-style documentary, and it was called Olympics Has Fallen. This documentary opens with the image of Tom Cruise, and he appears to be the narrator. To be clear, he did not participate in this documentary, but it appears that he is your host and narrator, and throughout the documentary his voice narrates. That is of course not really his voice; it was an AI-generated voice of Tom Cruise. Part of what made it work was the mixing of mediums.
You know, the visual of his face and then his voice being so close and clear, combined with, obviously, the way they built out the documentary, made it quite compelling, and it gave us a bit of insight into where their skill set is right now and what they could accomplish if they wanted to do something bigger or more elaborate. So that's one deceptive use of AI that is maybe political in nature, but not really about politics or elections. Where we're really seeing this harm show up today, though, as much as we say we haven't really seen it in elections yet, at least not in a major way, is that it is affecting women in particular. Just to be really clear: when we talk about deepfakes, more than 90% of the deepfakes on the Internet right now are of women, and they're almost entirely pornographic in nature. And that is a whole other side of deceptive use of AI, where you're taking a celebrity's face, or a female politician, or someone else, and trying to depict them in a place they weren't. What is particularly problematic and bothersome to me, anyway, when I think about it for women in public life, politicians, is the chilling effect it has on women entering this space. It's already quite difficult to get women to run for office; I think that's a trend we see pretty globally. When you add this additional factor, knowing that once they step into the public eye this is almost guaranteed, or at least incredibly likely, to happen to them, I could see that becoming a real chilling effect, which is a real problem for our democracies. And then a third use of deceptive AI that we're seeing emerge and actually happening in the real world is around financial fraud. In particular, these voice deepfakes are being used to target vulnerable populations such as the elderly.
They get a phone call, they think it's their grandchild, they need money quickly, they're stranded, they've been kidnapped, whatever the crisis is. A lot of these folks don't know this technology is out there, so it doesn't occur to them that it's not actually their grandchild. And just like phishing campaigns of the past, which inject a lot of urgency and crisis, a lot of people are losing money to these scams. We're also seeing it at the corporate level. There was a company in Hong Kong where an individual at the company was having a video chat with their CEO, who was telling them to wire millions of dollars urgently. They wired the money, and it turned out that was not their CEO. It was a live deepfake, obviously a very sophisticated operation, but a live deepfake of their CEO, and they lost quite a bit of money that way. So these are ways we're currently seeing this technology used that are quite detrimental and certainly deceptive in how it's being deployed. [00:32:32] Speaker C: So just going back to the second point, around the pornographic deepfakes specific to women: would you say we're now going to see a reduction in women wanting to take on, like you said, political roles or other prominent positions because of that chilling effect? [00:32:45] Speaker A: I mean, I don't have any data to back that up. I can say anecdotally, in conversations I've had, I can hear some of the anxiety from women in the political space around this topic. I do think that's an area that's ripe for better understanding, and we're having conversations with women political leaders and other organizations to understand how their members are feeling about this issue. So I should be cautious about saying definitively that it's having a chilling effect. I think it seems quite likely that it could, at least at the individual level for some women.
Whether or not it's happening as any sort of trend, I don't know for sure. But I do think at the individual level there are a lot of people who are worried about this and see it as just yet another reason why it's not worth it to put your hat in the ring. [00:33:30] Speaker C: So I now want to talk a little more about you and your role. As I announced at the top of the interview, you lead Democracy Forward. Tell us a little more about this, and what does it mean? [00:33:43] Speaker A: Sure. Our team has been doing a version of this work for really the last 10 years. What we set out to do is work with election authorities, political campaigns, party committees, and then all of these other groups that surround and empower a democracy. That includes the news and the media and journalists, but it also includes people who work in academic institutions at times, NGOs, think tanks, et cetera. We have a few objectives when we engage with them. One is around protection for their infrastructure. This is a little less interesting; it tends not to be the topic of a lot of podcasts. But we want to make sure, especially when our technology is being used in a critical way around an election, that we are working closely with our customers on things like reviewing their infrastructure when they want us to, when we're invited in, providing them with recommendations, and creating real connections and networks into our team, which is then a connection point back into the broader company. What we've found is that while a lot of these organizations are well known, and may seem big because they have politicians involved or because they run the elections for a country, the reality is a lot of them are low resource and actually quite small, but really highly targeted.
And so that's kind of how we define the groups we work with: these highly targeted, low-resource groups who are fundamental to democracy. We spend time with them trying to figure out how we can be helpful. Sometimes that's cybersecurity protections; sometimes it's just giving them a phone number, so if something goes wrong or they have questions, they know how to find us. We do have programs and things we run with them, and as often as we can, we make that free or at low cost. Obviously we sometimes have legal restrictions working with governments, but where we can, that's our objective. So one of our key priorities is global elections, protecting and connecting with the folks in that space. Then on the other side of the work, we spend a lot of time considering what a healthy information ecosystem looks like and what Microsoft's role is in contributing to that. That includes, again, a lot of the same players, working with news organizations around the world and also working with our product teams, making sure that our colleagues within Microsoft News and within Bing and within Copilot have access to our expertise and, in some cases, our networks, when those can help them with things. Sometimes we help them access data sets relevant to the work they're doing. So we often serve as the subject matter experts on these topics for our product teams and try to support them in that way. All of that work has been going on for many years, but frankly, in the last two years a lot of it has been done through the lens of AI: thinking about where we can help extend opportunities using this new technology to the folks in this space, but also, and probably where we spend more of our time, unfortunately, looking at how people will weaponize this tool. What are our obligations?
Let's look around corners and try to anticipate how this might be misused, and then work both with those communities and with our internal teams to create the appropriate gating mechanisms, frameworks, protections, that kind of thing. [00:36:46] Speaker C: So, Ginny, just to build on that a little more, what about your team here in Australia? What are they doing? Is it the same sort of thing in terms of looking at how people are weaponizing this, or is there anything you can share? [00:36:56] Speaker A: Well, I'll start by saying, having spent a few days here already in Sydney with our team, I'm just so incredibly impressed with how they take on so many big issues that are important to both the company and the country. How we've been working over the past few days, or I should say the past few months, in the lead-up to this election is looking at a lot of the challenges we've just laid out and how they specifically apply here in Australia. So we're working with our threat analysts and our threat intelligence teams to try to understand, again with this idea of looking around corners, not only what we're seeing adversaries do right now in any ways that might impact the country, but also what we anticipate we could see. It includes having meetings and conversations with the election authorities and political parties and news media organizations around Sydney and Canberra and elsewhere to really identify what I just talked about: where can we be helpful from an infrastructure protection perspective? What kind of cyber protections and programs and services can we offer? And our teams here on the ground are really the ones who have those networks and relationships built out, and they will help us back in the States with the execution of whatever that support program looks like.
[00:38:05] Speaker C: So, Ginny, do you have any final thoughts or closing comments you'd like to leave our audience with today? [00:38:10] Speaker A: Oh, gosh. Well, I guess I'd say you never know what's coming. You can plan for a million different scenarios, and then it's the one you hadn't thought of that really surprises you. So when we think about how to build out a process of support, how we work with our colleagues in this space, and how we work with voters and consumers, the people out there heading into this election cycle who are probably not thinking as deeply on these issues as we are, one of the things I think is really helpful is to identify the areas where you can do something regardless of what the incident might look like. Preparation is just so important when it comes to these kinds of things. It'll almost sound repetitive, really, but I look at what people who run elections can do to create an environment that is quick to respond to crisis. I think election officials tend to be quite good at that; that's literally what they do. But for the voters and the people out there who are a bit confused about AI and a little concerned about the information environment, it's really about putting into practice this idea of: pause for a moment, think about where things are coming from and why they're targeting you with them, and then just be cautious with how you proceed in sharing information with others. I think that's about the best we can ask of each other. Again, we're not going to know what's going to happen over the next six months or the several months leading up to this election, but as long as people are thoughtful and prepared, I'm sure the Australian election will go very smoothly and that you'll all be ready for whatever comes at you. [00:39:49] Speaker B: This is KBKast, the voice of cyber.
[00:39:53] Speaker C: Thanks for tuning in. For more industry-leading news and thought-provoking articles, visit KBI Media to get access today. [00:40:02] Speaker B: This episode is brought to you by MercSec, your smarter route to security talent. MercSec's executive search has helped enterprise organizations find the right people from around the world since 2012. Their on-demand talent acquisition team helps startups and mid-sized businesses scale faster and more efficiently. Find out more at MercSec.com today.
