[00:00:00] Speaker A: I do feel that password managers in the cloud, from a sort of operational security point of view, are a fundamentally bad idea. I mean, you've just got a huge target in plain sight sitting there that everyone is going to want to breach, because then you've breached many systems rather than one. So the efficiency for the attacker with these aggregate providers is enormous.
[00:00:24] Speaker B: This is kamikaze.
[00:00:26] Speaker A: I'll be completely silent.
[00:00:27] Speaker C: As a primary target for ransomware campaigns.
[00:00:30] Speaker A: Security and testing, performance sustainability, risk and compliance, we can actually automate that, take that data and use it.
[00:00:39] Speaker C: Joining me now is Graeme Nielsen, founder and researcher at Siege. And today we're discussing how the security industry ignores the halting problem. So, Graeme, thanks for joining me and welcome.
[00:00:48] Speaker A: Hi, Carissa, glad to be here.
[00:00:50] Speaker C: Okay, so halting problem, I'm really curious to understand what do you sort of mean by that?
[00:00:56] Speaker A: That's a fundamental theory in computer science.
Around about the time of Alan Turing, when computers were first being conceived, he had the whole idea of a Turing machine, where you have some tape with some symbols on it, you manipulate the symbols, you have some memory and you output some information, and there are state transitions within the machine. Effectively all modern computers, phones included, are Turing machines. And what the halting problem states is that, given a Turing machine and an arbitrary input, there is no general procedure to decide whether that program will halt or run forever. What that translates into, effectively, is that if you consider the protocols and programs you might write, the input to a program that's provided by, say, a user or an attacker is itself effectively a program, because any program you write on a Turing machine can be executed by any other Turing machine. That's one of its properties. So when you accept input and process it, you're potentially always running into the halting problem, depending on how complex your language is and what you're processing. Effectively, the halting problem is the reason we have bugs in programs. It's a way of stating, for example, that if you have some loop, you won't know whether that loop will ever finish given arbitrary input. You can test some inputs, obviously, you can test some variables and see whether the program will halt under those conditions, but you can't prove that it will halt or not halt for all possible inputs. So in terms of security, that means you'll always have bugs.
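To make the argument above concrete, here is a minimal Python sketch of the classic contradiction behind the halting problem. It assumes, purely for illustration, that a perfect halts() oracle exists and shows why no such function can be written in general; the names are hypothetical and not anything discussed in the episode.

```python
# A minimal sketch of the classic contradiction behind the halting problem.
# Suppose someone handed us a perfect oracle halts(program, data) that returns
# True if program(data) eventually stops. No such function can be written in
# general; it is assumed here purely for the sake of argument.

def halts(program, data):
    """Hypothetical oracle, assumed to exist only for this illustration."""
    raise NotImplementedError("provably impossible to implement in general")

def paradox(program):
    """Do the opposite of whatever the oracle predicts about program(program)."""
    if halts(program, program):
        while True:   # the oracle said we halt, so loop forever
            pass
    return "done"     # the oracle said we loop forever, so halt immediately

# Asking whether paradox(paradox) halts forces the oracle to be wrong either
# way, which is why no general halts() can exist, and why in practice we can
# only test programs against particular inputs, never prove them for all inputs.
```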
[00:02:39] Speaker C: Okay, so this is interesting. So let's get into this a little bit more.
So you're sort of saying people are ignoring the halting problem, so therefore we have defects, vulnerabilities, bugs, et cetera. So then why are people ignoring it? Is it because now we've got to ship stuff faster? I was at a conference the other day and that was the whole conversation: shipping faster than the majority of people have ever seen. Right. But even if we go back historically and look at computer science, for example, this was happening back then, and now there seem to be more holes everywhere.
[00:03:15] Speaker A: It's been understood for a long time. And I guess my contention is that the security industry as a whole is ignoring it: vendors, people selling you security products, security solutions, even advice around how to develop programs properly. People tend not to consider the halting problem; they tend not to think about it. I mean, it's there in the background. There are some systems that use formal verification, which allows you to actually prove what a program will do, but those use cases are pretty small, like space, military, you know, critical systems. And they're not as flexible; a formally proven system is quite constrained. So I guess my point is not so much that computer science and people who use computers ignore it, but when people are giving you advice or trying to sell you security solutions, I would say they ignore the halting problem. They imply that you can get better security by, say, buying their box and sticking it in front of your box; that box processes all the bad stuff and you're safe. What the halting problem would tell you is that actually you've just put another full attack surface in front of your attack surface, neither of which can be proven to be secure, and therefore there will be more bugs, there will be more issues. So I feel they're being a bit disingenuous. There's a pretense that users are to blame for security incidents or developers are to blame for writing poor software, whereas in fact those people are powerless. This is something fundamental about what computers can and cannot do. There is no way to change that.
[00:04:49] Speaker C: Okay, so I want to talk about users now for a moment. When you and I spoke before the interview, like a week ago, you shared a couple of things with me, one of which was awareness programs. And I think "gangster" is probably a strong word, maybe it isn't, but tell me more about this, because I've had people who are really pro awareness programs and I've had other people say absolutely not, but I'm really keen to hear what's on your mind.
[00:05:15] Speaker A: Awareness programs. Well, again, I feel that's a little bit of the same thing, and email phishing is a good example of this. You have a protocol which, at the time it was designed, was fine: share information, send emails.
Now, of course, there are security implications of accepting emails, reading emails, people trying to con you with phishing. And trying to train people not to use email, computers and the Internet as designed, with rules like "do not click on links", seems to me ludicrous. The people sending phishing emails are doing what con artists have done for centuries. They're trying to con people. They are playing with people's psychology.
There are lots of contextual reasons why people might fall for those kinds of emails, those kinds of cons, but people fall for those kinds of cons on the phone, or in the street, or when they meet people. To try to train people out of that is to try to fix some fundamental psychology of human nature.
And it shows. I mean, how long have we had security awareness programs and phishing training? Does it work? Has it stopped phishing? I would say no.
[00:06:19] Speaker C: Okay, this is really interesting. You're right, we've had it for years. I mean, I've worked in companies before and thought, this is an awful training program. The other thing as well: I was speaking to a CIO, maybe nine years ago, and they said, yeah, KB, we do these things, but no one actually explains it; it's just "don't do it" rather than explaining why. So what do you think needs to happen? Because at the end of the day, like you said, these people are going to keep sending phishing emails, they're going to keep trying to con people out of their money. But then everyone, and I hate to use the term, goes on about the whole awareness thing, the awareness, the awareness. So how do we overall reduce people getting scammed, conned, whatever, when these things are going to keep coming and maybe some of these trainings are not effective? Talk me through it.
[00:07:10] Speaker A: Well, I mean, the fundamental problem with phishing is sender verification: how do you know who sent you the email? That's a fundamental problem of email. It's not a fundamental problem of people. It's a problem of email. And there are plenty of secure messaging systems around nowadays that we could use.
You know, the fact that all businesses still use email: it's convenient, it's historical, we have lots of sunk cost in email. But using a communication medium that doesn't allow verification of the sender seems foolish. I don't know, maybe I'm being naive, but it seems obvious to me. I mean, okay, I understand people aren't going to just stop using email overnight. But if you want to communicate with people in a secure fashion and not be subject to those cons, well, with email you're effectively allowing everyone on the planet to contact you directly and try to con you. And for businesses, surely it's not sustainable. You know, all the spates of targeted phishing emails against CFOs, when attackers know the CEO is travelling and ask to transfer money for some deal that is not yet signed, "so don't talk about it". We're all aware of targeted attacks like that.
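As an aside to illustrate the sender-verification point (not something walked through in the episode), here is a minimal Python sketch showing that SMTP itself happily accepts whatever From header you write; the addresses and relay host are placeholders, and controls like SPF, DKIM and DMARC only bolt verification on after the fact.

```python
# A minimal sketch of why the "From" header proves nothing by itself, assuming
# an SMTP relay that will accept the message. The addresses and relay host
# below are placeholders, not real systems.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "ceo@victim-company.example"   # nothing stops us writing any address here
msg["To"] = "cfo@victim-company.example"
msg["Subject"] = "Urgent wire transfer"
msg.set_content("Please transfer the funds for the deal we discussed. Keep it quiet.")

with smtplib.SMTP("mail.example.invalid", 25) as server:  # hypothetical relay
    server.send_message(msg)

# SPF, DKIM and DMARC bolt sender verification onto email after the fact, but
# they are opt-in and only as strong as the receiving side's checks.
```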
[00:08:29] Speaker C: Okay, I want to keep following this because I hear what you're saying and I want to explore it. So sometimes when I'm emailing people, because obviously we're in media and we primarily email external people all day, I see a lot of emails come back to me with a banner saying, this is an external email, be careful, all those sorts of things. So do you think that's enough? I know it's very much what you're describing, but at least it's giving something. Some of them are highlighted in yellow or red, quite bold, in your face, actually annoying to read sometimes, a bit distracting. So does doing that sort of thing in some way help solve the problem?
[00:09:06] Speaker A: I don't think so. I mean, the way email works, you're always emailing people outside your organization. That's one of its fundamental reasons for existing, isn't it? I get those warnings myself: every time I send an email to an external organization, I get a warning, so I just don't see the warning anymore. My brain just turns it off; it's just part of the page. I'm not proposing here that we necessarily try to re-engineer email. I just think the awareness program should really be about understanding what email can be used for, and basically how you cannot trust any email. So internal email, sure, but for external email I think you need other processes in place to ensure that communicating with external parties is not putting you at risk. So you understand the limits of what the email can tell you in terms of who you're talking to, who they might be or where they might be. I think that would be a better approach.
[00:10:01] Speaker C: Okay, so where my mind's going is this, because what you're saying makes sense, right? So, for example, travel on a plane. Should it be up to me as the traveller, the customer, the passenger, to ask, is this plane secure? Look at everything that's happening in the news at the moment. Is it on me? I'm not an engineer and I don't have anything to do with aviation, so why should it be on me to check whether this plane is going to be okay? I'm using that as an example of how sometimes in security we just think, oh well, Graeme should have known, because he did the awareness training. And I just think it's not these people's profession, it's probably not what they're interested in, yet we're still trying to hand over the blame and say, well, the user didn't think it through, therefore it's their problem. We've seen this over the years, and then we've tried to patch it with technology and more technology and all the things, and it's still, well, it's the user's problem. I've done so many of these interviews and I'm just trying to get a gauge on what we do moving forward, because it is unfair to expect people to know all the things about cyber security, right? That's exhausting. Otherwise we'd all have to know everything about construction and cars and vehicles and all of these other things. So how is it fair?
[00:11:15] Speaker A: Absolutely. And I think maybe, as computers become more ubiquitous, more embedded in everything and less a separate entity that you use, we need to move to dealing with these security issues in a way that's analogous to, say, safety. Because, as you point out, I don't know anything about planes; I shouldn't have to know about a plane before I get on a plane. I buy lots of electrical equipment, and electricity is dangerous, yet I don't need to know anything about that to use devices safely. I feel that should be the same approach for computers. You shouldn't have to stick a fork in the wall socket to become aware of the dangers of electricity, which is effectively what anti-phishing or security awareness programs do, I think.
[00:12:02] Speaker C: So why haven't we gotten to this point? Because it's not like computers were invented yesterday; they've been around for a long time, and we've got a lot of smart people out there. Do you think, and this is just a theory, that companies out there are saying, well, if I sell you the thing, I'm distracting you from fixing the root problem, because I can sell you this other thing to plug the gap that was there already? Or what do you think's happening here?
[00:12:28] Speaker A: Well, there's obviously a lot of technical debt on the Internet. It was built as a sharing network; you can think of it as a giant copying machine if you like. Initially a university network, obviously out of DARPA and the military, but initially universities. And having grown up as the Internet came into existence, I saw how it went from a sort of network of sharing and knowledge to one where, quite rightly, people said, hey, we can use this to run a business, I can make some money, I can provide a service. But a lot of the protocols were not designed with security in mind. So there's been a lot of backwards fixing, even, for example, of HTTP. There's no idea of a session in HTTP, so all business on the Internet using HTTP is a kind of retrofit of security on top to allow those things to happen. And again, obviously you can't just fix everything and turn it all off and change to a new protocol; it's not feasible. But we do have to start addressing these issues quite soon. I think the longer you wait, the harder it's going to get.
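To illustrate the "no session in HTTP" point, here is a minimal sketch of how a session is typically retrofitted with a cookie: the server hands out a random token and treats anyone presenting it as the same user. It is illustrative only; the function names are made up and real frameworks add signing, expiry, CSRF defences and TLS.

```python
# A minimal sketch of how a session is retrofitted onto stateless HTTP: the
# server mints a random token, hands it to the client in a Set-Cookie header,
# and treats every later request carrying that token as "the same user".
import secrets

SESSIONS = {}  # session_id -> user data, held server-side

def login(username):
    """Issue a new session and return a Set-Cookie header value."""
    session_id = secrets.token_urlsafe(32)      # unguessable random token
    SESSIONS[session_id] = {"user": username}
    return f"session={session_id}; HttpOnly; Secure"

def identify(cookie_header):
    """Map an incoming Cookie header back to a user, if the token is known."""
    for part in cookie_header.split(";"):
        name, _, value = part.strip().partition("=")
        if name == "session" and value in SESSIONS:
            return SESSIONS[value]["user"]
    return None  # no valid session: HTTP itself has no memory of who you are

set_cookie = login("carissa")
print(identify(set_cookie.split(";")[0]))  # prints: carissa
```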
[00:13:35] Speaker C: I'm known for being direct.
Let's be honest, nobody gets into technology leadership for the compliance paperwork headache. But if you're building or scaling a tech company, security frameworks like ISO 27001, SOC 2, Essential Eight, CPS 234 or GDPR aren't just tick-box exercises, they are business critical. That's where Vanta comes in. Vanta automates up to 90% of the work for security and compliance, helping you get audit-ready in weeks, not months. It integrates seamlessly with your tech stack so you can spend less time chasing documentation and more time leading innovation. If you're a CTO, CISO or head of security, it's worth taking a closer look. Visit vanta.com/kbcast, that's V-A-N-T-A dot com slash kbcast, to learn more.
Yeah, okay. And look, the Internet, right, it's really built of sticky tape, duct tape. Like, you know, it's not...
[00:14:38] Speaker A: I'm surprised it works most of the time, to be fair.
[00:14:40] Speaker C: It's just one of those things that people nowadays don't really think about. They can just go on the Internet; they don't really think about the mechanics, how it's built, et cetera. So, and this is going to be a hard question, but like you said, we've sort of just kept building stuff on stuff, and it's a rickety bridge we're walking on at the moment. It's going to be hard to knock the whole thing down and rebuild it, but what do we do to reinforce it, make it stronger?
[00:15:07] Speaker A: For me, the line is, as we discussed earlier, that computers are becoming more ubiquitous, embedded in everything, smaller and everywhere, and increasingly interacting with or influencing the physical world. Stealing money on the Internet, defacing websites or stealing digital information is one class of problem. But once all the potential security issues that we have in the digital realm can be exercised in the real world as well, in terms of smart cars, smart construction vehicles, all the IoT devices, the supposedly smart devices in your home, I think at that point, as this is happening, is where we have to start drawing some lines in the sand. And I think those kinds of devices that interact with the world have to have much stronger, I guess you would term it safety rather than security. That might be the way to term it: it is security, but looked at through a safety lens. So trying to persuade vendors, governments, people to think about it like that may get some traction in terms of fixing these things.
[00:16:19] Speaker C: Okay, let me zoom out for a second. Quick update: I've just Googled, to give you some insight, that the first standalone electric toaster, called the Eclipse, was made in 1893, well over 100 years ago. And it looks pretty dodgy; you can look it up afterwards. It looks like something you're 100% going to burn your hand on, something's going to happen. But look at the toaster now, right? It's completely different. I mean, unless you're putting a knife or something in it while it's still turned on, it's nothing like that one. The reason I'm bringing that up is, are we going to have to wait another hundred years? I mean, you and I won't be here to have that conversation, of course, but do you think we have to wait that long? To your point, computers are going to get smaller, they're going to get better. Will we stop having these sort of fundamental, flawed problems that we've had historically over the last 20, 30, 50 years, since some of the first computers started to come out?
[00:17:13] Speaker A: Well, that's my hope. I think it'll be quicker than the hundred years. There are some really pressing issues coming up at the moment, particularly around credentials. Look at breaches and credentials. I did some research recently where I was going on dark web forums and seeing what data is available, and there's some recent research, I think it came out today, actually, on people discovering infostealer logs and how many actual username and password pairs have been breached. We're into the billions. So once everyone's credentials are compromised, at that point I kind of feel maybe we are freed up to fix things. I'm being a little bit flippant, but it used to be that attackers would brute force credentials, and now they just have them.
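As a hedged illustration of how exposed credentials already are, the sketch below checks a password against the Have I Been Pwned corpus via its k-anonymity range endpoint, so only a five-character hash prefix leaves your machine. This is an aside rather than something from the episode, and the service's terms should be checked before real use.

```python
# A minimal sketch of checking a password against the Have I Been Pwned corpus
# using its k-anonymity range API: only the first five characters of the SHA-1
# hash ever leave your machine.
import hashlib
import urllib.request

def times_pwned(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():               # lines look like "SUFFIX:COUNT"
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

print(times_pwned("password123"))  # a very large number, unsurprisingly
```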
[00:17:57] Speaker C: But we must be getting close to that, because how many breaches have there been? I've been in multiple major breaches, multiple times. So this is where I'm curious to understand. All these breaches have been happening, especially here in Australia, like in 2022; there were a fair few of them. And there were people online, because I like to do research as well and see what the average person is saying, going, oh well, who cares, I was in the first, second, third breach or whatever it was. I've asked people on the show whether they think people have become desensitized, and they've said yes, et cetera. But do you think it's going to get to a point where it's like, well, no one cares, my stuff's out there anyway, and therefore we're in an even worse position as a security industry than before, because no one cares? Which means businesses, yes, they're regulated and they're going to get pinged by the government to some degree, but they may have less impetus to want to do anything because they figure no one really cares anyway. It always feels like, are we going backwards, because no one seems to care that much?
[00:18:54] Speaker A: I think that not caring from users is simply powerlessness, though. I think their only option is to not care, because the breaches are happening and they have to use these online services to do their life admin. I mean, you can't not have online banking; health is online; interacting with the government is online. You can't avoid it. And they are not the ones that are being directly breached.
It's the services, the databases, that are being breached. So the user themselves has basically very little power, I would say. And so I think their not caring is simply: I can't do anything about this, so why care? What can I possibly achieve by caring?
[00:19:37] Speaker C: So do you think companies are thinking about that? That, well, we made a mistake, and the people are powerless? And I get that, you're right, we're forced into it. That's why I keep saying to people, when they say, oh, you know, my privacy and all these things: yeah, but if you want to operate in today's society, you're going to be on the Internet, which means you have to accept the risk of potentially being in a breach and some loss of privacy.
[00:19:58] Speaker A: You have no idea how that company is storing your data or how it's using it. Again, it comes back to this: the user really shouldn't be the focus of security. It needs to be the systems that we're using. Businesses, I feel, vary. Sometimes businesses care about security because of the people in them or the particular nature of their business, or they're forced to because of compliance, potentially. But there are plenty of companies who I think take the wildebeest approach: we're all running across the river out on the savannah, and as long as we're not the oldest or the slowest at the back, there are enough targets that the attackers will get someone else and we'll be fine. We'll have our business and we'll do our exit before any devastating events happen.
[00:20:39] Speaker C: And would you say that's sort of the general mindset at the moment?
[00:20:41] Speaker A: Oh, it varies enormously. I mean, again, it all comes down really to the people in those companies, and as those people change, I feel companies' security profiles change. Look at Microsoft, for example. It started off very, very weak in security.
They suffered a lot of breaches, a lot of exploits for attackers. Then they had to take security seriously, and they took it very seriously and improved enormously. And then recently I feel they've been kind of going the other way again. I think it requires constant effort, again because of the halting problem and the systems they're using, to maintain their security.
[00:21:15] Speaker C: So, given what you do and your experience doing a lot of research, what do you think people, people as in companies, are most upset about at the moment? And I use that word because obviously I'm on social media reading what people are saying: they're frustrated with a vendor, or with users, or they can't get enough money, there's all of these things. But what would you attribute as people's main frustration at the moment?
[00:21:41] Speaker A: For business at the moment, because of the sort of political situation in the world and what's going on, there's an enormous amount of what I call political denial of service going on. I call it political because there's no ransom request, there's no communication, and it's not even necessarily public infrastructure that's being attacked. It's simply denial of service to attempt to damage business. What I'm hearing at the moment from Australia and New Zealand is just a huge amount of that causing impact to businesses, basically.
[00:22:15] Speaker C: Okay, so on the user side then: I was scrolling through Instagram reels and there's this random video, I didn't save it, and it made me laugh so much. There's this woman in her car, almost completely having a meltdown. She's like, I'm so sick of using multi-factor authentication, I just can't do it anymore, I'm over it, I'm sick of it. She was really raging hard.
So, you know, I get it, right? We're trying to be secure and do these things, but obviously it just annoys people. People just want to log in; they don't want to go to their authenticator app and do all these other things. It annoys them. So I would say that's probably the main consensus of people's frustration from a user's perspective.
[00:22:53] Speaker A: I was considering that just the other day. I spend an inordinate amount of time logging in, constantly, just all the time, and it is very frustrating. And because I'm a security person, everything's two-factor, because it has to be; as we talked about, your credentials are out there. So that's definitely a frustration. And again, that's possibly why people are sort of turned off by security, or by talk of security or being more secure: they just feel it's going to cause them more frustration and more effort, which is a problem as well. And I think the implementations from Google and Apple, where they allowed you to have passkeys rather than passwords, were a slightly missed opportunity. The implementation of those wasn't ideal, and they're attempting to lock people into their little ecosystems. Because I think passkeys would be a good solution to the password problem, and to multi-factor authentication, and to clicking on motorcycles or buses, especially when the machines are better than humans at the captchas these days.
[00:23:52] Speaker C: Totally get it. So then, I mean, passwords infuriate me, right? It's like, oh, you shouldn't reuse the same password, and it's like, yeah, but it's some site, something I need to log into, that I don't really care about, right? And then it's so hard to remember. So then people would say, okay, well, Carissa, you can get a password manager. But what gets me about password managers is that, as you've seen, they've just been breached anyway. So I'm paying for a password manager, a service to help me with not remembering all my passwords, and then it gets breached. So what's happening?
[00:24:23] Speaker A: Well, the whole halting problem is happening. All the protocols, all the languages we're using: their complexity is not being considered, and therefore we are introducing bugs. Because the people building these systems don't want to be breached; obviously these are unintentional bugs, unintentional security bugs, due to the tools that people have used to build the products. It's software art, it's not software engineering. We call it software engineering, but there's no proving of anything; it's simply how the developers decide to write the code. I do feel that password managers in the cloud, from a sort of operational security point of view, are a fundamentally bad idea. I mean, you've just got a huge target in plain sight sitting there that everyone is going to want to breach, because then you've breached many systems rather than one. So the efficiency for the attacker with these aggregate providers is enormous.
[00:25:16] Speaker C: Then the other thing, which was really interesting, and I know we've covered a lot of ground already, but one thing that you raised with me was that we have all this technology, we've got more vendors than we've ever seen before, right? But then, as you've rightly pointed out throughout this interview, there are more breaches, more vulnerabilities, more defects than ever. And I know we've discussed the halting problem, but outside of that, how do we have more money being invested into technology than before and still have more issues than before?
[00:25:46] Speaker A: Well, that's a good question. The more code you write, the more bugs you have. That's just a fact. No one's, no one's writing perfect code, no matter how good they claim to be.
[00:25:55] Speaker C: I don't want to let this thought leave my mind. Do you also think it's because of the lower barrier to entry now? With AI and things like that, people who are not developers by trade, quote unquote, are just shipping code. So the code that's being written isn't the best or the most secure, and then it's being pulled from open source repos, so do we really know what's in this code? Or do you think there are just more problems than before, beyond Graeme developing the code securely and then shipping it? Talk me through that.
[00:26:28] Speaker A: Yes, and I think there's inherent complexity that's hidden from you to some extent when you write software now, where you're importing other people's libraries, which import other people's libraries, sort of ad infinitum. The interaction and the complexity of those things leads to bugs. A lot of bugs are in the interfaces, so the parsing of data, the passing and communication of data, and obviously that's happening a lot more. And the first programs were machine code: the code you wrote, bit for bit, was the code that ran. Then we moved to assembly language, where you're almost writing machine code, but slightly abstracted. Then we have object-oriented and declarative languages, and now AI writing code. All through those steps we are abstracting away from the actual bytes running on the CPU; there's more code interpreting your code, if you like. So again, there's more opportunity for bugs. I have a horrible feeling that there's a bit of a tsunami of AI-generated code bugs that we haven't quite seen yet. I suspect that might happen in the next year or two, depending on how successful people actually are at shipping product if they're just using AI to write code. I'm doubtful that that is completely doable at present, so we might be saved from that.
[00:27:53] Speaker C: But then would you say as well, people could hit back and say, well, you've got things like SBOMs, so you can look at the transparency, the compliance, you can go through it with a fine-tooth comb? Do you think people would respond that way, to say, well, you can look at the detailed list of the ingredients in this particular code?
[00:28:13] Speaker A: You can try to look at the list of ingredients in that code. If it's the same ingredients tomorrow, that would be good, but it may not be the same ingredients tomorrow; the ingredients may have changed slightly. And if you've ever looked at an SBOM for a significant product, how are you going to assess whether the vulnerabilities within the SBOM are actually going to be exercised by your code? For example, if you have an SBOM with a whole lot of vulnerabilities in some libraries, there's no guarantee that you're actually touching those code paths, that those vulnerabilities are relevant to you, and it's very hard to determine that. So I put SBOMs, to some extent, in the same category as lists of vulnerabilities: people scan themselves, they have a list of vulnerabilities, they have an SBOM with some vulnerabilities, they put them on a risk register, they determine the likelihood is low, they kind of polish them and put them in a drawer. I'm not sure lists of vulnerabilities, and generating more lists of vulnerabilities, is particularly useful. I don't think we have a problem with finding vulnerabilities or exploiting vulnerabilities. We need to get to the causes rather than looking at the symptoms all the time.
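To show what "reading the list of ingredients" looks like in practice, here is a minimal sketch that walks the components of a CycloneDX-style SBOM in JSON. The field names follow my understanding of that format and should be treated as assumptions; note that nothing here tells you whether a vulnerable component is actually reachable from your code.

```python
# A minimal sketch of reading the "list of ingredients" out of a CycloneDX-style
# SBOM in JSON. Field names may need adjusting for your tooling's output.
import json

def list_ingredients(sbom_path: str) -> None:
    with open(sbom_path, "r", encoding="utf-8") as f:
        sbom = json.load(f)
    for component in sbom.get("components", []):
        name = component.get("name", "<unnamed>")
        version = component.get("version", "<unknown>")
        purl = component.get("purl", "")         # package URL, if present
        print(f"{name} {version} {purl}")

# list_ingredients("bom.json")  # hypothetical file produced by your SBOM tool
```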
[00:29:21] Speaker C: So what do you think needs to happen long term? We've discussed a little bit of history, what's happening, some of the issues, but how do we move forward? And I mean, there's no perfect place, I understand that, I get that. Even if you look at cars at the minute, you've got people who are so pro, like, no, I'm never going to drive an electric vehicle or an automated car, and you've got your hardcore, no, I drive this style of car with diesel, for example. So what needs to happen in the tech space, the security space? Do we just keep going as we're going, or do you think there'll be an inflection point? And what I mean by that is, when OpenAI really launched ChatGPT hard in 2022, that was a bit of an inflection point for the industry, right? People sort of changed their mindset. Do you think something like that will come along which will change our overall thinking, or what do you think needs to happen here?
[00:30:15] Speaker A: I feel it's going to be a little more like the aviation industry was: a lot of people died in the early days, and when I say early days I mean, you know, the 60s and 70s, before actual aviation safety improved. So I'm thinking that maybe computers and security will be something similar. As we pointed out, computers are driving cars, or health devices, potentially with AI-generated code, so maybe a lot of people get hurt, and that's the inflection point. I'm slightly pessimistic about humanity's ability to actually change things for the good of all. We're all quite self-centred at the moment, so if it's not happening to you, as we've said, people aren't too concerned.
[00:30:58] Speaker C: So what do you think people are concerned about now? When I ask that question, what I mean is this, and it's just a general view: at the end of the day, people are concerned for themselves, and that extends to, I've got to keep my job, I've got to hit my KPIs, got to make sure we're hitting our targets, got to make sure we don't get breached, because if we get breached, I'm not going to hit my KPIs and go on holiday, you know what I mean? There are all these things. But what would you attribute as the main concerns? Because if people don't have to do something new, the majority of people just don't want to, right? And I get that, and there's risk. But what do you think people are worried about, and do you think we'll get past that point as an industry? Or, as you said, does something catastrophic have to occur, like lots of people start dying, which could happen? Where do you think that sits now in terms of people's mindset? Tell me what you think.
[00:31:53] Speaker A: Okay, good question. If you know someone, for example, if you've met someone who, due to some breach, has been scammed, actually lost money or been affected in some way, they are very concerned. Whereas other people, I think, feel there's nothing to worry about. But it's a little like risk assessment: everything's unlikely until it happens, and then when it happens you're like, oh shit, that's really bad. You knew it was going to be really bad if it happened, but until it happens, it's not really viscerally real to you; you don't consider it in the same way. So I'm not sure. I think users, as you pointed out, are more frustrated with security, unless they've actually been conned or scammed or had some impact as a result of a breach. If your email is compromised, for example, it has a very emotional effect, a bit like somebody burgling you, which is quite emotionally hard; someone's been raking through your information or your physical stuff. There's a very different concern and assessment after that event as opposed to before. Yeah, I think most people find security a pain in the ass and wish it would go away. They don't want to log in, they don't want to have to do captchas; it just stops them doing their job, it's a hurdle. Unless, as I say, they've had an event, in which case they typically have a much different view of the world and of security.
[00:33:14] Speaker C: Well, just going back to passwords for a second. I mean, again, I can't deal with passwords as a consumer, so I was always curious. Having worked in big corporations in the past, you'd see so many people down at the help desk going, oh, I can't log in, I've got to reset my password. And I was curious to understand: I wonder how many hours of productivity and resources have been spent on IT help desks helping people reset their passwords, or on someone who can't log in to do something because of a password. In terms of hours, resources, money, just trying to get into a system to do work.
[00:33:46] Speaker A: I think if you could solve the password problem, you'd probably be very wealthy, Carissa. It's a fundamental problem. As I say, I think passkeys were good; I think they've just not been implemented well. They've been implemented, I guess, to be a differentiator for the business, and they want to lock people into the ecosystem. So I think if we had a kind of open...
Well, I mean, we have an open standard, but if we had a sort of open implementation of passkeys, I think that would help. Because you want the machines to do that authentication for you. As we all know, humans are terrible at remembering complicated passwords, and why should they?
[00:34:24] Speaker C: Right. Like you had been saying.
[00:34:25] Speaker A: Yeah, it's crazy. And all you're doing nowadays with password managers is basically making the machine generate the password and then copying and pasting it. So passkeys should be what we're doing. We just need an implementation of them, or a way to force vendors to make it easy to transfer them from service to service. Although, again, I guess that provides attack surface as well. So there's always some conflict between business and security, I guess, in that respect.
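For readers unfamiliar with what passkeys replace passwords with, here is a minimal sketch of the underlying challenge-response idea: the service keeps only a public key and verifies a signature over a fresh challenge. It uses the third-party cryptography package and Ed25519 purely for illustration, and omits attestation, origin binding and everything else real WebAuthn does.

```python
# A minimal sketch of the challenge-response idea underneath passkeys (WebAuthn):
# the service stores only a public key, sends a fresh random challenge at login,
# and the user's device signs it with a private key that never leaves the device.
import secrets
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the device creates a key pair, the service keeps only the public half.
device_key = Ed25519PrivateKey.generate()
registered_public_key = device_key.public_key()

# Login: the service issues a fresh challenge...
challenge = secrets.token_bytes(32)

# ...the device signs it (in real passkeys this is gated by a local biometric or PIN)...
signature = device_key.sign(challenge)

# ...and the service verifies. There is no shared secret to phish or to breach.
try:
    registered_public_key.verify(signature, challenge)
    print("login accepted")
except InvalidSignature:
    print("login rejected")
```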
[00:34:55] Speaker C: So let me explore this a little bit more. There are people out there I've been speaking to, and their theory, and I mean it depends on who you talk to, and I'm speaking to people at all different levels, and what they're talking about is always interesting and important. Now, people are saying the fundamental security issue is at the identity level. Would you agree with that?
[00:35:14] Speaker A: Definitely, definitely. I think trying to differentiate between, you know, humans and bots, identifying who you are, is absolutely fundamental. And this is where there's a lot of tension, for me anyway: you don't necessarily want a single identity, and you don't necessarily want a national identity that allows surveillance.
So multiple identities are nice. I mean, even with different emails and passwords, if one is breached, it's not devastating for you. People already have multiple kinds of identities, and as you even stated yourself, you have some services that you don't even care about. Your identity on a recipe site is different from your identity at the bank, and that's a good thing. So there's a problem with folding all of those into one identity, which I think we should avoid, because when your one identity is compromised, then you're fucked, to put it mildly.
[00:36:16] Speaker C: True, true.
[00:36:17] Speaker A: I think multiple identities that also preserve privacy, that's where I feel we should be going. That's where the problems are. That's what you want to solve. That's what I would like to solve.
[00:36:28] Speaker C: So where would you say as an industry we are in this identity journey? Right.
[00:36:32] Speaker A: I don't think we're anywhere; I think we're just all over the place. Email, which, as we pointed out, isn't the best or most secure system, is what's used as identity at the moment. And it's okay, but when it's breached, that's it: everyone's password resets go to their email. Now, there are a number of systems I've noticed where, rather than having any kind of password, when you log in you simply get a token emailed to you and you then use that. But all that's really doing is pushing the identity problem to your email, which isn't necessarily that secure. I mean, it all depends on how strong your passwords are.
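To make the "token emailed to you" pattern concrete, here is a minimal sketch of a magic-link style login token: the server signs the email address and an expiry with a server-side secret, and whoever controls the inbox can log in, which is exactly the point being made above. The names, secret and expiry window are all illustrative assumptions.

```python
# A minimal sketch of the "we'll just email you a login token" pattern: the
# service signs the email address plus an expiry with a server-side secret and
# mails the result as a link.
import base64
import hashlib
import hmac
import time

SERVER_SECRET = b"replace-with-a-real-random-secret"

def issue_token(email: str, ttl_seconds: int = 900) -> str:
    expiry = str(int(time.time()) + ttl_seconds)
    payload = f"{email}|{expiry}".encode("utf-8")
    mac = hmac.new(SERVER_SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + mac

def verify_token(token: str):
    """Return the email address if the token is genuine and unexpired, else None."""
    try:
        encoded_payload, mac = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(encoded_payload.encode())
        expected = hmac.new(SERVER_SECRET, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(mac, expected):
            return None
        email, _, expiry = payload.decode("utf-8").rpartition("|")
        if int(expiry) < time.time():
            return None
        return email   # "authenticated" as whoever controls this inbox
    except (ValueError, TypeError):
        return None

token = issue_token("user@example.com")
print(verify_token(token))  # prints: user@example.com
```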
[00:37:10] Speaker C: But then the other thing, just to extend that more: there are machine identities, right? So now that's another problem. We've got the physical human identity, and then we've got machine identities, and there are thousands of them, to the point where people don't even know what machine identities are in their environment. So then we've got another problem.
[00:37:28] Speaker A: With cloud service providers, there's a lot of confusion when they write software around, say, the phone versus the user. What I mean by that is they assume, for example, if you want to have an Uber account, they link it to your phone; you can only use the app on that phone, and the phone is the identity.
[00:37:50] Speaker C: Yeah, that's annoying as well though.
[00:37:52] Speaker A: Yeah, the phone is the identity, and they don't actually understand it. You can create lots of other identities that are not actually phones, but that use the API in a similar way and look like phones, look like people. So there's confusion there around that. And a lot of people, I guess their whole life is in their phone to some extent; I guess they are their phone. But it's not necessarily unique or indivisible.
[00:38:12] Speaker C: So if you were to hypothesize about identity, do you think, moving forward, and this can sound so bad but it's all I can think of right now, when we're born, are we going to get a token? Like, here you go, Graeme, here's your blue token and this is your identity, just with how the world works? Because even with phones it still gets me, right? People can easily just port your number. SIM porting, how easy is that?
[00:38:36] Speaker A: Yeah, that's it.
[00:38:37] Speaker C: And no one's thinking about fixing it?
[00:38:40] Speaker A: No. Phone companies have to allow you to port your SIM across providers, is what they would claim. And I don't think the solution to that is technical; I don't even think the problem is technical. The problem is that help desks are there to help, so why would you be surprised that when someone phones up and asks for help, they give them help? And again, blaming the help desk, or the people who are doing the SIM port, seems to me a bit disingenuous. That is their job, it's what they do daily. Asking them to spot someone trying to port a SIM illegally isn't realistic; there just needs to be a different process for porting SIMs.
[00:39:28] Speaker B: This is KBCast, the voice of cyber.
[00:39:32] Speaker C: Thanks for tuning in. For more industry leading news and thought provoking articles, visit KBI Media to get access today.
[00:39:41] Speaker B: This episode is brought to you by MrKsec. Your smarter route to security talent Mercset's executive search has helped enterprise organizations find the right people from around the world since 2012. Their on demand talent acquisition team helps startups and mid sized businesses scale faster and more efficiently.
Find out [email protected] today.