Episode Transcript
[00:00:15] Speaker A: Welcome to KB on the Go. Today I return to the Tech Leaders 2024 conference here in the Hunter Valley, two hours north of Sydney. Tech Leaders is the premier networking and education event for journalists and technology vendors in the IT and cybersecurity sector. Running across two days, the event brings together government representatives, industry analysts and technology journalists to hear from technology companies. I've lined up a few guests to appear on the show today, so please keep on listening.
I'm joined now by Chris Difley, senior director for client security at Optus. So Chris, thanks for joining and welcome.
[00:00:45] Speaker B: Thank you. Pleasure to be here in the sunshine of Hunter Valley.
[00:00:49] Speaker A: So just before, Optus announced an update. Maybe talk through what that update is, and then the benefits of it. Let's start there and then we can go into a few other things after that.
[00:00:58] Speaker B: Yeah, sure. So today we announced that we're launching the Optus Managed Threat Monitoring service, which is powered by Devo Technology. We're very excited about this platform. Whatever we sell to our customers, be it a contact center, be it an SD-WAN service, whatever it is, we want to be able to say that we can back that up with the absolute best-in-class cybersecurity. In the back end it's the right technology, automated, leveraging the latest analytics engines using AI and ML, to put the focus more on the technology looking through reams and reams of data, rather than relying on human beings to search through a lot of events.
[00:01:43] Speaker A: So just go back to the AI component now. Obviously we're seeing a trend in the space where we are combating AI on the defense side, because cyber criminals are using AI. Where do you sort of see this going now, moving forward? You've touched on it a little bit throughout your presentation, but maybe you could elaborate more: we are seeing this trend, but what does it mean moving forward for how AI is going to be used in the future?
[00:02:07] Speaker B: Yeah. So it's so embedded into this system. And we ran a POC, a proof of concept for about 18 months to make sure we chose the right technology partner to give us that benefit and to be able to meet those AI challenges in the threat landscape head on.
[00:02:25] Speaker A: What do you mean by AI challenges?
[00:02:26] Speaker B: We've seen trends within the cybersecurity landscape. There are a lot more state actors causing the threats, there's a lot more technology being used to influence how those threats come in, and we've seen a lot of compromises across the Australian landscape as well. You can't tackle that with human beings. You've got to have the right technology investment.
This platform comes with an inbuilt SIEM and SOAR capability. So not only do you have your security information and event management, it's also the same low-code, no-code, cloud-native platform that my technology guys can use. That gives us the ability to orchestrate and automate defense against that. So to give you an example, the Devo platform comes with 400 days of data. Its analytics engine is really, really strong; that's what we're partnering with here. And that amount of data allows us to see some of the more complex, longer-term threats that we know are in the landscape. Research shows that threat actors can be in compromised organizations for up to 200 days. If you're only looking at the last 90 days' worth of data, you're not picking up on some of the intel from what's happened previously. Once you've got that longer-term view, you can start to work out that there are threats taking place across a longer time period. That's just one example of the technology it uses.
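The retention point above is worth making concrete. Here is a toy sketch (not the Devo engine, and the event data and threshold are invented for illustration): an implant that beacons out only once every couple of months produces too few events inside a 90-day window to cross an alert threshold, but a 400-day window catches it.

```python
from datetime import datetime, timedelta

def suspicious_hosts(events, now, window_days, min_hits):
    """Count matching indicators per host inside the retention window."""
    cutoff = now - timedelta(days=window_days)
    hits = {}
    for ts, host, indicator in events:
        if ts >= cutoff and indicator == "beacon":
            hits[host] = hits.get(host, 0) + 1
    # Flag hosts whose beacon count crosses the threshold.
    return {h for h, n in hits.items() if n >= min_hits}

now = datetime(2024, 6, 1)
# A "low and slow" implant beaconing roughly every 60 days: only 2 of its
# events land in any 90-day slice, but all 6 land in a 400-day slice.
events = [(now - timedelta(days=d), "host-a", "beacon")
          for d in (10, 70, 130, 190, 250, 310)]

print(suspicious_hosts(events, now, 90, min_hits=3))   # set() - missed
print(suspicious_hosts(events, now, 400, min_hits=3))  # {'host-a'} - caught
```

The real platform correlates far richer signals, but the arithmetic is the same: a longer window turns isolated events into a visible pattern.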
[00:04:05] Speaker A: Just go back a step. You said before the analytics engine being strong. What makes it strong?
[00:04:10] Speaker B: Look, it's cutting-edge technology, built out of Cambridge, Massachusetts. But it's three things they ingest that make it work for us. There are the out-of-the-box use cases, which are really strong and based on actual breaches globally. There are a lot of regional use cases that we bring to bear, not just within Optus but with our regional partners in the Asia Pacific region via Singtel. And then on top of those use cases, we have the AI and ML engine that's constantly evolving and learning, so human beings don't have to tie the links together all the time; we use that technology base to throw up scenarios every day. My teams are good at threat hunting, and we really invest a lot of time in that. When we find those behaviors, those anomalies, we like to orchestrate and automate them so we don't have to go looking for them again. We can put them in a dashboard, put them in the tool, and be confident that the technology is working through what that looks like. We're very excited about those uplifts and changes to how we work in that space.
[00:05:25] Speaker A: And just sort of pressing a little bit more, do you mean something like critical attack path? That's what you meant before, correct? So more generally, why would you say this is important in terms of this announcement? Because, yes, you might be first in region, but other players are doing similar things. So what makes this different, would you say?
[00:05:46] Speaker B: So, we know we're not selling this individually. We're selling contact centers to big clients, big four banks, federal government agencies; selling managed network services, SD-WAN technology. We've got to be able to show that we can protect all of that data, all of those services, in the strongest way possible. We invested a lot of time in the PoC, the proof of concept, with this vendor, and they really did come out on top of all of the different partners we could have chosen, to give us that technology advantage as a layer over the security operations centers that we run through our Sydney Macquarie Park facility. We've got the people: Optus has always invested heavily in training and making sure we retain the right staff within our business, and we've got deep, mature partnerships with our partner ecosystem, Devo being one that's coming on board. You pull all of those things together, the services that we're trusted on by government, by enterprise, by the financial services industry, and we have the right security back end to ensure that we're looking at all the latest threat paths.
[00:07:02] Speaker A: So I won't have time to go into it too much today, but would you say people still get confused about the difference between a SOC and a SOAR? And what would be the main difference?
[00:07:12] Speaker B: Yeah. So the SOC, to us, is our facilities. We have two of them in Sydney. We have our G-SOC, which is our federal government SOC. It comes with different controls, very much aligned to the Essential Eight, the ISM, the Protective Security Policy Framework: no less secure, heavy defence in depth to make sure that all of that data is protected, almost air-gapped as best we can. But there's also a lot of compliance and regulatory work that we need to go through, be it ISO 27001 or SOC 2 for the federal government industry. All of that forms part of our overarching SOC. There's a lot of training for the people within that space, a lot of focus on GRC, the governance, risk and compliance, making sure that we've got that as absolutely clean as we can. The most important thing for me would definitely be the discipline and rigor in your security hygiene: making sure you have that lens on all of your vulnerabilities, making sure that you patch in the shortest possible time, making sure when a critical vulnerability comes in, which is almost daily now, a huge ramp-up from even where we were last year, that we're patching it, working with our vendors and getting it fixed within 48 hours. All of that is your holistic SOC. The SIEM component is the 24x7 cybersecurity event management piece. But what Devo gives us is that SOAR capability on top, to really orchestrate and automate those events, to make sure that we understand in real time what the threats are. In a nutshell, to summarize all of that, it's about looking deeper into our customers' data and identifying threats, and partnering with customers through the methods they're accustomed to today when they have a security event. That's what builds trust within the customer base.
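The SOAR idea described here, orchestrating a response instead of having a human triage every event, can be sketched in a few lines. This is a hypothetical illustration, not a real product API; the `Task` class, `triage` function and SLA values are invented, with the 48-hour figure taken from the critical-patching target mentioned above.

```python
from datetime import datetime, timedelta

class Task:
    """A remediation task opened automatically by the playbook."""
    def __init__(self, cve, severity, deadline):
        self.cve, self.severity, self.deadline = cve, severity, deadline

def triage(event, received_at):
    """Route an event: automate the critical path, queue the rest."""
    if event["severity"] == "critical":
        # Orchestrated path: remediation must land within 48 hours.
        return Task(event["cve"], "critical", received_at + timedelta(hours=48))
    # Everything else gets a longer, routine SLA.
    return Task(event["cve"], event["severity"], received_at + timedelta(days=7))

now = datetime(2024, 6, 1, 9, 0)
t = triage({"cve": "CVE-2024-0001", "severity": "critical"}, now)
print(t.cve, t.deadline)  # CVE-2024-0001 2024-06-03 09:00:00
```

A real SOAR playbook would go on to notify vendors, open tickets and track closure, but the core pattern is the same: the event itself triggers the workflow and the SLA clock.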
[00:09:17] Speaker A: I'm joined now by Chris "Gonzo" Gondek, solutions engineering manager from NetApp. So thanks for joining, Chris, and welcome.
[00:09:23] Speaker C: Thanks for having me on the show.
[00:09:24] Speaker A: Okay, so today in your presentation you discussed the flexible environment. So maybe, what do you mean by that term, flexible environment?
[00:09:30] Speaker C: I guess it comes down to the ability to have freedom of choice in infrastructure. If we think about all workloads: operating systems, virtual machines, applications, databases, they all effectively constitute data at the end of the day, and data lives on storage. So being the intelligent data infrastructure company, we provide storage capabilities on premises, in data centers, as well as in all of the major clouds like Microsoft Azure, Google Cloud Platform and Amazon Web Services. So flexibility means the omnipresence of leveraging these infrastructures, the appropriate workload in the appropriate cloud. But it's also flexibility in the functions of the storage technology itself: multiple storage protocols, multiple data activities like classification, security, data protection and disaster recovery, and things like that. It's a flexible use of storage technologies for different data outcomes.
[00:10:30] Speaker A: Would you say storage is one of those things that seems to get a bit relegated by people? Like, it's like, okay, well, it's just storage, we put it to the back of our minds, we don't have to think about it again.
[00:10:40] Speaker C: Absolutely. I think it's probably not front of mind when people think AI. They're probably thinking about GPU systems and the number crunching that goes on when you're doing generative AI. They're not thinking about the fact that 85% of AI and machine learning projects fail because of data access issues; it's data that's being fed into these large language models. In other scenarios, when we think about security, there's a lot of focus on cybersecurity, not so much on cyber resiliency. So it's network-centric thinking versus data-centric thinking. And when you think about compliance and governance, needing to turn data into information so that we can classify it: these are storage conversations, these are data storage conversations. They're not network conversations or user access control. And lastly, it's the data itself that is the attack surface in a security conversation. So being able to create fast, resilient copies and fast recovery is a storage conversation as well.
[00:11:45] Speaker A: And I asked that question because I do speak across multiple different disciplines and it's something that I think people just forget about.
[00:11:52] Speaker C: Absolutely. And we're very supportive of that.
[00:11:55] Speaker A: And it's sort of like out of sight, out of mind. That's why I used the word relegate before. Okay, so there's a couple of things in there that I want to speak about a little bit more. So you mentioned, Gonzo, security... sorry, storage security by design. So maybe talk me through it.
[00:12:09] Speaker C: So security by design means that when you take a piece of hardware, for example in our scenario, we make data center storage appliances and fill them with high-performance flash disk. That's just storage by itself. It becomes intelligent data infrastructure when we put our storage operating system on there, which we call ONTAP. ONTAP, by design, services many different workloads through various storage protocols. But inherently built into the system is something we call autonomous ransomware protection. Autonomous ransomware protection looks at data and the usage of data and creates a normal pattern of behavior. And that normal pattern of behavior becomes what the autonomous ransomware protection is checking against, to stop cyber threat activity in its tracks and then create what we call a tamper-proof snapshot. The snapshot is tamper-proof because it's locked. It's kind of logically air-gapped, and it cannot be removed or deleted unless multiple admins verify, multi-factor authenticate, and agree that it can be removed. That would be a scenario like an honest mistake or a false positive, and we can retrain the model to improve the accuracy of the results. So security by design means that it's there, it's built in. Whenever you deploy a new volume, we have snapshots enabled by default; we're just going to assume you're going to want to protect your data. The autonomous ransomware protection works in addition to that regular data protection cycle.
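The mechanism described above, learn a baseline, flag a burst of changes, lock a snapshot that only multiple admins can release, can be illustrated with a minimal sketch. This is not NetApp's implementation; the class, thresholds and approval count are invented to show the flow.

```python
class Volume:
    """Toy volume with baseline-driven anomaly detection and locked snapshots."""
    def __init__(self, baseline_changes_per_min, required_approvals=2):
        self.baseline = baseline_changes_per_min
        self.required_approvals = required_approvals
        self.snapshots = []  # list of locked-snapshot records

    def observe(self, changes_per_min):
        # Ransomware encrypting files looks like a massive change-rate burst
        # far outside the learned normal pattern of behavior.
        if changes_per_min > 10 * self.baseline:
            self.snapshots.append({"name": "tamper-proof", "approvals": set()})
            return "anomaly: snapshot locked"
        return "normal"

    def delete_snapshot(self, index, admin):
        # Logical air gap: deletion needs multiple verified admins to agree.
        snap = self.snapshots[index]
        snap["approvals"].add(admin)
        if len(snap["approvals"]) >= self.required_approvals:
            self.snapshots.pop(index)
            return "deleted"
        return "pending further approval"

vol = Volume(baseline_changes_per_min=20)
print(vol.observe(25))                     # normal
print(vol.observe(5000))                   # anomaly: snapshot locked
print(vol.delete_snapshot(0, "admin-1"))   # pending further approval
print(vol.delete_snapshot(0, "admin-2"))   # deleted
```

The real feature works on file-entropy and access patterns rather than a raw change rate, but the two ingredients, behavioral baseline and multi-party release, are the same.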
[00:13:40] Speaker A: I'm definitely familiar with security by design; it was just, like you mentioned before, you've obviously explained the storage component. Would you say that people don't embed the security element through the whole storage lifecycle, if you want to call it that? Why do you think that is?
[00:13:53] Speaker C: I think it's because it's never really been thought about at the security layer... sorry, at the storage layer. It's only been thought about at the security layer in terms of perimeter.
[00:14:03] Speaker A: So does that go back to my earlier question around storage is being relegated, people are forgetting about it. Out of sight, out of mind.
[00:14:10] Speaker C: Correct.
They're not making the connection, the assumption, that storage plays a critical role in cyber resiliency; they're just thinking network-centric cybersecurity. And so there's a lot of emphasis on firewalls, intrusion, identity, et cetera. But inside the perimeter, where we're doing our work, where we've passed all of the perimeter security checks, we're live next to the data. Now, if it's my user credentials that have been phished, then it's activity that's again bypassed the perimeter, happening on the inside. How do you detect that? How do you detect anomalous behavior, or how do you detect when data is being exfiltrated, without these storage mechanisms in place? These are storage functions.
[00:14:53] Speaker A: Because you then touched on data at rest in your presentation today as well. So would you say, in your experience, people don't really talk about field-level encryption?
[00:15:03] Speaker C: That's a very good point. Applying storage security principles across the data lifecycle is very important to us. That goes in conjunction with efficiencies as well, because we don't just keep a primary copy, we keep a secondary and a tertiary, which means if we're applying encryption and flowing that encryption down the line to secondary and tertiary copies, it's very hard to get efficiencies out of already-encrypted data. We solve that problem by doing all the efficiencies prior to the encryption and then keeping it encrypted through its lifecycle. Where we're using things like SnapLock in our primary storage, we're also using things like object locking in object storage down the tertiary path, to add to that logical air gap and to the recoverability factor. When it comes to finding and redacting specific bits of information, that becomes more sophisticated technology at the application layer. What we can do from a classification perspective is find that sensitive personal information so it can be redacted by another process. That process, which we call data classification, uses content indexing technology. It means if some data hits our storage, the classification engine, using AI, will open it, read it, contextualize it, and classify and categorize it into things like sensitive and personal information and non-business categories, which you can fine-tune to determine what constitutes non-business. Once we've got that, you know through our dashboards what is sensitive personal information, and can then go and do something about it. Should we put it in a more secure location, if it isn't already? Or do we need some redacting technology to play a role here?
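The classification flow described above (open, read, contextualize, categorize) can be shown with a deliberately simple stand-in. The real engine uses AI and content indexing; this keyword/regex version, with invented category patterns, only illustrates the shape of the step that feeds the dashboard.

```python
import re

# Invented example patterns; a production classifier would be model-driven
# and fine-tunable, as described in the interview.
CATEGORIES = {
    "sensitive_personal": re.compile(r"\b(passport|tax file number|salary)\b", re.I),
    "non_business": re.compile(r"\b(holiday photos|playlist)\b", re.I),
}

def classify(text):
    """Return the categories a document falls into, for the dashboard."""
    return sorted(name for name, pat in CATEGORIES.items() if pat.search(text))

print(classify("Attached: salary review and passport scan"))  # ['sensitive_personal']
print(classify("Q3 roadmap notes"))                           # []
```

Downstream actions (quarantine, redaction) then operate on these labels rather than on raw files.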
[00:16:46] Speaker A: And so you'd be leveraging AI to do that, I'm assuming?
[00:16:48] Speaker C: It's built in, yes. Our classification engine uses AI.
[00:16:52] Speaker A: What if you have more specific requirements? Would you be able to leverage your own sort of protocols to say, hey, this is what we need?
[00:16:57] Speaker C: Where we're at with the technology today is we're getting over 99% accuracy in classifying and categorizing the data. What you do with it afterwards is a subsequent activity. Right now it's kind of rudimentary when we want to quarantine data, for example. So identifying gives us the results; what's the first thing we do with the results? We may want to immediately quarantine them because they're exposed: they've got sensitive personal information on a public cloud storage environment, so let's move it to a more secure storage environment. So, quarantining. More sophisticated and advanced processes beyond that, like redacting certain bits of information from within documents, salaries and the like, yeah, that would be an application process, not currently within the NetApp capability.
[00:17:48] Speaker A: Okay. The other thing you spoke about as well is data gravity. So what does that mean?
[00:17:54] Speaker C: Data gravity, or data having gravity, means that every time we create data, it consumes some magnetic storage somewhere, which has ones and zeros, which needs to be powered, and that power is coming from somewhere. The data will attract more gravity to it, because I don't just have one copy; I'll make a secondary copy and a tertiary copy, and I'll be mandated by governance and compliance laws to hold onto it for long periods of time. This ultimately has a knock-on sustainability effect. So that data gravity, as it grows, creates a bigger carbon emission associated with it. We've already got some metrics; for example, every email you send emits 0.3 grams of carbon. So we've gotten to the point where, in our storage solutions, our sustainability dashboard doesn't just report on the metrics that the energy regulators want to see, like kilograms of carbon per terabyte, or heat BTUs for cooling. We're also offering ways to fix and improve the sustainability score. Reducing that data gravity happens in a number of ways. One is data efficiency: straight away, if you apply compression, compaction, deduplication, thin provisioning and tiering, you'll make a smaller data footprint, and a smaller carbon footprint. And we maintain those efficiencies when we make copies. So your DR copy off site is also compressed, compacted and deduplicated; it doesn't require as much network bandwidth, and doesn't require as much storage at the destination. And then in its ultimate lifecycle, where we're holding onto it for seven years, the efficiency is maintained. You get a 3x, 10x, 15x reduction over the lifecycle of data, reducing gravity, reducing emissions.
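A rough worked example of the data-gravity arithmetic above: efficiencies applied once carry through to the secondary and tertiary copies, shrinking both the stored footprint and the emissions metric. The 12 kg of carbon per physical terabyte figure is an assumed placeholder, not a NetApp or regulator number.

```python
KG_CO2_PER_TB = 12.0  # assumed emissions factor for a stored physical TB

def footprint(logical_tb, reduction_ratio, copies=3):
    """Physical TB and kg CO2 across all copies after data reduction."""
    physical = (logical_tb / reduction_ratio) * copies
    return physical, physical * KG_CO2_PER_TB

# 100 TB of logical data, kept as primary + secondary + tertiary copies.
raw_tb, raw_co2 = footprint(100, reduction_ratio=1)  # no efficiencies
eff_tb, eff_co2 = footprint(100, reduction_ratio=5)  # 5x dedup + compression
print(raw_tb, raw_co2)  # 300.0 3600.0
print(eff_tb, eff_co2)  # 60.0 720.0
```

Because the ratio multiplies across every copy and every retention year, even a modest 5x reduction cuts the lifetime footprint by the same factor, which is where the 3x to 15x lifecycle figures above come from.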
[00:19:37] Speaker A: That was what I was going to ask you next, because you've just touched on sustainability. And so you'd be familiar with the UN Global Compact, that whole regulatory framework, and how they're assessing companies and...
[00:19:47] Speaker C: Yes, and we are part of it; the name escapes me right now, but we have an ESG report that publicly talks about it. NetApp contributes to that global consortium on sustainability, and part of our commitment isn't just the fact that our technology helps customers reduce their carbon footprint. We're also very conscious about our packaging, and very conscious about our manufacturing processes, using green energy and so on. Our score is multidimensional in contributing to that global consortium.
[00:20:16] Speaker A: It's just something that's starting in terms of, like a theme. I'm starting to see more of this. The only thing is, I think there are still companies out there that are doing the whole greenwashing.
[00:20:25] Speaker C: Greenwashing, absolutely. So there are two things that our sustainability dashboard really helps with. If the ACCC is coming after you to, say, demonstrate how you're making a commitment to better sustainability, the dashboard shows trending over time and how you can achieve and improve on your score through these data reduction activities, as well as by using greener energy sources: you can actually input whether you're using wind power versus coal power in your specific environments, or in the hyperscaler that you choose. The other side of this is a monetary one: Australian Carbon Credit Units. There are rebates based on achieving better sustainability. That's our situation in Australia. Globally, in places like the EU and Singapore, there's no option any more: it is mandated that you have your sustainability metrics reported in the language that the energy regulators want to see. Normally, when we think about data, we measure it in terabytes, megabytes and gigabytes. We can actually give you a kilograms-of-carbon-per-terabyte metric. We can actually give you a watts-per-hour metric. And this is really important to the energy regulators, or if you're a service provider who wants to deliver a greener service or a green SLA and guarantee that you won't go above these thresholds of carbon emission, watts per terabyte.
[00:21:50] Speaker A: Joining me now is Gavin Jones, area vice president and country manager, Australia and New Zealand, from Elastic. So, Gavin, thanks for joining and welcome.
[00:21:57] Speaker D: No, great to be with you, and great to be here at the Tech Leaders summit in the Hunter Valley.
[00:22:01] Speaker A: So maybe, Gavin, let's start there. Talk to us a little bit more about what you've presented on today.
[00:22:06] Speaker D: Sure. Firstly, thank you very much for the opportunity. It was actually, first up, an opportunity to let people know a little more about what Elastic is; a lot of people aren't aware of what Elastic does, and so I was able to share that. I was able to share our vision for generative AI, which is not only an area that probably offers the biggest opportunity and potential for the next generation of people in Australia, but also, surprisingly and concerningly, one where Australia is lagging behind in adoption. We actually commissioned a report, and I can talk to some of the stats, where we are lagging behind some of our competitors, both globally and domestically. And then we shared some of the real opportunities and benefits of what generative AI offers. So some really exciting developments.
[00:22:53] Speaker A: So wouldn't you say, generally, Australia lags in a lot of things. So, yes, technology is a big one. Other things as well. But I feel like that's a common theme as an undertone. Is Australia's lagging in this, this and this. Why do you think that's the case?
[00:23:07] Speaker D: There are a lot of reasons. Firstly, just to give the statistics our report turned up: this report, which we conducted at a global level to look at the adoption of generative AI, showed that Australia is lagging in adoption and has only embraced generative AI to the tune of about 42% of companies. Compare that to some of our counterparts in the Asia Pacific region, Singapore at 63% and India at 81%, and it's putting Australia at a real competitive disadvantage versus our trading partners. There are lots of barriers to adopting generative AI. I think everyone knows about some of the biases that come with generative AI, some of the false positives, or hallucinations, to use the term, and some of the security, privacy and regulatory concerns. But one of the things we're hearing constantly from organizations is how they bridge the gap between the local context, their private and confidential company data, and the multitude of generative AI capabilities that exist in LLMs and copilots out on the public Internet. I don't think anyone thinks they should be putting their confidential company data into ChatGPT. And so where Elastic sits, and this is what I shared in our presentation this morning, is we provide that trusted bridge between confidential company information, with the context that's so critical, and the multitude of LLMs and copilots in the market, to best enhance that customer and employee experience using Gen AI.
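The "trusted bridge" pattern described here is essentially retrieval-augmented generation: retrieve only the relevant snippets from a private index and send those, not the whole confidential corpus, to an external model. A schematic sketch, not Elastic's actual API; the index contents, the naive word-overlap scoring and the `call_llm` stand-in are all invented for illustration.

```python
# A tiny private "index" standing in for confidential company documents.
PRIVATE_INDEX = {
    "refund-policy": "Refunds are issued within 14 days of purchase.",
    "leave-policy": "Employees accrue 20 days of annual leave.",
}

def retrieve(query, k=1):
    """Naive relevance: rank documents by words shared with the query."""
    words = set(query.lower().split())
    scored = sorted(PRIVATE_INDEX.items(),
                    key=lambda kv: len(words & set(kv[1].lower().split())),
                    reverse=True)
    return [text for _, text in scored[:k]]

def call_llm(prompt):
    # Stand-in for an external LLM call; echoes the prompt so we can
    # inspect exactly what would leave the trusted boundary.
    return f"LLM saw only: {prompt}"

def answer(query):
    # Only the retrieved context crosses the bridge, not the whole index.
    context = " ".join(retrieve(query))
    return call_llm(f"Context: {context} Question: {query}")

print(answer("How many days for refunds after purchase?"))
```

In production the retrieval step would use semantic search over a real index, but the privacy property is the same: the external model sees a narrow, grounded context instead of the raw confidential data.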
[00:24:42] Speaker A: Okay, all right, this is interesting. So you're right; I've done a lot of interviews, even with government people, about leveraging Gen AI to increase their productivity, increase their speed. Why would we want to do a monotonous task when we can leverage AI and tooling to do it for us? So going back to the point before, why would you say Australia's slow? Would you say we're a reserved market?
[00:25:03] Speaker D: I think there's been a lack of awareness of the benefits, so the ROI and the urgency for generative AI. I think a lot of people's initial experiences have been leveraging ChatGPT to kind of doctor an image, or build out a document, or respond to an email; that's people's first impression. But going from that embedded use case in an application to actually looking at what your enterprise strategy is, how it aligns to your company vision and underpins your most strategic priorities, is where there's probably been a gap in understanding the return on investment and the urgency to drive it. And we see that bucketing into three broad areas: how you improve customer experience to drive revenue outcomes, and there are some really good use cases we're working on with our customers to those ends; how you improve employee efficiency and effectiveness and improve productivity, which has been top of mind for many people; and finally, cyber resilience: how do we leverage Gen AI to improve your ability to respond to security threats and incidents?
[00:26:05] Speaker A: So just going back to the stats that you presented today: the 87% considering increasing their investment in Gen AI. So again, do you think that 87% is because, like you said before, people think it's about images and those types of things? Is it that the awareness is still getting there, so doing interviews like this sort of encourages more adoption, or...
[00:26:26] Speaker D: Yeah, I think part of it's the awareness, an awareness not only of the benefits of Gen AI and why it should be an urgent priority, but also of how to overcome those challenges, because they are serious challenges. You talk about government; I don't think anyone would be comfortable with government employees sharing citizen data on public Gen AI applications. And so I think there's a real barrier to broader adoption, because companies are struggling with how to actually overcome those barriers. And that's what we shared in our presentation: there are probably seven or eight layers that are important before you actually embed this into your technology strategy. So we address it both at the application layer, through our security and observability tools, but also, importantly, provide the platform that allows them to connect confidential data and multiple LLMs, especially the ones that are relevant for the use cases they have, because that may be different for each use case. One of the examples I was sharing earlier today was that if you've got an LLM and you ask it whether the earth is flat, and Reddit is one of the data sources, you could actually get a very credible-sounding response that says it is.
[00:27:33] Speaker A: That's where the hallucination comes into it.
[00:27:35] Speaker D: That's where the hallucinations come in. We allow organizations to work out which are the best LLMs and copilots to use for their use cases so they get a trusted response.
[00:27:43] Speaker A: So I was recently in the US, and I interviewed the head of AI for Zscaler, who is part of the World Economic Forum for AI, and he spoke a lot about hallucinations. So my question to him was, to your point, the whole flat earth thing. I said, if I went out there and ran multiple media sites claiming that the sky is purple, and then you integrated an LLM with some of these sources, you would start to come up with that theory. So my question to him, and maybe to yourself, Gavin, would be: who gets to decide what's credible or not credible? Because maybe I'm colorblind and the sky actually is purple.
[00:28:17] Speaker D: I think it's going to vary with every use case and every company. I think what we're seeing, and the beauty of Elastic, is that some workloads are best run in a private LLM, and they may choose the data stores. Sometimes, instead of a large language model, it's actually referred to as a small language model, one that's only gathering and serving data from trusted data stores, which could still be billions of data points, and could come from the public Internet. But some companies may choose that that is too sensitive a use case and they want to run it on-prem. Elastic allows them to run it self-managed on premises, or in their cloud of choice, on any of the three hyperscalers or any combination of the three. And this is the beauty of Elastic's model: we allow companies to choose which is right for them. It could be purely on-prem, but never neglect the power of external large language models for things like cyber resilience, because that's where threats are usually detected ahead of when the company itself detects them. It's the broader ecosystem that recognizes there's an issue, and those reports start flooding in.
[00:29:21] Speaker A: So just focusing on the small language model. I mentioned before, like, government agencies, right? They would probably use that as a use case: sensitive information, they're not going to just pull it from wherever, it needs to be accurate, they can't have these hallucinations that start to come into it. Is this something that government agencies will start to adopt, would you say, in your experience?
[00:29:39] Speaker D: This is where we, as Elastic, help our customers work out which is the right model for them. Small language models may be better suited to use cases where there's a very finite number of trusted responses, and they may be a better fit to run on premises. We help customers work that out. Other use cases, where they're trying to look at security incidents and want to leverage the worldwide web, incidents and breaches that may be impacting thousands or tens of thousands of companies in real time, are much broader. So we allow them to blend both, the best of both of those worlds, in a single data store, and combine that with their confidential data that Elastic has already indexed. That's kind of the perfect storm of using SLMs and LLMs.
[00:30:25] Speaker A: So going back to the flat earth example, have you seen a lot of this? And are we going to get to a stage where we're really, like, delusional, thinking, well, is this true? Is it not true? What are your thoughts on that? How does this sort of progress forward?
[00:30:39] Speaker D: Look, I think this comes down to the ethical considerations barrier. I think there's a lot of ethical considerations that need to factor into the way that you're going to use GenAI, not just its alignment to your vision as a company and how you're going to serve citizens if you're government, but also the ethical considerations: how do you actually provide trusted responses? And that needs to be factored in as people are thinking about how to adopt this more broadly. I think the other consideration that we're hearing come up often, and there have been antitrust lawsuits around this, is the biases that come with some of the large language models. All of that needs to be factored in as you build out a strategy. But importantly, you need to be cross-referencing multiple sources and working out which is appropriate for your customer and employee base.
[00:31:17] Speaker A: We're not quite there, though, in terms of the ethics around it, because it's a relatively new concept. We're still trying to get our heads around it. So what happens between now, as in now, when people are leveraging AI and GenAI, and getting to the point where we have a North Star for ethical standards and regulation around it? What is the in-between part? What are we going to see happen?
[00:31:40] Speaker D: Yeah, it's a good question. Look, the truth is this will evolve over not just the next two to three months, but over the next decade. I think what we are seeing is there are some really strong, powerful use cases that companies, Australian companies, need to be embracing now. Things like fraud protection, where you can actually do multifactor authentication of users based on their social media profiles so they can't defraud the government of funds that are meant for disaster relief or our social welfare programs. There are so many use cases around how we get a better experience with live chat and chatbots, things that can actually help in workplace search. We've got a multitude of use cases across those three domains that I talked about before that companies should be leveraging now. The broader, more ethically centric use cases, I think, will evolve over time, but we can still drive massive competitive advantage for companies leveraging technologies and approaches that have already been validated and tested today.
[00:32:36] Speaker A: Okay, I'm joined now by Jeff Schomburg, regional vice president, APJ, from Yubico. So, Jeff, lovely to have you back on the show.
[00:32:43] Speaker E: Great to be here. Thanks, Krista.
[00:32:45] Speaker A: So I have done some reconnaissance in the space, which I often do, and today I messaged a CISO to get them to ask more about Yubico. So one of the questions they asked me, and this is specific from them, they work in a retailer, was: what would be your suggestion to get the business to transition into MFA, even though they should be doing it? So perhaps, perhaps if you can steer clear of an obvious answer. We can say awareness, we can say this, but what really is going to move that needle?
[00:33:17] Speaker E: What's going to move the needle? So, moving into MFA, the first question would be: which MFA? Not all MFA is created equal, so we would encourage them on that journey to move to something that is strong, that is phishing resistant. That's kind of a starting point, and it uses other challenges. It's about encouraging adoption. Unfortunately, most of what we see is that it's a change program: how do you encourage your humans to adopt a different approach? Now, there's two ways you can do that, and this is the non-standard answer: the carrot or the stick. So we're adopting MFA because we, as an organization, want to be more secure. We can take the stick approach, and that says, I'm from IT, this is what you're going to do. You're going to have to adopt this because it's good for the organization. That generally doesn't sit so well. And maybe the approach is the carrot, which is encouraging and showing the user the benefit of why they're doing this: to make their life easy. The fact that they can log in and authenticate, which they have to do every day, easily, quickly and without a password. Now, when you say that, without a password, or passwordless, people start to prick up their ears and go, excuse me, what did you say? Passwordless? I don't need a password? I never have to change my password again? That's right. So that benefit automatically starts to encourage curiosity, and hence interest in adopting something that actually makes their life easier. Then, as we talked about today, there's the phishing-resistant user in all aspects of their life. MFA, from the business point of view, has been seen as a business tool. And if you've got your rental car, you don't put air in the tyres because it's not your car. You don't worry about putting oil in the engine because it's not your car. If it's your car, personally, you do, because it's of value to you.
So if we can make MFA authentication more broadly acceptable for all parts of their life, in terms of the consumer services they access, the government services, then it becomes really valuable to the individual, because they know that their organisation is helping them secure their personal world. So there's a real value that can be attached to that. And once you've got over that hurdle, then it's about the traditional sort of change program and transformation: communicating, educating, training, finding the key adopters, encouraging those early wins and all the stuff that comes out of the transformation and change textbooks. But try and find the carrot or the stick. And we generally find that, in doing this, the carrot approach works much, much better than the stick approach.
[00:36:01] Speaker A: So would you say as well, just from a frustration point of view, in my experience working in corporates and enterprises, even resetting your password, how much time that takes? Or, I don't know my password, I've got to go down to that IT help desk dude downstairs. Even that takes time in terms of productivity.
[00:36:15] Speaker E: Absolutely.
[00:36:16] Speaker A: Do you see that as a key driver, though?
[00:36:18] Speaker E: Yes. And it's just what you experienced. I was thinking then, I know what I'm like when I've got a login and I can't remember the password. My frustration level goes up, my tolerance level goes down, my anger level goes up. Frustrating. So how do we take that away? Yes, we can. And the business benefits of productivity depend on what your environment is. If it's my mum who's logging in, that productivity is not important. But if I'm in a retail environment or a manufacturing environment where time is critical, then our research, and research by others, will show that this authentication method is at least four times faster. So think about that as a productivity benefit for the user, and then productivity in the IT support team, because 90% of their calls to the support desk have gone away. They're not being asked to reset passwords. They're not getting the "I've been locked out of my account, please reset" calls. So there is a productivity gain at both ends.
[00:37:16] Speaker A: We don't have time. If you're on a POS terminal and you're having to reset your password or something's happened and you've got a whole line of people, what do you do then?
[00:37:24] Speaker E: You've lost a sale and you've got a disgruntled customer. That wasn't great customer service. So it is important at that front end to make it as easy and simple as possible.
[00:37:36] Speaker A: And there you have it. This is KB on the go. Stay tuned for more.