Episode 241 Deep Dive: Mary Carmichael | Workplace Evolution: Addressing AI Skepticism, Embracing Advancements, and Navigating New Realities

KBKAST

Feb 09 2024 | 00:37:20


Show Notes

Mary Carmichael, CISA, CFE, CPA, is Director, Risk Advisory, at Momentum Technology (Vancouver, Canada), and a member of ISACA's Emerging Trends Working Group and Risk Advisory Committee.


Episode Transcript

[00:00:00] Speaker A: I think a positive impact is that the issues with generative AI will be resolved. Part of that is that the vulnerabilities, such as the hallucinations and the biases, will have controls implemented in place to correct them. And also I think there's hope that organizations will catch up in terms of developing an AI vision and strategy and providing support through training as well as organizational change management. [00:00:27] Speaker B: This is KBCast. As a primary target for ransomware campaigns, security and testing, and performance risk and compliance, we can actually automate that, take that data and use it. Joining me today is Mary Carmichael, vice president from ISACA Vancouver, and today we're discussing ISACA's new AI study and the various attitudes, concerns and preparedness for AI. So Mary, thanks for joining and welcome. [00:00:55] Speaker A: Oh, thank you, Karissa, for this opportunity. [00:00:57] Speaker B: So I had a look at the report, read the highlights, and there was a poll which was interesting, and I know we're going to get into the specifics as we go along this interview, but the poll found that many employees at the respondents' organizations are using generative AI even without policies in place, which is understandable. 63% say employees are using it regardless, while only 36% permit it in their workplace. So walk me through this, I guess. [00:01:30] Speaker A: For the 63%, employees are using it regardless because it's useful. When I think of ChatGPT, especially the 3.5 version, it's free, it's accessible, there are low barriers to entry, and it's very easy to use. And when you look at the results and its capabilities in terms of producing content, helping you with day-to-day or manual tasks when it comes to your writing, why not use it? And also, in terms of the policies, I think they're very difficult to enforce. For example, one policy may give guidance saying you can use the 3.5 version just for certain communications, but how do you actually enforce that? So even if you have a policy, I think with employees it's very easy to work around those policies, because there are limited controls in place to actually see what is happening behind the scenes. So when I look at employees that are using these tools, it works for them because it makes them more productive and gives them a lot of benefits in terms of producing their work quicker, and also possibly at a higher quality. As for the 36% that permit it in the workplace, I do want to explore that a bit further: why is there a gap between organizations that have policies and those that don't? I think in some industries, when ChatGPT was released in November of last year, some of them experienced negative consequences immediately and, as a result, had to develop policies. So, for example, at post-secondary universities, students can use version 3.5 to develop essays and use that tool to support their assignments. So schools had to develop policies right off the bat, because there was a negative consequence and people were using those tools. And I think some industries, especially industries that are highly regulated or that have privacy issues, had to develop policies or else face fines. And in terms of employees, I know people that actually purchase their own license for version 4, paying $20 a month, because once again it's productive and it's a benefit to their work, but their organization may or may not have a policy.
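Mary's point about limited controls can be made concrete. As a rough sketch only, nothing discussed on the show: assuming a web proxy log exported as CSV with timestamp, user and domain columns, and an illustrative watchlist of generative AI endpoints (both are assumptions), a first-pass usage check might look like this.

```python
# Illustrative sketch: estimate unsanctioned generative AI usage from a web
# proxy log. The log format (timestamp,user,domain columns) and the domain
# watchlist are assumptions for demonstration, not a definitive control.
import csv
from collections import Counter

# Hypothetical list of generative AI endpoints an organization might watch for.
GENAI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def count_genai_usage(log_path: str) -> Counter:
    """Count proxy-log hits to known generative AI domains, per user."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # assumes columns: timestamp,user,domain
            if row["domain"].lower() in GENAI_DOMAINS:
                hits[row["user"]] += 1
    return hits

if __name__ == "__main__":
    for user, n in count_genai_usage("proxy_log.csv").most_common():
        print(f"{user}: {n} generative-AI requests")
```

Even a crude tally like this only shows that a sanctioned domain was reached, not what data was pasted into it, which is exactly the assurance gap Mary describes.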
[00:03:28] Speaker B: Yeah, and this is interesting because, again, as you alluded to, ChatGPT specifically only became ubiquitous about a year ago, so it's a bit hard to come up with a full-blown policy that quickly as well. Because some of these policies, even if you look to HR, yes, they get updated, but some of them have been in place for years and years. So do you think as well that it's just very early days for people to be like, okay, we need to adhere to the policy per se? [00:03:55] Speaker A: I think it depends on your industry. Some industries, like banks, that are highly regulated and need to protect personal information, have issued: no, we do not allow ChatGPT usage. So I think it's easier for industries to develop policies when there's a regulation in place saying you can't use such a tool. I found that's the case especially at JPMorgan and some of the other banks; you can't use ChatGPT. Other organizations, I think, are just waiting to see what happens from a regulatory point of view. So in Vancouver, and even in terms of what's happening across Canada, it's almost a wait-and-see approach, because the concern is creating a policy that is not beneficial or useful and is also difficult to enforce. For example, if you create a policy for ChatGPT 3.5, how do you actually confirm that employees are not using that tool inappropriately? There's no centralized management of those conversations. Are you going to go around to the various employees and ask to see their chat conversation logs to confirm that they're using the tool appropriately? So even if you have a policy, once again, for the free tool or for some of these low-cost solutions, it's very difficult to provide assurance that people are adhering to those policies. [00:05:03] Speaker B: Yeah, absolutely. So do you see that as a potential problem now? Because, okay, hypothetically, it's forbidden because you're in a regulated industry and you're not supposed to use it, but there's always going to be a few people that are like, oh, I couldn't be bothered, I'm just going to use it regardless. So how are people going to audit this type of stuff? Or do you think this is just the problem we're having now, and we're trying to come up with a solution to make sure we have the right governance in place, because you are naturally going to have people that are going to want to use ChatGPT for the convenience? [00:05:31] Speaker A: That is a problem, because part of the policy is you need to actually be aware of the capabilities and have communication and training for staff. You can't deploy a policy in isolation. It has to come with training and also discussion about what's acceptable versus not in terms of behaviors and user scenarios. And that is lacking. I think that's actually another key finding from the pulse poll: only 4% have received training. So it's not really a policy by itself. You need to have an ecosystem in place to support the policy through training and support, and also constant conversation about changes in ChatGPT and how that impacts the business. And I think right now there's a shortage in terms of who's responsible or accountable, but also of knowledge in terms of understanding AI and its implications for organizations. [00:06:19] Speaker B: So that leads me sort of to my next point around training.
So my understanding is there's a lack of familiarity and training; more than 57% say that no AI training at all is provided. But would you sort of say, Mary, that in the eyes of the world, if you want to call it that, gen AI just feels new and overwhelming? And again, going back to my earlier point, maybe people just haven't had enough time to create comprehensive training yet. A year is not that long to implement quite extensive training within an organization. In a perfect world, yes, but these things do take a bit of time. What are your thoughts on that? [00:06:59] Speaker A: I agree with that, but this also reminds me of typical software development projects. When we deploy software, one of the number one complaints is lack of training, not preparing staff, not having an organizational change management plan. So even with traditional IT projects, same thing: training is an issue. And I'm not surprised that that theme carries forward to AI. But I also do want to highlight our business environment. In the past three or four years we had the pandemic, we had people working from home, we had digital transformation, moving to the cloud, new ways of working remotely. And now you have AI. And AI in itself is complex, so you have new terminology like machine learning, LLMs, GPT. What does this all mean? So I think, yes, people are overwhelmed. And also, when you look at news reports about generative AI, they tend to promote fear, uncertainty and doubt. I found the press around generative AI has not been that great. So yes, I can see people feeling overwhelmed, because generally life is overwhelming at this point. Are we approaching a recession? Interest rates. So you have generative AI being layered over a world that's very complex and volatile to begin with. And then, in terms of training and education, when you're looking at generative AI, it's about understanding its capabilities and how they apply back to the business. So this goes back to strategy: understanding our business model, looking at our value chain, and seeing where the opportunities are for generative AI, especially from a customer experience point of view. Part of that is doing a risk assessment and identifying those potential use cases, and from there it launches into a project where you would train and educate staff. So even with training and education, that comes further down the deployment road. Part of that is you need to have a business that's keen to use AI, but also staff that have time to explore its capabilities. [00:08:54] Speaker B: Okay, so there's a couple of things in there you said which were interesting. You mentioned, of course, it's overwhelming because of everything happening with the recession and interest rates, cost of living. So do you then envision that the gen AI conversation is just getting pushed down the ladder because other things are taking priority? So it's not ideal, but again, we've got to prioritize; we can't solve everything in one go. Do you think there's that element to it as well? [00:09:19] Speaker A: I think that's an element too. There are a lot of things going on, and that's part of the picture. And also with generative AI there's lots of, I think, negative press. Part of that is, when I attend conferences and people talk about generative AI, they're approaching it from a position of distrust.
They're thinking sometimes it's overhyped. There are lots of problems, especially with hallucination, so they may not see the value. For example, I presented at an internal auditors conference last week; there were 50 people in attendance in the room, and I asked who was using ChatGPT. Only four out of the 50. So I was thinking about that retrospectively, and I think part of it is there are early adopters that have embraced ChatGPT, especially in communications, cybersecurity and programming. But there are also other professions where it's almost wait-and-see; the press is negative. So right now there's a questioning of the value: why should I use this tool when the output is not 100% correct and I still need to confirm it? That is not, I guess, a valuable tool in some people's minds. And that goes back to the conference last week and also to some of the thoughts I've been hearing in Canada. [00:10:27] Speaker B: Yeah, that's an interesting observation. So only four out of the 50 people were using it. Were the others not using it because they didn't trust it, or because they're not familiar with it? Was there any sort of insight on that front? [00:10:40] Speaker A: I think it's the profession, because they're mostly accountants and internal auditors. I was just thinking about who the early adopters are, because there are some really strong user scenarios, or what I call use cases, for generative AI. With the auditors and also accountants, and I'm actually a CPA myself, just to reveal that, part of it is you're skeptical to begin with. So here's a new tool, there are problems, it's in the press, so it's not ready for prime time. And at the same time, job displacement. When you look at the accounting profession, our role, especially some of the transactional work we do, is a perfect candidate to be moved to an AI system. So part of it is the potential job displacement. I think you have what I call almost a change management issue, just with the accountants. Moving forward, we need to see how generative AI impacts our profession and what our role becomes, and maybe our role will change to be more regulatory: looking at auditing algorithms and AI systems, supporting regulation, versus some of the transactional items that can, I think, be automated with AI. [00:11:44] Speaker B: Yeah, look, I totally hear what you're saying, and that is a great point, and I'm definitely of the same opinion. But one of the things I want to ask you just before we move on, and of course we're going to address the job component, the job displacement side of it, in a moment. Do you think, Mary, that people still, well, there are probably experts out there that really understand gen AI, but maybe it's still pretty new, right? So I don't know, and maybe this is a question for you: do you think there's anyone really out there that's like, I really get this intrinsically? Because again, we still don't know what we don't know. We're traversing into uncharted territory in terms of, okay, this thing has this capability, but where will this capability be in two months, in two years? So I think there are a lot of things that perhaps still need to be addressed, which probably feeds into people being apprehensive about leveraging the tool. [00:12:34] Speaker A: I agree, and when you look at even ChatGPT, in one year it went from version 3.5 to version 4 to an Enterprise edition.
Now it has plugins plus APIs, and you can use OpenAI's development tools in other applications; you can use that for your in-house applications. Plus you have Microsoft Copilot. So this is in the span of twelve months, and once again this is a disruptive technology. So I can see why people are feeling it's new and overwhelming. Like, how do you make sense of all these different products and when to use them? [00:13:03] Speaker B: Yeah, absolutely. And I think that's a really valid point, and that's why I think having this conversation is valuable. It's not about having all the answers, it's just about having this discussion: what are people concerned about? Do you think, though, as well, on the media front, and I agree with you, again there is a lot of negative propaganda around gen AI and the functionality around it. But sometimes the way I look at it is, well, we have to evolve as a society. And to your point earlier, if we can automate some of these more menial tasks, then why shouldn't we? Which means we're going to get people leveraging their skill sets in a more strategic manner. But do you still think that worries people, though? [00:13:40] Speaker A: Change always worries people. With my background, I've deployed a number of technology systems, like SAP and PeopleSoft for financial systems, and I found one third of people are on board, they understand the urgency, the need. One third of people sit on the fence and wait to see what happens, and one third are very negative and will not support the project. So I think that's just human nature in general. Change is hard. And if you want to move ahead with change, this is where you need senior management support to talk about why, what's the urgency, to deliver key messages, and also to be very specific on the user scenarios: how this will benefit you, how this will help you, and also how we'll transition you towards performing a role that has a higher level of responsibility. So that's just human nature in my experience. That's why you have organizational change management, or even what we call human risk management, to support people through this change. But part of that is you need senior leader support and very clear use cases for how you use this technology and how workers will be supported through this change. [00:14:43] Speaker B: Yes, absolutely. You make a lot of sense. So then, I guess, my next point, just going back to the report: only 32% of respondents indicated they have a high degree of familiarity. So again, to my previous point around, well, how well do people know this? What does a high degree look like from your point of view? [00:15:04] Speaker A: Just from my experience attending various conferences: are you able to define generative AI in terms of a provided definition, have you actually used a generative AI system, whether it's DALL-E, Midjourney, or even ChatGPT, and are you able to speak about its capabilities? To me, that's more of a baseline definition of a high degree of familiarity. What I found in these conferences, even one I attended today, is we have to go back to the basics and start defining what is AI, what is machine learning, what is an LLM, what's the difference between machine learning and an LLM. And this is for a technology audience. So even with the presentation last week to the internal auditors, I had to actually walk through what generative AI is and provide some specific examples of use. So I'm not sure if people are lacking curiosity or if they're experiencing fatigue, but right now, compared to twelve months ago, I've found I'm still focusing on more basic definitions.
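To ground the tooling Mary mentions a few minutes earlier, the plugins-plus-APIs point, here is a minimal sketch of an in-house call to OpenAI's chat API, assuming the openai Python package (v1+) and an OPENAI_API_KEY environment variable; the model name and prompts are placeholders, not recommendations from the show.

```python
# Minimal sketch of an in-house call to OpenAI's chat completions API.
# Assumes: `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(text: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model for a short summary; the model name is a placeholder."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Summarize the user's text in two sentences."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize("Generative AI adoption is outpacing workplace policy..."))
```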
[00:16:01] Speaker B: And do you think those basic definitions will become clearer and more defined as we progress through this change in terms of gen AI and ChatGPT? [00:16:10] Speaker A: Well, at today's session, I attended a conference in Vancouver, the speaker walked through the definitions, but it goes back to: are people listening or not? The people that are keen, yeah, they were quite engaged in understanding a bit more about what AI is and what the different streams are. But some other people were focusing on the distrust and the vulnerabilities of these systems. So I think it depends on whether or not you're willing to listen and embrace it. And if you're not, then I think we're going to repeat this conversation moving forward. [00:16:40] Speaker B: Would you say generally, and I know this is a bit of a hard question, but if you had to, just based on your experiences and the conferences you've been going to: do you think generally most people are distrusting of ChatGPT and gen AI, or do you think the majority of people are trusting? [00:16:53] Speaker A: The majority are distrusting. Yeah, that's been a universal theme, which I found surprising, because I do think it has a lot of potential. But part of that is, once again, the news. There's lots of drama, especially with OpenAI and its recent board issues, and it also goes back to ethics. Right now we have innovation versus ethical behavior. We are pushing out these generative AI solutions, but is responsible AI being incorporated? When these systems are being deployed, is there good software coding in terms of minimizing biases? Are we being mindful of privacy? Are we considering the human impact? So when I attend these conferences, there is a bit of negativity, once again about how quickly the software got pushed and people's lack of readiness, but also just some of the Microsoft, OpenAI and Google drama in terms of innovation versus ethical behavior. [00:17:47] Speaker B: Yeah. And look, I think with any of these things that are new, even when the Internet first came out, I know people were sort of like, well, I don't know about this. But now imagine our society without the Internet; people would struggle. So do you think it's just going to take a bit of time, and by time I mean a few years, for people to maybe be more trusting and more understanding, because we've had more time and we can actually leverage it for good? Of course, with anything there's going to be the double-edged sword of people using it for bad. I get that, but there's always good and bad in any situation. [00:18:21] Speaker A: I agree with you. I actually started my career during the digital transformation. I remember back in 2008 I was working for a large organization and there was a conversation about whether or not they should have a website. This is 2008. So I think at the time they were questioning the website: what's the value? It takes a lot of money to support it.
And eventually they made the decision to go ahead, start deploying online services, and have more of a digital platform for customer service delivery. But that took a lot of conversations and also envisioning its capabilities, like the to-be state. So I do think it's going to take some time, and hopefully there are leaders that can paint a really great picture of the potential of this technology, while at the same time making sure we have controls for being ethical in its deployment. [00:19:08] Speaker B: Okay, so then let's flip over to the negative side. Well, more so just exploring that; you've always got to look at both sides in journalism. So going to the concern side of it now, and the exploitation: the poll explored the ethical concerns and risks associated with AI as well, with 38% of ANZ respondents saying that not enough attention is being paid to ethical standards for AI. Talk to me about this. [00:19:37] Speaker A: Oh yes. I think it goes back to when ChatGPT was first released. It had version 3.5, then version 4. And the feeling in Canada was like, wow, you're deploying this solution that has a number of vulnerabilities; do you care about the impact on people? Because I found that was causing some mental anguish for people: this technology, will my job be replaced? Should I start looking for new work? So people weren't prepared, maybe more from a mental health or even a workplace preparedness point of view. But what I found interesting about these ethical concerns is that the number one concern is misinformation and disinformation. When I was doing a presentation at a conference in Europe, this was a big topic in terms of privacy and misinformation: how do we use ethical AI, and especially responsible AI, to make sure when we deploy these systems that we have values in place to consider the societal impact, especially in terms of job displacement, human rights, any type of biases, and privacy, but also making sure that we have clear cases where AI is unacceptable, especially in terms of surveillance. What I also found interesting about this poll is that in North America, people were a bit surprised that misinformation and disinformation was number one; they felt cybersecurity and social engineering should be the number one risk. So I thought it was interesting, from a global perspective, where the concerns sat, whether it's misinformation or cybersecurity. But I do want to highlight, just going back to ethical concerns and risk, this goes back to what we're seeing right now in terms of innovation versus what we call AI safety. And this reminds me of the conference that was held back in London, I think it was a few weeks ago, the AI Safety Summit. Part of that is there are a number of governments that want to be an AI leader, like a global power in AI. So part of that is: how do you promote innovation? Because you want businesses to invest in your country and set up AI companies, which is going to strengthen the AI industry in your country. But at the same time, there is a safety concern. We've seen what happens when AI has gone wrong in terms of discrimination, algorithms that have biases, and also incorrect information. So this is where I see values versus interests.
So your interest is you want to be an AI power, but will you have the values to manage that balance: promoting innovation while making sure its harmful impacts are being mitigated? Also, from an OpenAI and Microsoft point of view, Microsoft has invested over $10 billion, and I'm sure they want to have an ROI sooner versus later. So the question is, when they are using responsible AI, are they sometimes making their own business case: we'll go ahead with this feature because it's cheaper for us to fix it later and deal with the impacts later, versus waiting? And if you want to go ahead and have a more dominant market position, you may take that risk. [00:22:34] Speaker B: Yeah, but this is where I think it's really interesting, because when we say responsible AI, in the eyes of who? Because if you're looking at it from a Microsoft lens versus another vendor versus someone else, everyone's going to have a view on what ethical looks like. This is where I think it's going to get really interesting in terms of how things play out. Is it muddying the waters now? Because now we're already saying, oh well, that's ethical versus not ethical. How do you think that's going to look, Mary, moving forward? [00:23:00] Speaker A: I think it's going to be messy, and that's probably where they're going to get regulation involved. It reminds me of the AI Act: having a risk-based approach, assessing your AI systems, and depending on the level of risk, that's where you have your controls in place in terms of what your responsible AI model is for designing, developing and implementing that system. So yes, it's going to be muddy. And when I look at the AI Act, everything is about definitions. You have to define what an AI system is, what an AI model is. So I can see that taking some time; the ethical principles will follow from that, along with how those ethical principles translate into development practices for an AI system. So yes, it's going to be messy, and then there's trying to get agreement among multiple countries, and harmonization. The United States, Canada, Australia, the EU are all probably going to have their own varying AI regulations. So if you're a company that's global, how do you manage that moving forward, making sure your responsible AI practices adhere to this variety of global regulatory regimes?
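As context for the risk-based approach Mary describes: the EU AI Act groups systems into tiers (unacceptable, high, limited, minimal risk). The sketch below is a deliberately simplified, hypothetical triage helper; the attributes and tier rules are illustrative assumptions, not the Act's legal definitions.

```python
# Simplified, illustrative triage of an AI use case into AI Act-style risk
# tiers. The attributes and rules are assumptions for demonstration; the
# actual Act defines these categories in legal, not programmatic, terms.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    social_scoring: bool = False        # e.g., scoring citizens' behavior
    biometric_id: bool = False          # e.g., remote biometric identification
    employment_decisions: bool = False  # e.g., CV screening, promotions
    interacts_with_users: bool = False  # e.g., a customer-facing chatbot

def risk_tier(uc: AIUseCase) -> str:
    if uc.social_scoring:
        return "unacceptable"  # prohibited practices
    if uc.biometric_id or uc.employment_decisions:
        return "high"          # conformity assessment, documentation, oversight
    if uc.interacts_with_users:
        return "limited"       # transparency duties (disclose it's an AI)
    return "minimal"           # no extra obligations beyond good practice

if __name__ == "__main__":
    bot = AIUseCase("support chatbot", interacts_with_users=True)
    screener = AIUseCase("CV screener", employment_decisions=True)
    print(bot.name, "->", risk_tier(bot))            # limited
    print(screener.name, "->", risk_tier(screener))  # high
```

The point of the tiering is exactly what Mary notes: the controls you owe scale with the assessed risk, which is why the definitions matter so much.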
[00:24:04] Speaker B: So the other part which was really interesting to me was the statistic of 38% of ANZ respondents saying that not enough attention is being paid to ethical standards for AI. So I want to know: what do people want to see? What does paying enough attention look like? More meetings, more regulation? What does that actually mean, though, in the eyes of someone saying, okay, well, you guys have given this more attention, now I'm satisfied? [00:24:30] Speaker A: I think of the company Anthropic, because the founders of that company actually left OpenAI and said their goal was to establish an AI company that's focused on safety. And they said they reflect that safety in slowing down the rate of their product releases, being transparent about the model, and sharing what their responsible AI practices are. So I think you need to outline what your ethical standards are, what your principles are and how they translate, and you have to make sure it's transparent in terms of sharing that information. Also, the model itself: is it explainable for me? Am I able to see what your model consists of? Are you sharing that information and also doing self-reporting? I know a number of companies have signed on to a code of ethics for responsible AI. So how are they showing that code of conduct? Are they discussing their processes? Are they going for some sort of certification process to show that they're adhering to ethical standards? To me, that needs to be defined as well, along with how it translates into software development activities. [00:25:32] Speaker B: So now I want to focus on the impact on jobs. Now, this is something that you and I both know people are worried about. They're concerned about what this looks like. There are a lot of questions, and there's a lot of media propaganda about the Terminators coming in and taking over our world. So let's start with that as a very open-ended question on what your thoughts are, what's your read, and any sort of statistics you want to roll in there; quite happy to hear them. [00:25:55] Speaker A: I think this goes back to OpenAI. They mentioned that for certain jobs, something like 20% could potentially be eliminated based on AI. And I do want to highlight that there are certain jobs that are candidates to be replaced by AI: high-volume, repetitive activities are jobs that are perfect to be automated. It goes back to my roots in digital transformation, going back 15 years. When we were implementing those systems, we were replacing what I call in-person service delivery with online services; we automated a lot of the customer service functionality, and I see that same trend happening with AI. So there are certain job types, whether it's customer service or accountants, especially back office work with accounts payable, accounts receivable and financial reporting, where once again it's high volume and repetitive, and you can use an AI system to replace that. But at the same time, I do want to highlight there are a number of new jobs that will support AI. I saw the same thing with digital transformation and also with social media: you have social media analysts, consultants, different roles in communication. So when you look at AI and supporting it moving forward, you'll need trainers for your AI, staff training, as well as change management consultants. And for the regulatory environment to support that, you'll need auditors that are able to assess the algorithms, and also more risk management: risk management to assess AI, to understand its level of impact and whether it's a high-risk, medium-risk or low-risk system for society. So yes, some jobs will be impacted, the ones that are more transactionally focused like customer service, but it will also open up new jobs moving forward. And also, going back to the OpenAI research, they mentioned that this tool will be an assistant, so it will also make you more efficient. So yes, you will be impacted, but you'll be able to do your work quicker, especially in terms of writing, some of the analysis, or even programming. With programming, you can use generative AI to optimize your code. So some jobs will just become more efficient, and that also opens them up to take on additional or higher-level responsibilities. [00:28:05] Speaker B: Yeah, this is so interesting because, again, people have even asked me, well, what do you think about what's going to happen?
And look, I don't have all the answers, but it's about asking the questions and getting people like yourself to come in and answer them. But the other thing I want to raise with you as well, and you sort of touched on it right there at the end, Mary: okay, it's going to automate some jobs, but it's going to open up new jobs. Do you think that's the part people aren't focusing on? Yes, things will be automated, but we're going to open up new jobs. Is that the part people are maybe not as aware of? They're probably just thinking it's being automated and there are no other jobs, then? [00:28:43] Speaker A: I would agree with that. But there are some professions that are trying to have that conversation, for example, the cybersecurity field. When I attend cyber conferences, they're exploring generative AI and how it can be an assistant to make them more efficient. So I think some groups are earlier adopters of the technology, and they're seeing how they can use it and also what new roles are required. As for, let's say, auditors or internal auditors, I feel that additional conversations need to be had with that group, once again talking about the use cases and providing specific examples. For me, this goes back to organizational change management: we need to talk about the future, what it looks like, but also the opportunities and how to prepare staff. Just going back to cybersecurity here in Vancouver, there have been a number of sessions on reskilling, like how do I use generative AI in my job, or some of the courses. There have been quite a few networking events for people to share their knowledge and experience and how to plan moving forward. I do think other professions should have those conversations or the same type of dialogue. [00:29:44] Speaker B: So in terms of the new jobs opening up, wouldn't we say, even when the Internet came out in the 90s, people probably worried, but then look how many jobs that's enabled and opened up. As a society, there are always going to be jobs that no longer exist. Blacksmiths don't really exist anymore, not that I'm aware of, and we have to evolve. So this is just another evolution. And I get it, we don't know what we don't know. But if you actually look back in history, jobs that used to be done in the 50s, we're not doing them anymore. And maybe this just feels more prominent and more in your face because we have social media and things like that we didn't have before, so maybe people are aware of it at a deeper level. But that's just the way evolution has gone, wouldn't you agree? [00:30:28] Speaker A: I agree. I think the stat is that 50% of jobs are yet to be created moving forward. I have an eleven-year-old daughter, so I'm kind of curious to know what career she'll have. It was the same thing when I started my career back in 2008; once again, social media was new, even the data analytics field in terms of big data. There were a lot of changes when we moved to these systems or set up these ERP systems: collecting data, more digital and analytical skills, opening up access on websites and digital platforms. That introduced new jobs. So I'm curious to see how this plays out, and also where my daughter ends up.
[00:31:04] Speaker B: Yeah, you're absolutely right, because back in the day, even when I was growing up, people weren't YouTubers and things like that, whereas now they are. That's technically a job, it's a profession, it's what people do: content creators, all those types of things. That wasn't a thing not even that long ago. These jobs, because of the Internet, have enabled new career paths. And I just think that maybe it's going to take a little bit of time until we define those 50% of jobs and what they actually look like. But then my other question would be on the education front. In Australia they've made a big investment, I think with Microsoft as well as TAFE New South Wales and a few of the universities here, around creating these micro-credential courses. Do you start to see more of a trend towards that? Because historically, if you're doing a degree for four years, a lot of what you learn will, by the time you get out in this day and age, probably have atrophied to a certain extent, because we've already moved on from then. So do you see as well, Mary, that we'll see more of these micro-credential courses, more smaller courses? Because again, you may finish the course and the next thing comes out. It's probably different if you're a doctor or a lawyer or something like that, where things are a little bit more static and not at the velocity at which IT and cybersecurity move. So do you see that as a trend as well, in terms of education and upskilling and reskilling? [00:32:21] Speaker A: I agree with that. I see the knowledge as very dynamic; it's going to keep changing over the next year or so, especially as we get greater generative AI capabilities and explore AI usage in different fields. So I do see the micro-credentials, I do see the networking, and also online there's DeepLearning.AI, Microsoft with its courses, and even LinkedIn Learning; they have some pretty good courses when it comes to generative AI. So there's a lot available, once again, through micro-credentialing online. And a number of these courses are free. When I look at the Google offerings in terms of generative AI and prompt engineering, those are free courses. I've found even companies need to develop skill sets in this area, so they are offering training. So once again, it's through credentialing, micro-credentialing, online services, or even partnerships with universities, or sometimes hosting a workshop or an event, for example the ISACA Vancouver chapter here hosting an event just to talk about what's happening in generative AI and the potential opportunities, especially in cybersecurity. [00:33:22] Speaker B: So I guess with any of these conversations there's always a little bit of light at the end of the tunnel, so optimism in the face of challenges as well. Although AI is uncertain and risky, and we don't have all the answers, 76 percent of respondents, according to the report, believe it will have a positive impact. So I'm curious to know: what type of positive impact are we talking about here? [00:33:43] Speaker A: I think a positive impact, just based on the conference I attended today, is that the issues with generative AI will be resolved. Part of that is that the vulnerabilities, such as the hallucinations and the biases, will have controls implemented in place to correct them.
And also I think there's hope that organizations will catch up in terms of developing an AI vision and strategy and providing support through training as well as organizational change management. So I think positive impact means it benefits me: I'm productive, I feel comfortable, I feel successful using that technology, and I have a career path moving forward, in terms of reducing job displacement. But positive impact also means society. With society, misinformation is reduced, so we don't have to worry about deepfakes; there are tools to recognize what deepfakes are, so the impact on democracy and elections is contained. It also means ensuring that we have proper controls, especially in terms of privacy, making sure that human rights are protected, as well as capable cybersecurity professionals to minimize the social engineering and other risks of these technologies. So when I look at positive impact, I think of it from a personal point of view, but also from society's. [00:34:53] Speaker B: Yeah, absolutely. And I think we're just scratching the surface of the power and the capability of what this is going to look like. And I'm always for doing things differently, and for change, done efficiently, of course. I understand that we're creatures of habit and we don't like change, but I think this change will be worth it. And of course, like I mentioned before, there's always going to be a bad side of things, but hopefully with the right types of conversations, with people like yourself coming on the show to really demonstrate, hey, yeah, there are some bad elements to it, but there are a lot of great ones as well. So do you think, in your experience, that perhaps people will generally become more accepting of AI and generative AI over time, or where does that land with you? [00:35:40] Speaker A: I think it goes back to the proof being in companies' actions. What happens with OpenAI, Microsoft, and even the regulatory regimes that are being proposed in various nations? It goes back to what actions happen moving forward, and is there progress? I think if people see that, then they'll be more at ease. [00:35:57] Speaker B: So, Mary, were there any final thoughts or closing comments you'd like to leave our audience with today? [00:36:02] Speaker A: I guess, for me, I view AI as a positive technology, and part of that is having conversations. I think it's important, as a community or a technology community, that we talk about these issues, understand where people are coming from, and how we can support each other moving forward. Just based on the recent conferences I've attended, there's quite a bit of what I call distrust and negativity. So hopefully I can participate in and strengthen the conversations to show its positive potential and how we can work together as a group to make sure that people become more accepting of AI and more at ease. [00:36:38] Speaker B: This is KBCast, the voice of cyber. Thanks for tuning in. For more industry-leading news and thought-provoking articles, visit KBI.Media to get access today. This episode is brought to you by Mercksec, your smarter route to security talent. Mercksec's executive search has helped enterprise organizations find the right people from around the world since 2012. Their on-demand talent acquisition team helps startups and midsize businesses scale faster and more efficiently. Find out [email protected] today.
