April 05, 2024

00:40:26

Episode 253 Deep Dive: Mike Hanley | The Role of AI in Addressing Software Security Challenges

KBKAST

Show Notes

Mike Hanley is the Chief Security Officer and SVP of Engineering at GitHub. Prior to GitHub, Mike was the Vice President of Security at Duo Security, where he built and led the security research, development, and operations functions. After Duo’s acquisition by Cisco for $2.35 billion in 2018, Mike led the transformation of Cisco’s cloud security framework and later served as CISO for the company. Mike also spent several years at CERT/CC as a Senior Member of the Technical Staff and security researcher focused on applied R&D programs for the US Department of Defense and the Intelligence Community. When he’s not talking about security at GitHub, Mike can be found enjoying Ann Arbor, MI with his wife and eight kids.

Episode Transcript

[00:00:00] Speaker A: I think anytime you have a no in security as a security leader or practitioner, I think you have to step back and really look at why that happened. There probably were a number of missed opportunities to be on the same page about what's important to the organization from a security standpoint and what the business was trying to accomplish. So again, I think that all goes back to just communication. And I think a modern team in security, independent of AI, has to be communicating with their developers and figuring out how to work together about what's important and generally being on the same page. Because again, I think anytime you get to a no, there's usually something that was missed there well before that.

[00:00:41] Speaker B: This is KBKast. This is a primary target for ransomware

[00:00:46] Speaker A: campaigns, security and testing and performance and

[00:00:49] Speaker C: scalability, risk and compliance. We can actually automatically take that data and use it. Joining me today is Mike Hanley, chief security officer from GitHub. And today we're discussing how AI makes the promise of shift left a reality. So Mike, thanks for joining and welcome. I'm super excited to have this conversation with you, so I really want to get straight into it.

[00:01:12] Speaker A: Yeah, happy to be here with you today. Thanks for the invitation to be on the show.

[00:01:15] Speaker C: Now I'm curious. Okay, I wanna start with the phrase shift left. I wanna start with this because, you know, depending on who you talk to, people's eyes seem to glaze over, or they seem to be, I don't know, even agitated by the term. So do you think, in your experience, that people just throw around the term shift left a lot? Because, you know, obviously we run a media company, and I'm seeing it a lot just in day-to-day articles as well. So what are your thoughts then on that?

[00:01:42] Speaker A: Yeah, I think shift left really is one of those information security terms that's probably been overused in the last, you know, decade plus. You could also probably bucket some other things in there, like zero trust, where they've meant a lot of different things to a lot of different people at various times. But I think if you pull back from that, what people have tried to communicate by saying shift left is the idea that you're giving helpful security feedback to developers as early as you can. Most of the time that's meant you have some tool that integrates with your CI and CD framework, and you get feedback sometime after you've written your code, through the test and analysis process that occurs later. And that's largely been the best that the industry has been able to do for quite some time. But what's exciting, and what we're on the front end of now, is that AI is actually going to completely redefine shifting left. Because if you think about it, rather than feedback at test time, which is after you've finished whatever task or project you were working on, what we're actually saying now is that AI, having your pair programmer right there with you, who's an expert on security and has the benefit of all the things the model's been trained on, can give you real-time security feedback as you go, when you're actually bringing the idea to code through your editor. And it doesn't get any further left than that in terms of shifting security left.
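To make that contrast concrete, here is a minimal sketch (not from the episode) of the kind of issue a test-time scanner reports only after the fact, but that an in-editor AI assistant can flag and fix as the code is being typed. The function names and table schema are invented for illustration; parameterizing the query is the standard remediation for this class of bug.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # What a CI-time scanner flags after the code is written: untrusted
    # input interpolated directly into SQL, a classic injection risk.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchone()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The kind of fix an in-editor assistant can suggest in real time:
    # a parameterized query, so the input is never treated as SQL.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchone()
```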
So I'm really, really excited about that as a new horizon that we're getting into now.

[00:03:16] Speaker C: Okay, that's really interesting. I love new horizons. So from just a human being point of view: if we even think about the whole, you know, secure by design idea that originally everyone was talking about, and then obviously we got to DevSecOps or SecDevOps, depending on who you speak to, that was a bit of a change from a mindset perspective. So now we're sort of talking on the AI front in terms of redefining the whole shift left approach. How do you think people are going to respond to this now? Because again, I'm always very optimistic. I love technology, so for me it makes sense. But I'm curious, with your knowledge and your insight, do you think people will be more receptive towards this? Or again, does it go into that old mindset around, okay, well, I need to approach this new paradigm completely differently now?

[00:04:01] Speaker A: Yeah, well, I'm with you, Chris. I tend to be an optimist about these kinds of things and what they can lead to from a productivity, security, and experience standpoint going forward. And I'm very bullish and very optimistic about what AI is going to do for security. But you raised a good point, which we're seeing, and it's going to be one of the headwinds for AI to manage through: a lot of people want to invent a new framework for assessing AI tools, and there's a lot of uncertainty about what the future looks like because this is a rapidly changing space. The advice that I've been giving people is that for a lot of tools that are built with AI in mind to help get some task done, you really already have a lot of the same vendor review processes, the same playbooks, for thinking about: do you trust the vendor? Do you trust the tool? Do you understand where your data is going, et cetera? A lot of that stuff you're already doing for third-party vendor risk management. So I just try to remind people that just because AI and the experiences that we're building with AI are new doesn't mean that we have to completely reinvent how we think about assessing vendors and tools. Again, for example, if you're bringing basically any other tool into your environment, one of the questions you're going to ask is: who has access to my data? Where does it go? How is it used? How is it secured? You actually need to ask the same questions of vendors that are building tools that happen to be powered by AI as well. So I encourage people to use what's on the truck, if you will, from a process standpoint, to evaluate those opportunities, and try to help people get over that initial "well, what are all the things that could go wrong?", which is obviously part of what you want to evaluate, but really focus on what can go right for your business from adopting AI, especially for a place like developer tools. And the numbers speak for themselves. I mean, we're seeing pretty phenomenal improvements, not just in developer productivity, but also from a security standpoint. And I think when people start to weigh that out and actually look at those things, it helps have a more reasonable conversation, not one that's just based on uncertainty about what AI can bring in the future.

[00:06:00] Speaker C: Okay, so there are a couple of things you said in there that I want to get into a little bit more, when you said having a reasonable conversation. So what does reasonable look like to you?
[00:06:08] Speaker A: I mean, I think reasonable to me looks like using a lot of the same plays that you already probably use in your company. Right? Like, your company has security policies in place. You have expectations on what type of data you trust with what types of vendors. You ask for data flow diagrams. You seek to understand the security practices associated with the vendor that you're working with. We have standards. We have templated security questionnaires. The industry as a whole has a litany of these kinds of things that we can leverage to help assess the trustworthiness of a vendor or a tool and help us make an informed decision about whether we want to use them or not. The reality is that works just as well for tools powered by AI as for tools that are not. I think that can help you evaluate what, if anything, is unique about AI, both from a potential risk trade-off standpoint and from a benefit standpoint to the organization, and stay focused on where those trade-offs are, not on reinventing a special process just because it happens to be AI. And the good news is, I think we see a lot of great resources being made available to help people understand that. There are actually plenty of very successful initial use cases for AI that can help people prove out those benefits. And the existing third-party vendor risk management approach that we have is something that people can just leverage straight off the truck as well.

[00:07:29] Speaker C: Okay, so I want to go back a step. When you mentioned before assessing vendors and tools, et cetera, and then asking the right questions, like, well, who has access to the data? Is it secure? All of those types of things make sense from a security perspective. But from a, I don't want to say the word average developer, but from a development point of view, even if you go back historically, maybe those questions haven't been as ingrained as they are for a security practitioner. So are you starting to see a shift, with developers asking those questions more and more? Like, even myself going back ten years ago, developers weren't really asking those questions. Nowadays it's becoming more prominent because we've got people like you and me banging on about it a lot. But are you starting to see that coming through a little bit more in the day-to-day conversations?

[00:08:13] Speaker A: Yeah, I mean, I think so. It depends on the organization and their size and what kind of functions they have. But, you know, if I'm talking to an average sort of mid-size commercial customer, they might have somebody who works in procurement, they might have a legal team involved, there's somebody in IT who needs to run the tools, and then there's the developer who ultimately wants the experience. And what I find is there's a particular, like, time to wow, if you will. Right? What's the sort of wow moment in terms of how you demonstrate the overwhelming benefit to the business? And a lot of the developers, when they can show numbers, for example, that they're writing code 55% faster, or that in certain languages the AI tools might be helping them write 60% more of their code, or they're using AI to fix bugs that they didn't know how to fix before or didn't have expertise on, or are getting better suggestions than what they got before. These are not small incremental improvements.
These are very, very significant things that you can actually quantify in terms of productivity. I mean, if you're writing code 55% faster, you can tie a dollar value to that, generally speaking. And I think that helps the developers have a conversation with the non-technical folks who are involved in a procurement process, and helps those folks better understand what the upside to the business looks like. And I think one of the things that's key to building trust with AI, again, it's new, there's a lot of uncertainty, is also just being clear and transparent about what it does. So my recommendation to vendors out there that are building tools based on AI is: don't keep any secrets, just share. What are you doing? What kind of models are you using? How are they trained? How do you store data? What options are you giving customers in terms of whether you store data or not? Or how do you use any data or telemetry that you get? And I think that working in the open, that responsibility that goes into being a pioneer in that space, is going to be key to helping build trust as well, especially for maybe people who aren't as bullish or optimistic about it as we are, or people who are just trying to run through these very standard business processes to help assess risk. Whatever the case is, I think the transparency there from the vendor community is very, very important to complementing the enthusiasm, and then also the objective stats of the performance of the tools that you might get from the developers.

[00:10:27] Speaker C: Yeah, that's an interesting point around building trust, because, depending on who you ask, there is this conundrum around, oh, AI's bad, or no, it's good. Obviously we're optimistic on this interview. So just going back to that for a moment, you said, you know, being clear upfront, transparent about what it does, all of those types of things. So then I'm curious to know, why wouldn't someone be clear and transparent upfront? Or would you say that people perhaps just don't know those answers, so they just don't say anything?

[00:10:53] Speaker A: Our approach at GitHub has been to lead with publishing everything that we've got and be clear about what we're doing and how we're doing it. But I think in other organizations it may simply be that not everybody necessarily understands. I mean, in 2023, if you look at all the new product offerings and new startups and really the advent of generative AI in a much more mainstream setting, right? I mean, everybody remembers the moment they had their first experience with something like ChatGPT. That was only a year ago. I mean, a little over a year ago, right? Maybe 15 months ago now. So the world has changed a lot in a very short period of time. So I think as people have tried to develop new experiences very quickly on top of some of these capabilities, it's probably the case that not every organization has taken full stock of that. But, you know, my advice for organizations that haven't done that is: take the time to catch up, make sure that you understand what you're doing. Lead with that transparency. Again, I think it helps, especially for overcoming objections or at least having an open dialogue with some of these other teams that are focused on managing risks. But again, this is going to be a very fast moving space, not just right now, but for the next several years.
I mean, if you look at how much has changed in, again, the last 15 months, and then you project that forward, you'd expect at least that rate of change, if not more. I'm expecting more, personally. You know, we're all going to have to be racing to make sure that we keep up-to-date information about what's going on, not just keep information out there, period. So that's definitely going to be a challenge overall in the space to continue to build trust in AI: keeping up with that pace of change.

[00:12:27] Speaker C: But I guess you guys at GitHub have sort of done that from the beginning, right? So I guess that's not necessarily an issue for you. But then I'm curious, going back to the trust side of it, because everyone talks about building trust, which is a hard thing to answer. But as you were speaking, what was coming to my mind was, well, you've been transparent from the beginning. People can access it, they can read it, they can absorb it, they have all that information there. It's kind of like you're not hiding anything. So would you say that because of that, that's what's enabled the brand and the trust factor, as opposed to perhaps other tools out there that maybe aren't as transparent about things? And if so, is it something that perhaps other companies can look to employ, around the same sort of transparent strategy?

[00:13:10] Speaker A: Yeah, I mean, absolutely. I absolutely think it's been a contributor to the success of Copilot. And I frequently have meetings and sit down with customers to talk about GitHub Copilot and how we're using it internally at GitHub, or generally what capabilities it could bring to bear for their organizations. It's very, very common that I'm not just talking to my counterpart who might be running engineering or security at the customer; it's often that you have folks there from the legal team and the procurement team. And generally speaking, you know, we're able to get quickly to the "how can we help them?" because we've already answered everything else that's out there. And again, I think that's a great approach for people to proactively take when they're building these new products: lead with transparency, lead with what you're doing and how you're doing it. Get all those questions upfront, and that helps you very quickly get to the "great, now how can we help you as an organization?"

[00:14:04] Speaker C: Now you made a comment before and you said, what can go right with AI? So again, we're optimistic, so I'm really keen to get into that, because there are a lot of articles and doom and gloom and all that type of stuff out there. But you know, I'm very much a proponent of AI as a tool. It's your point before around 55% faster. Those are huge numbers. That's not like one or 2%. That's a lot. So I'm curious then to know the benefits and, from your perspective, what is going right at the moment.

[00:14:34] Speaker A: Well, I think the first real tool on the scene here that's gotten major traction has obviously been GitHub Copilot. And so we've been in a fortunate position to see what that rapid adoption looks like. And that's already now the most popular, most widely adopted AI developer tool out there. There are more than a million paid users using that tool already, but that's just one use case. I like what you said a minute ago, which is thinking about AI as a tool.
Well, if you have a tool belt or a toolbox, you have lots of different tools to do lots of different jobs. And for us, GitHub Copilot, the pair programmer, is just one of those opportunities. And we're also doing other exciting work, for example in the security space. We recently released some enhancements to our code scanning tools that now use the power of AI to automatically suggest fixes for findings when the code scanning tool runs across your code. And this is really powerful, because if you think about it, it's great that your pair programmer is helping you as you're bringing the idea to code in the editor, but what about all that other code that you've written since the dawn of time, that didn't have the benefit of an AI pair programmer over your shoulder, or didn't have the benefit of whatever you've learned through your experience, or came from the developer who no longer works at the company and who wrote that code ten years ago? Well, the cool thing is now code scanning doesn't just tell you that you have a bug. It gives you a relevant, well-explained, clear fix based on what it knows, not just about that bug, but the context around it. And that is really powerful if you think about helping organizations manage things like technical debt and vulnerabilities. If you don't just get better at writing new code, but also get better at fixing your old code, you see the advantages of this really start to compound quite quickly. And that's only two things that we've talked about. I mean, what if it's also writing docs to describe your code faster? What if it's better summarizing your pull requests? What if it's giving you a narrative summary of the contents of a repo so that you don't need to poke around until you figure it out for yourself? All of these things are going to just completely transform the developer experience. And again, the neat thing is we're at such an early stage, and these are already such powerful use cases. That's part of why I'm so excited, not just about the overall productivity implications, but because I do think that AI will fundamentally reshape how we think about security. The power and the benefit of it are so obvious even in these early days that it is worth figuring out any of the other challenges that come our way, because this is probably one of our best bets at figuring out the slew of technical debt that the Internet and all modern commercial entities and governments are built on. Right? Like, we know that in a lot of cases we're relying on systems that are a decade or decades old, and human-powered refactoring isn't going to get us out of that problem. So I'm very, very excited about the opportunity there, not just in the commercial space, but also in open source, where, again, most of the code that we all depend on to some degree or another already exists, and there's an opportunity for AI to also help creators, developers and maintainers in those spaces improve their projects and give them some of the support that they need there as well.

[00:18:00] Speaker C: Okay, so the part that I want to press on a bit more, which was really interesting, is what you said around old code. So, for example, I've worked in organizations before where it's like, oh, there's literally one guy who's the only person that actually knows about this code. But what happens if that guy leaves? What happens if this guy gets sick? So it's more around key man risk or key woman risk.
And then what you're saying means that there's built-in redundancy, because we don't need to just ask the same person that's worked there for 40 years, hey, what's going on with this? When, you know, you're talking about Copilot, it can do it for you: summarize things, explain things, make sense of the docs and those types of things.

[00:18:38] Speaker A: Yeah, I mean, the interesting piece of what you're describing there is that for so long, organizations relied on tenure and institutional knowledge and community knowledge within organizations about how things worked or how they were manufactured or who wrote the code. And I think what's interesting is, when you think about the power of some of the AI, or generally just LLM-backed, technologies that we're seeing enter the market, they have the ability to synthesize some of that context much faster than you could get on your own, even sitting down with whoever wrote that code ten years ago. And the ability to look at what's happening around you, or the context in which you might be asking a question, or even to interact with the code in a way other than just clicking through the editor and going line by line; the idea that you can ask a question in a natural language way through chat, which has obviously become one of the most popular experiences for people interacting with AI apps over the course of the last year, year and a quarter; this is a complete game changer, because you don't have to rely on that individual anymore. And the idea that you can just ask your AI-powered assistant directly, that I think is a big, big game changer. What's neat is this gives organizations an incredible amount of agility. If you're a security engineer coming in to do a review, and the feature team is working on something that they're really busy with and doesn't have time to necessarily sit down and take you through it all, well, great. You could just load up that repo, ask some questions about the code to get started from something like GitHub Copilot Chat, and then you probably have most of the context that you need to head off to the races with your review. That really is exciting to me as well, because it does give people fluidity. It provides for internal mobility. I think it increases the amount of access that people have to opportunities to be developers. It changes the game for getting into development in the first place. I mean, some people will learn to code now by asking natural language questions to a chatbot. That's phenomenal. And that's remarkably accessible for people with different learning styles, where maybe the existing ways of getting into programming or becoming a developer didn't work well for them. So I'm very, very bullish on what that's going to do, not just for teams, but for access generally, for people who want to get into development and want to create.

[00:20:54] Speaker C: Yeah, that's a good point. And that was something I was going to ask you about around skills. So you've obviously summed it up around it's going to lower the barrier for entry. But then what are people out there worried about? Because I've heard from multiple people that they're worried about AI taking jobs, but then, you know, again, it's the tool side of it. But then to your point, it means that we can potentially get more people in on this front. So what would be people's reservations then on that side of things?
[00:21:22] Speaker A: I think it's, you know, back to what we talked about a little earlier, where I think there's just uncertainty about what the future holds. And understandably, some people come at uncertainty from a place that might take a glass half empty view of it, whereas you and I seem to be glass half full on this. But I understand everybody's kind of looking at those things from a different place. But my view on this, and I've seen this shared very broadly by a lot of other people, is that AI will probably accelerate job creation and accelerate opportunity in places like development, because again, the world at this point has been thoroughly eaten by software. And we know that even, for example, in security, near and dear to us, the job shortage that's reported annually in places like the United States, where I am, is generally measured in the hundreds of thousands of unfilled cybersecurity jobs that exist out there. And we frequently point at a lack of qualified candidates or a skill shortage or a training shortage or whatever the excuse is that year when you see those reports. Well, if it's now easier to become a security professional, or if AI helps you with some of that security context, or helps people get into the security field who wouldn't normally have had access to it or wouldn't normally have had a pathway or an interest to get into it, then that's great, because it's going to help more people get into those spaces who weren't there before. So my view is, again, it's another way for people to get into development. But it's also creating new work and new opportunities for people to solve problems that will emerge as well, just as AI continues to grow. So we mentioned a minute ago the legacy code, the decades worth of code that the world is built on. Challenges that we haven't addressed yet, but that we will eventually need to get to, are things like: what are we going to do about moving off of a language like COBOL? Or what are we going to do about the unmaintained open source projects that are running core parts of the Internet? I mean, these are interesting questions that AI and communities and the public sector and the private sector and academia are going to need to come together to figure out. But AI is the common element of that, and I think it's going to be a force multiplier that wasn't in the picture a few years ago and that can now help us manage some of those problems at the scale of things like the core open source components that could benefit from this, which power everything from your Tesla to your refrigerator to your iPhone.

[00:23:40] Speaker C: So I want to switch gears just subtly, and maybe let's talk through, again, the benefits and your thoughts on how AI can be leveraged for secure software development. I think the skills one is massive. Again, that's something that's coming up a lot in my interviews now, but from how you've answered that, I think that's the way people need to be thinking along those lines. So I guess it's listening to people like yourself actually explain that this is going to enhance people, and all types of people getting into the field. So now I'm keen to hear more on, yeah, the secure software development side of things.

[00:24:12] Speaker A: Yeah, I mean, you know, to go back to the skills piece for just a brief moment: security is a big discipline. Like, I don't know anybody who's good at everything in security. It's just too broad of a space.
We have specialties, we have areas that we have experience within. You know, some people have spent time in cryptography, some people have spent time as pen testers, some people have spent time in security design and UX research. And all of this kind of comes together to be the broader landscape that is security. But I haven't met anybody yet who's good at all of it. It's just too big of a space. But again, this is a benefit, where if you have some reference or context in some areas, AI can certainly help you with filling in a lot of the rest. Or it can be a tool to help supplement where you may not have that experiential knowledge per se. But to get to the SDL, or the secure software development lifecycle, piece in a little bit more detail: I think the prior point is relevant, because the job skill shortage is one of those classic things that we point to as we say, well, there aren't enough security people to do the reviews or sit over your shoulder and watch developers while they work. I would assert we actually don't want that at all. And I don't think developers want somebody shoulder surfing them from the security team the entire time that they're trying to work. I think everybody, the developers generally speaking, wants good security outcomes, but as an industry we have failed to put them consistently in places that work best for developers. If you think about your average security tooling experience today, and I'll call back to the shift left conversation we had at the beginning, it's not easy for developers to interact with security tooling. Most of the time, security experiences are not designed with developers in mind. They have to get out of whatever they're doing, go to some system that the security team runs that they weren't involved in selecting or configuring, get some findings and reports, and go figure out how to deal with that the best they can. Then they've got a red team report coming back saying that they've got 15 vulnerabilities that they need to fix. And we're constantly asking people to react to security information. I don't think that this is a great experience for developers. It's where we are as an industry, but I don't think it's a great experience for developers. But the AI-powered experience of more continuously embedding security feedback in every single part of the experience, and doing so with great context, that's where the real power is. So if you think about something like GitHub Copilot, which we mentioned, we're able to give you feedback about what you're doing in the moment. That's not something that any security team is really equipped to do anywhere today. So that's already a big game changer. There just aren't shops that have people shoulder surfing every developer all the time while they're doing their work. That's thing one. Then there's the security review process. Typically you run scanners and you get back results and findings, and then as a developer you need to sort through all that noise. Well, if the AI is simply telling you, hey, we found ten things; we automatically fixed six of them; these four you need to take a look at; we have good suggestions for three of them that are accepted 98% of the time when we give them; and then on this last one, we think this will work, try it, and if not, we'll give you the next suggestion: that will completely reimagine that entire test experience, because you're giving all the context and you're explaining the bugs and the fixes based on the rest of what the developer is doing.
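As a rough sketch of that triage experience, here is how the "found ten, fixed six, review four" summary might be modeled in code. Everything here is hypothetical: the Finding type, the rule names, and the summary wording are invented for illustration and do not reflect any actual GitHub API.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str              # e.g. "sql-injection" (hypothetical rule names)
    auto_fixed: bool       # the tool applied a fix on its own
    suggestion: str | None = None  # a proposed patch awaiting review (Python 3.10+)

def summarize(findings: list[Finding]) -> str:
    # Bucket results the way the episode describes: some fixed
    # automatically, some with suggested patches, the rest manual.
    fixed = [f for f in findings if f.auto_fixed]
    suggested = [f for f in findings if not f.auto_fixed and f.suggestion]
    manual = [f for f in findings if not f.auto_fixed and not f.suggestion]
    return (
        f"Found {len(findings)} issues: {len(fixed)} fixed automatically, "
        f"{len(suggested)} with suggested fixes to review, "
        f"{len(manual)} needing a closer look."
    )

# Example: ten findings, six fixed automatically, four left to review.
findings = [Finding("sql-injection", True)] * 6 + [
    Finding("path-traversal", False, "resolve and validate the path first"),
    Finding("weak-hash", False, "switch md5 to sha256"),
    Finding("open-redirect", False, "validate against an allowlist"),
    Finding("race-condition", False),
]
print(summarize(findings))
```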
That's a dream experience, I feel like, for getting security feedback at that stage. Then when you think forward to operating whatever it is that you built, we're already seeing opportunities to integrate skills or ecosystem extensions where AI can help recognize what's happening in an environment and suggest reactions to it. So I think this idea that the incident response loop will potentially get tightened up pretty significantly as well through AI, that will also be a very big game changer. So I see, really across the whole spectrum of things, huge opportunity for improvement, right? Like we're saying: better signal at the onset when somebody's writing something; the potential to simplify something like vulnerability management, which is traditionally just really hard for a lot of organizations to get right; much clearer, crisper feedback about security findings, even suggesting fixes, if not automatically just applying them for people; and then helping assist with the deploy, operate and incident-respond pieces of that as well. That's a pretty big set of changes across the board, and those are just a few examples that we've touched on. So I'm actually really excited about that. Security teams are obviously going to need to adapt to that as well and think about where that shifts their attention and focus. But that is a great problem to have, because organizations just don't get that kind of coverage today across the SDLC; even the most well-resourced organizations would struggle to get close to that.

[00:28:59] Speaker C: That's an excellent response, because what was coming to my mind, going back to the shoulder surfing, which is really annoying, and I can talk about this because I've worked in a penetration testing team internally myself, is that these are some of the issues that we used to run into, because then it kind of feels like, oh, you're policing us, and, you know, culturally it creates a problem. With that in mind, would you start to see the culture shifting? Because again, we don't need to go and ask someone a question when, you know, you can ask it yourself, and therefore you're removing that sort of embarrassment of asking someone a question because you don't really know. You know, there have been multiple cases of people just never really asking stuff because they never wanted to feel embarrassed that they didn't know something. Would you start to see the culture within security teams change? Let's just focus internally for a moment, because now that's being removed, and the whole shoulder surfing thing just won't be a thing moving forward with leveraging AI.

[00:29:56] Speaker A: Yeah, I mean, if you look back 10 or 15 years ago, right, I think a lot of security teams had very much the guns, gates and guards approach, the more traditional view of security. Right? Like the department of no, if you will. And I think especially in the last ten years, my hope is that more and more security teams have moved to being the department of "yes, and", right, where they are more proactively engaged with the business, trying to help them get to the outcomes that they want to achieve. And you can, by the way, manage risk while also being approachable and accessible to other teams, not just developers: finance, the legal team, you name it. The security team should be seen as a resource, for sure, to be helping with all these problems.
But especially with, you know, the advent of AI tooling, I think it's even more important that security teams in particular think about their role as a business enabler, because you can't have what you just said happen, where people don't go to the security team. Because while AI is an extremely powerful tool that's going to continue to develop in a rapidly moving setting and context, it's going to continue to represent an opportunity for security teams and developer teams to actually work more closely together and communicate more frequently, because it's going to solve some problems, and it's going to free up time to focus on things that most security teams haven't had time to focus on in the past, or it's going to present opportunities for security teams to learn new tools themselves, or new skills themselves, or adapt to having AI in their own workflows. So that idea that the culture of the security team needs to be transparent, needs to be focused on communication, and needs to be clear about what the priorities of the business are while working with other teams, my hope is most teams were moving in that direction anyway over the course of the last ten years. But because the pace of development is going to just radically accelerate with AI, the security teams that don't adapt to that are probably going to make their organization struggle more, because they will be more out of tune and more out of touch with how the rest of the business is trying to operate in an AI-first world. And I think this is one of those things where we won't go back to where we were previously. I mean, with AI and adopting AI tools, not just for developers but generally, we won't go back to a day where that's not happening, because it's such a differentiator from a productivity standpoint, and in a competitive economy where there's lots of innovation happening, you need to be able to compete in this space. I think it's a matter of time as most organizations think about what the place is for them to adopt it, and how they do it, and at what pace. But that's going to necessitate, again, that change in security mindset, because if the organization takes off in one direction with AI and the security team's not figuring out how to do it in a way that's safe and that manages the risk appetite of the organization, they risk getting left behind. And then once they've adopted it, if they don't stay in communication with their developers, they're not going to be able to figure out how to continue to best meet their needs. Because again, if you look at how much the space has changed just in the last 15 months since ChatGPT came on the scene, if you're not talking to your developers, or you're not talking to the other consumers of AI tooling in your organization, you're not going to be able to keep up with the rapid pace of change and the new opportunities that they're going to want to see, and ultimately that's bad business. I think it's really, really important that security teams are in touch with, communicating with, accessible to, and resources for, ultimately, their teammates and other functions in the business.

[00:33:25] Speaker C: Yeah, that definitely makes sense.
I think maybe my question was around, historically, there being, well, bad blood's not the way to position it, but just, in my experience, people being less inclined to want to speak to us because they kind of felt like we were helicoptering them and we were the police telling them, no, you can't do that. And maybe that dynamic, I think, will then change over time, perhaps. So it's probably more from that point of view: will you start to see things become more receptive between those two teams?

[00:33:54] Speaker A: Sure. And again, my hope, I think, is that that was changing already. But I think anytime you have a no in security, as a security leader or practitioner, you have to step back and really look at why that happened, because if you have to say no, or block a deployment, or stop something from shipping, generally in that situation there probably were a number of missed opportunities to be on the same page about what's important to the organization from a security standpoint and what the business was trying to accomplish. When I reflect on the times that I've been in situations like that, they are always missed opportunities to be in a shared understanding about what's important to the business. So I think when it comes to developers not wanting helicopters around them watching everything that they're doing, it's like, yeah, well, really what you're saying is: how do we make a great developer experience? Because if you're a software developer, you want to develop software, and you probably came to work to create something, and nobody's comfortable when they're being hovered over. But security can solve problems like this with thoughtful design. And part of thoughtful design is talking to the developers about, hey, we're here from the security team. Here's what's important to us, here are the risks that we're trying to manage. I want to learn more about what you do and the tools that you use and how you work, so that I can manage these risks to the business and also help you be a happy developer. And what's interesting is, again, when you look at what we're already seeing with the tools that are out there today, the feedback on things like GitHub Copilot is not just that it's making developers more productive or helping them get to better security outcomes, but that they're also happier. Like, developers will actually say that they are happier in their jobs as a result of having access to GitHub Copilot as their AI pair programmer. So back to your question: if you can have better security, if you can have better developer productivity, and everybody's happier about it, and this is a product of communicating about what's important to everybody and how the tools fit together so that everybody gets done what they need from their respective stakeholder positions, this is a great outcome, because it means you're all on the same page and everybody's rowing the boat in the same direction. So again, I think that all goes back to just communication. And I think a modern team in security, independent of AI, has to be communicating with their developers, figuring out how to work together, and generally being on the same page about what's important. Because again, I think anytime you get to a no, there's usually something that was missed there well before that.
[00:36:22] Speaker C: So in terms of moving forward, do you have any sort of hypothesis around even the next twelve months? As you mentioned before, things are, you know, changing at a rapid speed; in the last 15 months we've seen massive change, even with ChatGPT emerging and becoming more ubiquitous. What are your thoughts then, if in twelve months we come back and do another interview, on the whole AI game and then the shift left, just purely based on your experience, and any sort of insights that you have to leave our audience with today?

[00:36:51] Speaker A: Yeah, I mean, I think the grand challenges that I'm interested in are some of the at-scale problems that AI will lend itself to being more helpful on than the solutions that we've had in the past. And an example that I frequently point to is that DARPA announced at Black Hat this past summer here in the States, in August, that they were going to have their AI Cyber Challenge, looking at how AI models can help solve large problems in software security, like, for example, how do you help make a big dent in the corpus of technical debt that exists in the open source software that we all depend on. I'm excited to see what comes from things like that, where we're taking big bets at, okay, we've created these great individual developer experiences in tools, but what if we stomp out an entire class of problems through a bigger industry effort? That would be really interesting and exciting. And I think if you project forward from what we know over the course of the last, say, two years, where we've seen advancement in models, we're seeing lots of specialized models coming out now that are small and built for particular purposes, we're seeing more and more options become available, we're seeing more and more tools become available. A year from now, I would love to see some actual meaningful progress, where we see some real breakthrough opportunities to fix some of the bigger software security challenges at scale. Whether it's fixing a bunch of projects, or whether it's a dramatic improvement, again, in a pair programmer like Copilot, through model advancement or through advancements in the experiences, I do think that by the time we have this conversation again a year from now, we'll have made meaningful progress on some of those items, and I'm really excited to see which ones come to fruition. But again, when you see things like DARPA putting a challenge forth and saying, hey, come solve a really meaty problem with AI, the pace of progress that's happening in the industry right now suggests that we will probably see some solutions to some of these big problems in the next few years, if not the next few quarters. And I think there's never been a better time to be in security, because, buckle up, it's going to be an exciting next couple of months and next couple of years for all of us.

[00:39:01] Speaker C: So, Mike, really quickly, do you have any closing comments or final thoughts?

[00:39:04] Speaker A: Yeah, I think my advice is, if your organization is asking about AI today and what you should be looking at: just be open. Go learn. Talk to people who are in the space, who are already doing it. But ultimately, talk to your lawyers, talk to your IT practitioners, talk to your finance folks. Figure out what's important to your organization. Try to find that opportunity where you might be able to show how AI can help make your organization better.
Specifically, if you're listening to this podcast and you're in security, find some opportunities to make your developers happier and to make your security team happier. There are plenty of wins, I think, at the intersection of that, and experimenting with those, I think, could be a great project for you in 2024.

[00:39:45] Speaker B: This is KBKast, the voice of cyber.

[00:39:49] Speaker C: Thanks for tuning in. For more industry leading news and thought provoking articles, visit KBI Media to get access today.

[00:39:58] Speaker B: This episode is brought to you by MercSec, your smarter route to security talent. MercSec's executive search has helped enterprise organizations find the right people from around the world since 2012. Their on-demand talent acquisition team helps startups and mid-sized businesses scale faster and more efficiently. Sign up at [email protected] today.
