February 18, 2026

00:52:55

Episode 355 Deep Dive: Sam Cummings | Will we see current LLM technology reach its limits in 2026?

KBKAST

Show Notes

Samuel J. Cummings III is an award-winning data scientist, keynote speaker, and renowned thought leader in AI, specializing in complex reasoning and memory architecture. In his recent work he has created an AI model architecture that uses 94% fewer tokens than standard LLMs. As Director of Education at Gen AI Works, Sam brings over a decade of expertise in AI and runs a podcast called Gen AI Talks.

In this episode, we sit down with Sam Cummings, Director of Education at Gen AI Works, as he explores the current and future landscape of large language models (LLMs) and their impact on cybersecurity. Sam unpacks the technical and economic limitations of LLMs, highlighting issues such as model cost, scalability, hallucination, and the looming challenges around reasoning and memory management. The conversation delves into the shift from universal LLMs to specialized models, the inevitability of market monopolization by big tech firms, and the environmental cost of massive data centers. Sam also paints a vivid picture of the "arms race" in the cybersecurity sector, predicting a boom in both offensive and defensive capabilities powered by AI, and offers actionable insights for professionals and entrepreneurs looking to thrive in this rapidly evolving environment.


Episode Transcript

[00:00:00] Speaker A: Large language models are talking models. So when you work with one, it has to essentially speak in its own way, whether that's writing text or creating your next email. It doesn't have an understanding. And so you get these hyperbolic performances where it scales. And that's the big thing that really, from a physics perspective, is underwriting this problem. If I want to do reasoning today, the more complex it's going to be, the more I want it to think, the more I'm stuffing into that text box. That's the reality of this problem, because the solution of talking your way to understanding has its functional limits. [00:00:45] Speaker B: This is KBKast. As a primary target. [00:00:49] Speaker A: For ransomware campaigns, security and testing, and performance and scalability. [00:00:54] Speaker C: We can actually automate that, take that data and use it. Joining me now is Sam Cummings, Director of Education at Gen AI Works. And today we're discussing whether we will see current LLM technology reach its limits in 2026, and if so, what's next. So, Sam, thanks for joining me, man, and welcome. [00:01:17] Speaker A: Honored to be here. Shout out to the entire audience. I've seen so many other great conversations on the KBI Media channel, so I'm excited to be here. [00:01:27] Speaker C: And just for some quick context, I met Sam at Oracle AI World and I thought, this dude is such a high-energy dude, we're going to be friends. So here we are doing the podcast. Now, you've got a cool background, you've got a lot going on, and I wanted to bring you on the show because you have a slightly different perspective on certain things, and I like your approach and your thinking. It's very modern. Okay, so Sam, I just need to ask straight off the bat: do you think we will see LLMs reach their limits? [00:01:59] Speaker A: The question of the decade? Well, I'll give a little bit of background.
My experience coming into this space actually starts from an industry we've all been a part of, perhaps without knowing it: customer success. Why is this important? As we're all customers of products and services throughout our lives, the ability for companies to engage people and really drive that experience has evolved a ton over the last 40 years. The idea of selling software is, you know, timeless at this point. We've all been in the era of software for decades, but being able to do that through a subscription has really changed how we do commerce across the globe. Whether it's your cell phone or, for those of you watching or listening, Netflix or any other service, so many of the services we use today have a subscription. That created a boom in an industry called customer success: since you have a subscription that you're going to be paying every month or every year, I have to make sure you're happy, I have to make sure you're going to continue to do business with us. And that's created demand and pressure on businesses to create automation and, primarily, to engage people in personal ways using data. That initial impetus, that initial goal, has transformed how we do business. Companies monitor how you engage with their products, how you speak about their brands, and they incorporate that into how they communicate. Where that ties to LLMs is that before the boom of 2022, when ChatGPT came out, much of the research and technology around reasoning and how we store memory had been tackled by the marketing and customer success spaces for years. So I have a great perspective to share with you all. I've watched an industry completely shift multiple times, and the good news is we're right at the front of one of those shifts now. [00:04:02] Speaker C: Yeah, okay, this is really interesting.
So you mentioned before, and you're right, back in the day we didn't have subscriptions, and now these businesses, Netflix and friends, obviously have to keep stuff coming out to keep the subscriptions, the money, coming in. Right? So would you say that because of the automation, people have to ship stuff faster than ever before? Do you think that mentality of keeping the customers who pay us each month, because it is month to month, has created an interesting time where we're skirting around security, skirting around doing things perhaps the right way, in order to ship faster? Because if we don't, like we're already seeing now with AI and everything else coming into the fold, people are dislodging their competition and just leaving them in the dust. We're seeing that happen. What are your thoughts on this? [00:04:58] Speaker A: This is an opportunity for everyone, specifically in security. Going back to what I shared about reasoning models: the idea of being able to process data and make decisions had been brewing before ChatGPT released. What ChatGPT and tools like it, the large language models, really opened up is the fluidity of those use cases. I could now process a large corpus of information and turn that into additional insights I can use, without having to rely on what in the past would be heuristic rules. Meaning: if they use the word "happy", mark their sentiment as positive; if they say "I'm frustrated", mark it negative. These were fragile systems. Large language models are more flexible in how they're able to label, how they're able to find insight and perspective. But they hallucinate.
And so what this has meant, I think, from a security perspective, is there's a whole suite of security use cases that were once hard, once fragile, once heuristic, that now have more fluid LLM approaches. We've seen it in security footage labeling, where I can have a security camera monitoring and then have the system label the things in the image. Before, that would have taken a lot of engineering; I would have had to create models and structures for that. But today I might just use a reasoning model. Here's where it gets interesting, though: cost is the main barrier to applying modern AI in security. If I wanted to build more monitoring security services, the amount of data I'd have to pass through a large language model makes most use cases not viable, cost-wise. And what I'm excited to share with you, and really bring you into, is: what does that mean in a world where that cost doesn't exist? Security use cases that were not viable or not feasible are now in the game. [00:06:59] Speaker C: Okay, so hang on, just before we move on, what do you believe the limit is? How do we hit the limit? Where's the limit? What are we dealing with here? [00:07:08] Speaker A: So the problem has layers to it. There's lots to dig into, and I'd love to bring the audience into the initial shore of that broader ocean of conversation. The key is that a model's ability to reason and have memory dictates its capabilities. When we talk about AGI as a concept, artificial general intelligence, it's the ability to tackle a lot of different scenarios with the ability to reason and have memory so that you can solve problems. But in modeling today, there's a cap that has to do with how much context a model can reasonably manage at one time. Couple that with the fact that it costs you money to reason: as these models generate what they think, as they process things, there's a token cost to that.
I'm paying a model to process a job, whether that's reviewing a picture, reviewing text, or, you know, we've all used ChatGPT to rewrite an email; it's those tasks that cost today. So if I'm doing a security use case, let's say I want to monitor my servers for traffic consistently, every day. Do I run a ChatGPT model every day across all of my server traffic and behavior? That would be so costly you couldn't afford to run it all the time. So there are certain limits that, when we think about cost, are really associated with that problem of reasoning. And LLMs being at their limits means there's not much more we can do, in a general way, to make them less costly, because the broader market is building data centers all over the world, and they're looking to make it so that, as a consumer, you can just send more information. Here's the catch: more context doesn't mean better performance, because as the models get larger and larger, they hallucinate more; they get confused in the middle of the job more. These are real pressures, and we can't just put more data into this. ChatGPT can't get two times more Internet data, so it's not like we can data our way out of this. The way most companies are tackling it, what you're going to see, is specific use case models. We have a model that can code better. We might have a model that's better at creating processes and system design specs. These are specific jobs. But in terms of universal ability for these models to perform better, we're at a point where the current architecture of LLMs is not how we can really get magnified gains. And shortly I'll share some of the ways we will. [00:09:54] Speaker C: So what someone described to me recently, just to paint a bit of a picture of what you're saying:
It's like if you take an image and photocopy it, and then you photocopy the photocopy, eventually it just becomes really bad. And that's sort of a parallel to what you said before around hallucinations with LLMs, right? People assume that if there's more data being ingested, we're going to get better results. But when you look at even the media work I'm doing, and I've been reading a lot more about this, it's like, well, actually, ChatGPT and friends need companies like ours to keep going, because we're the ones actually manually doing the fact-checking, right? Because if you're trying to create something from something fabricated or nonsensical, like the photocopy example, it's just going to get worse. So help me understand this a little more, because perhaps people are confused: we've got more data, therefore more data means we're going to get better answers? [00:11:00] Speaker A: An age-old problem. Here's where we can move on from the core framing of this; think about it a little differently. Large language models are talking models; they talk things out. So when you work with one, it has to essentially speak in its own way, whether that's writing text or creating, you know, your next email. It doesn't have an understanding. By its nature, it's a blind, what they call stateless, approach. Meaning when I post something in ChatGPT, it takes whatever it has there and then answers me from that one moment. When I use tools like ChatGPT and toggle something like deep reasoning mode, what it's doing is thinking, then taking the output of that thought and re-importing it back in with the new step.
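(Editor's aside: the stateless re-import loop Sam describes can be made concrete with a small sketch. The token counts below are invented for illustration, not real pricing; the point is only the shape of the growth when every step re-reads everything produced so far.)

```python
# Illustrative only: made-up token counts, not real ChatGPT pricing.
# In a stateless loop, each reasoning step re-ingests the full transcript
# so far, so total tokens processed grow quadratically with step count.

def stateless_tokens(steps, tokens_per_step=500):
    """Total tokens processed when every step re-reads the whole transcript."""
    transcript = 0
    total = 0
    for _ in range(steps):
        transcript += tokens_per_step  # the transcript keeps growing
        total += transcript            # and is re-read in full each step
    return total

for n in (2, 4, 8):
    print(n, stateless_tokens(n))
```

Doubling the number of steps here more than triples the tokens processed, which is the compounding effort Sam goes on to describe.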
And so you get these hyperbolic performances where, to do two steps of work, it's about four times the effort; when you go to six steps it's eight times; when you go to eight steps it's sixteen. So it scales. And that's the big thing that really, from a physics perspective, is underwriting this problem. If I want to do reasoning today, the more complex it's going to be, the more I want it to think, the more I'm stuffing into that text box. Just imagine if you posted a question to ChatGPT, took everything it said to you plus what you want to say next, and reposted all of that back in. How long does that have to get before it's too long for the model to process? That's the reality of this problem, because the solution of talking your way to understanding has its functional limits. What you need instead is a world-building model. These are models that work fundamentally differently, in that they're not just talking; they're trying to understand the world and visualize it. They're thinking about what this should be, or what could come next. And that opens a whole new paradigm in reasoning that, again, we're going to see across manufacturing, robotics, and self-driving cars. It's these models that will let us go further, and indirectly impact security and what's possible. [00:13:25] Speaker C: You raise a good point around, you know, how long does it have to get? So then, how long would it have to get? Because I've noticed, and I've got the premium version of ChatGPT, it doesn't say this, but it contextualizes. Like, four weeks ago I asked it something and it brings that forward into the answer. Right? So will that get to a point where it just can't handle any more, or is there no infinite scale there, or what's going on? [00:13:51] Speaker A: Yeah, we're currently at that point. And there has been a lot of breakthrough already.
We have done some amazing things to make it possible to upload entire documents into something like ChatGPT. Those gains mean it can understand a lot of things, but its ability to know which of those things in memory are relevant, that's its own burgeoning orchestration layer that's been evolving. Think about it as layers of a cake. There's the underlying model itself. Today's mindset is that the underlying model manages context and manages its memory. When you separate those out, when you separate reasoning and memory from the underlying model, that's where today you're able to really move the needle. Because the system having to manage all these things in context is the real root of hallucination. Hallucination is not a bug, it's a feature, because the creation of language is a mix of randomness, order, and callbacks that make text, or any kind of prose, relevant. You know the colloquialisms: when you read Shakespeare, if you don't have that time context, some of the jokes they're making don't have any relevancy to you. So there's that ability for a model to do callbacks and also to have an understanding of what's going on. If the model itself is the place where you do reasoning and memory management, you're always going to run into a limit. This is the same thing we've seen in computer graphics, where there are graphics cards and then there's your CPU. That separation is the fundamental revolution I see happening. When I say the limits, I mean the limits of being able to run your computer without a separate graphics card, or in this scenario, of doing reasoning without a separate orchestration layer for reasoning and memory. [00:15:56] Speaker C: Okay, so then what happens now, as of today, or as of this interview, with ChatGPT, or OpenAI more specifically? What's the go with these guys? [00:16:07] Speaker A: Yeah, it's fun.
It's fun. So the money burn is going to go on. By nature, most of these kinds of approaches lead to one thing: buy bigger data centers. So we have bigger model optimization coming. But if you look at what's happened even recently, do you think that if ChatGPT had something that fundamentally changed the game, they would be holding onto it right now? I think not. If you look at what they did with ChatGPT 5.2, it was an optimization release, meaning the major feature was that, based on what you type in, it can figure out which model it should use. Should it do heavy thinking? Should it think a small amount? If they had the ability to blow us out of the water, they would. And I'm not saying there's no ammo in the tank; there are upgrades and updates coming that, compared to five years ago, are still sci-fi. But when we look at what is possible, they're going to be continually chasing profitability that will never hit. There's not enough revenue in the market to make up the costs ChatGPT has today under the current architecture. But here's where it shifts: if the cost of reasoning goes down, people use these things differently. And this is where the innovation happens. It's similar to electricity. Right now we have the equivalent of direct current, meaning when things reason, they reason in one direction: the model thinks, takes that result, puts it back in, and thinks again. That's the direct current equivalent. What changed our universe, changed our world, gave us the modern world, was alternating current. And the way the reasoning models I've been working with at the forefront work is that they actually reason, then compress, and then re-inject.
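(Editor's aside: a minimal sketch of that reason-compress-re-inject idea, again with invented token numbers. If each step carries only a fixed-size compressed summary instead of the full transcript, total tokens grow linearly with the number of steps rather than compounding.)

```python
# Illustrative only: invented token counts. A "stateful" loop carries a
# fixed-size compressed summary between steps instead of the full transcript,
# so the cost per step stays roughly constant.

def stateful_tokens(steps, tokens_per_step=500, summary_tokens=100):
    """Total tokens processed when each step reads only a compressed state."""
    total = 0
    for _ in range(steps):
        # each step ingests the compressed summary plus its new work
        total += summary_tokens + tokens_per_step
    return total

for n in (2, 4, 8):
    print(n, stateful_tokens(n))
```

Constant cost per step is what would make the long-running use cases mentioned in this episode, like continuous log or footage review, affordable at all.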
So this core idea, think of it like a piston: it reasons a little bit, compresses that idea and understanding, and then injects callbacks and memory as it needs them for the next step. This is called a stateful engine, meaning every state is an individual moment where it knows where it is in the process. What this unlocks, just like alternating current made it possible to send electricity from one part of the state all the way to someone's home, is that I can now reason for long periods of time at lower cost. That's going to explode the security industry, because use cases that were too costly, like having a large language model consistently read footage from a live stream and annotate what it sees, would have been too expensive. Now you can do it. Bug review tools that review your entire code base every day, five to seven times a day? That would have cost way too much. With that burden gone, that's where these companies become profitable. So there's a battle to be had. Will we get there without some pain? I think not. But the place we arrive at is the same place we arrived at with electricity: cheap, long reasoning capability that unlocks a whole new world of functionality. [00:19:24] Speaker C: So with OpenAI, do you have any numbers around how much it's costing them to run this capability? And the other thing is water, right? Apparently it takes a lot of water as well, which I don't think people factor in. So I'm really curious. And then, to your point before about the cost of reasoning, are they now heavily focused on how to get that down as well? [00:19:50] Speaker A: So there are no market factors to lower the cost of reasoning today. Primarily, it's the typical pattern we've had over the last series of innovations in software, where profitability is reached after the fact. Market share is the game.
So as the core gameplay, the way the market's going to play out is that the big players are going to stick around, because they can just burn cash longer. The path to profitability in today's world is: I'm going to build all the data centers so I have the monopoly, and from that monopoly I can focus on additional optimizations. That is not good for consumers generally, and it is not good for the environment, because huge data centers that are very lossy, meaning they burn a lot of energy and consume a lot of water, are not a good recipe. But the undercurrent to that is you're going to see breakthroughs like the ones I've been able to see in real time. I recently announced, with Google and another company called Fetch AI, my work on reasoning. And like I shared with you, the ability to have cheap, long, deep reasoning is not going to come from those big players; it's going to come from smaller players and from the optimization of individual tasks. There will never be a moment where the money is available for ChatGPT to be profitable in its current state. But once these innovations take place, and those pressures of smaller projects making it cheaper and cheaper take hold, that's going to open the opportunity for many people to create and develop. And again, would I invest in ChatGPT today? Should I have stock in it? Of course; the ship is huge, and it's too big to fail in a lot of ways. It's the same as what Amazon is for infrastructure: everybody who has a ChatGPT-based tool, whether it's reading a video, taking a picture, or reading text, it all goes back to their credits, their token system of tracking how many tokens have gone through. But that is not going to bring in enough money; they would have to do multiple trillions in revenue to make up for those costs.
And imagine, this is just one company of many. The main companies that can burn money like that, for now, are the Googles and the Amazons. What we're seeing with OpenAI is not going to change. So I expect more of the same: the core goal of capturing market share winning out over environmental impact, or even over making the process more efficient. [00:22:36] Speaker C: Okay, I want to get into this a little bit more, and I don't want to miss anything. So will this displace OpenAI, or are they just too big now? They've got the market share, they don't care if they burn heaps of money, who cares, because eventually they'll figure out a plan. Do you see OpenAI as the shark, and then these smaller players will come up, smaller fish that swim beside OpenAI but perhaps build their own little profit centers, while still feeding off OpenAI? Will they become displaced? And do you think Sam Altman thought, I'm going to get to a point where we can't go beyond this? I'm curious to hear your thoughts on this one, Sam. [00:23:22] Speaker A: Yeah, so there are multiple layers to this one. The acquisition marketplace is what we've already seen. ChatGPT is not just OpenAI; it's Microsoft. And when you think about what happened with OpenAI, the same thing happened with LinkedIn. Is LinkedIn still its own entity? Yes, as a property. But it's Microsoft under the hood. And that is, I think, so instrumental for us to understand: we have a consolidation. We are in the middle of a monopolization era in civilization that's been unmatched in hundreds of years. And that's important to understand across all industries, whether it's our food, where every restaurant buys from the same food providers.
When it comes to media, there are a few top conglomerates that run most media companies across the board. This is not unique to this space. So even if the players on the field change, the owners are going to be the same. We're going to see consolidation of cost, meaning the ability to build these data centers today; there's so much backlog that more data centers are earmarked than we can build fast enough. And if you look at what that trend means, there is always going to be pressure on the market such that the big players just acquire whoever comes up. But here's the big thing, a real chart that anyone here can look to for guidance: what is the market share of token consumption that is private versus going through these public providers? What I mean is: when you work with ChatGPT, it takes tokens, meaning the context of the actual text, to do the job. Let's say you wanted it to write an email for you, and that costs 5,000 tokens. Of all the tokens in the world spent on that kind of work, how many go through the pipes of ChatGPT, Google, Amazon, and Claude, versus how many go through local models, or models outside that big architecture? The market share plan that all the bigger cloud companies are banking on is that most traffic will go through the cloud. What I argue will happen is that as we get the innovations we're talking about, models can run cheaper, they can run locally, they can perform specific tasks better, and the global amount of token consumption becomes more local than cloud. That's where the market feels pain. This is the anti-outcome to the business plans of the OpenAIs: they want the world's token consumption to exist within their cloud architecture. But this is the extra hurdle.
The LLM era is now at its peak, at its middle. We're seeing the rise of the world models era. Nvidia just announced Cosmos at CES. This is a world foundation model, the same way we have language models. These world models will be the substrate, the core engine, that powers smart cars, robots, and mechanized manufacturing at a new scale. So if you think of that whole surface area, that's where we're going to see the next big emergence. Now, is it going to be Microsoft that acquires the companies that make that? Is it Amazon? The owners will be the same, but we've got some players on the field that are going to come up, and they're going to be pretty epic over the coming years. [00:26:56] Speaker C: Handling sensitive health data, you already know security and compliance aren't optional. Whether it's ISO 27001, SOC 2, or GDPR, Vanta helps you build trust while staying focused on patient outcomes. Their platform automates up to 90% of the work so you can hit your compliance goals faster and scale safely. Visit vanta.com/kbcast, that's V-A-N-T-A dot com forward slash kbcast, to learn more. Well, what do you think's going to happen now? I've been hearing this a bit in the security space as well: big players, some will come up, and they just acquire. I mean, it's going to get to a point where there are going to be, like, 10 big players, and all the other ones underneath them are a subset of some company but run off the back of them. What do you realistically think is happening now with these businesses? And the other thing that's bothered me a little over the years is people saying, oh, I just want to build a company and then get it acquired. Yes, I understand eventually there has to be some exit, but I do believe that model also creates a lack of innovation.
Because if you're just going to build a company to feed it to the big players, then you're not thinking differently, and you're not really innovating as a result. So what are your thoughts on this? [00:28:19] Speaker A: It's unfortunate, but that's where we are going to be for the foreseeable future. The good news, for someone who wants to invest, or who's wondering where to put their energy and time to be successful in this, is that you will not miss this coming. An example: we are exactly where we were in the Internet architecture era. Right now there are a million apps, but there are only three architecture providers. You're either a Google stack, a Microsoft or Linux stack, or an Amazon stack; your app is built on one of those architectures. So the same way the whole Internet today has three or four providers, with all the casino of fighting for views and fighting for logins, we saw the same thing with social media, where sites like YouTube are the bulk of media consumption. Yes, there are some sites here and there, you know, professionalized industries, but when you break it all the way down, that's what we mean by the owners will be the same. There will always be this consumption layer. It's better for me as a startup to build something I can sell to another company and personally become rich, versus needing to hit the home run of home runs and build a company from scratch to go to the stock market via IPO. You're not going to change human nature, and the good news is you don't have to. The more important factor here is that the actual commodification of reasoning is a really valuable moment in human history. Think about it.
The cost to write text has fundamentally plummeted with LLMs, meaning the ability for anybody to write something is in a whole new place. We haven't fully grappled with that reality, from the knowledge economy to jobs across the board; the dollar-per-word cost of producing prose has gone to pennies. Now imagine what that means for thinking. The way things are set up now, if you want to do complex reasoning, where a system can read a scenario, think, and make decisions, you're talking $20 for minutes, for seconds, of thinking and processing a scene, images, et cetera. That adds up really fast. That cost goes down and reasoning becomes cheap; like I mentioned before, it's the same as when access to electricity became cheap. This is the thing to watch: the cost of reasoning, of memory, and of the ability to apply them. That is what is going to be good for us overall. So will it be a monopoly, like it's always been? Will we have over-monopolization in a really powerful way that makes a few people very, very rich and most people serfs in that world? Yes. We'll have a thousand apps you can use, but only three providers. [00:31:29] Speaker C: So do you think this is a good or a bad thing? [00:31:31] Speaker A: Good and bad are, you know, great conversations for a philosopher; I'll leave that to them. But what I will say is the outcome is pretty assured, and it's that we innovate in this way today. Unless we culturally change, fundamentally, how we approach monopolization, not just in this space but across the board, it's the only way it's going to work. Because of the cost to do this, imagine something like a NASA-level effort to go to the moon: putting energy into a project like that, for these gains or these aims, doesn't have the geopolitical pressure that that type of task requires.
So, for example, the reason to go to the moon was that we didn't want to be the ones losing the global race of military dominance; we didn't want to be the ones incapable of managing that arena or that sphere of influence. Well, in the game of reasoning, where is that kind of powerful pressure for governments to have their own reasoning model? We're seeing it; we've already seen it. I think a really interesting project out of Greece is one called Sophia AI, their first version of a country-level model that has all the information on the history of Greece, the storylines, in the Greek language and writing, and it can be a resource for the civilians. But at the grand scope of solving this problem, the main players are going to be cloud companies. They have the money to burn to build the architecture systems to drive this. Bigger is not better, though, and that's where the green light is. There will be teams that solve specific tasks with smaller models. There will be teams and individual innovators like myself who create smarter and cheaper reasoning architectures that allow for better facilitation. So even though it's going to be monopolized, we'll still be in a place like we are today with the music industry. Anybody can make music; you can find music anywhere, you can download it. There are 70,000 songs made a day. Imagine 20,000 apps being produced a day. There's going to be so much innovation from this that from the consumer perspective you won't really care. [00:33:47] Speaker C: So where does that leave entrepreneurs or people that are, you know, creating a tech startup? Do you see, like I mentioned before, that they'll create something and eventually be bought by, I don't know, Oracle or Cisco, whoever? And then the same old thing just keeps happening like a conveyor belt: they'll just buy the next one, the next one, the next one. Is that what the future of tech startups will look like in your eyes?
[00:34:11] Speaker A: That's what it looks like today, I would argue, and that's what it's been looking like. Salesforce bought Slack, Microsoft bought LinkedIn; if you go down the list, we're already there. So what I'm describing is that the pattern continues and the arenas shift. Right now the arena is language models and language processing; that shifts into world building. The question, more so, is how widely adopted the industries that benefit from or leverage that technology become. So, for example, how adopted does robotics become? Does it stay limited to the arena of manufacturing, or do we have robots in our homes? Do people leverage self-driving cars like we do today, where it's kind of a novelty, or does our entire transportation system become mostly self-driving? Those decisions are the factors in whether these become features of your life or the core fundamental substrate of our life, like the Internet has. But one thing is going to stay the same: core players are going to be the main owners. It's almost going to be like an ice cream shop where you can get 150 flavors, but it's the same base. [00:35:28] Speaker C: Okay, so speaking of shifting arenas, I know we sort of touched on it before, but I want to talk through the boom in the cyber sector, considering it's a cyber podcast. What are we realistically looking down the barrel at? What do you believe? So if you and I do another podcast interview at the end of the year, what do you think is going to come true during 2026? [00:35:51] Speaker A: Can I say something to your audience directly? Y'all my people, cybersecurity folks, y'all gonna be doing really well. Matter of fact, there's a chance that one of you on this call is gonna be a millionaire, maybe a billionaire, coming soon. Here's why. The arms race increase that language models have driven is already known.
We've seen stories about how different nations have now used things like Claude to do cyber attacks. We've already seen an order-of-magnitude increase in the potential of deepfakes, just unprecedented capability. So from a cybersecurity perspective, there's pressure on the arms side, meaning bad actors using these technologies at a rate we've never seen before. And on the shield side, the ability for us to protect, monitor, and evaluate threats and risks faster is going to be unprecedented. So there's no way this market doesn't boom. There's no way you don't benefit from a windfall of new companies that come up and get acquired. Literally, you could work at six or seven companies over the next five years and retire with huge, you know, stock portfolios. Because even if you just play the law of averages, not all of them need to boom; there will be acquisitions all along the way. And that leads to one key thing we've already seen here in the States: the majority of actual stock market performance, a large portion of it, up to 90%, has been AI driven. About seven companies, maybe eight max, are the driver of the entire market; otherwise we would be in a technological depression and, you know, further recession. Why that's important is that when both sides of the pressure, arms and shields in security, are on an upward threshold, there's no, you know, opportunity that's not going to see an impact from this. We're seeing the same thing in the drone industry. The evolution of the modern war machine, leveraging drones and technology, has totally changed how the theater of war works. That's driving a ton of pressure for better drone technology as well as better drone defenses. Imagine that same thing when we have world models.
These tools can analyze your entire code base multiple times every day and look for threats and holes. When people can deepfake a video of themselves calling in, any of those old "show me your face to get access" systems aren't going to work anymore. So this is the arms race opportunity that I see for cybersecurity. You guys are going to have a great time. You already have been winning, I think, and with all the deepfakes we've seen, that's only going to continue to go up. [00:38:52] Speaker C: So then I want to ask a little bit of a left-field question, because I think this is important. It's something younger folks often ask me about: the job market, right? Like, it's hard. Now I'm hearing people saying, it's hard for me to get a job because I don't have any experience, and people want at least one or two years. I don't even have that; I can't get that. And I'm just looking at it neutrally. I know people are saying, yes, but AI creates other jobs and all of that. I'm not negating that. But the part I'm curious about, and want your thoughts on, is what we're realistically going to see. Because I'm still seeing lots of businesses hiring people, and it depends on what companies you're looking up. They're like, oh, but Salesforce laid off all these people, and so did Verizon. Yes, but then they also invested in other areas. So let's talk a little bit more about this, because I think it's important: we're going to need the next generation, and whoever's underneath them as well, to take us through to the end, right? It won't just be our generation. So I'm really curious to hear your thoughts on this, because I know you're obviously out here speaking to people all the time. [00:39:51] Speaker A: Yeah, there's a lot to this. And I'll shout out to my economists; there are so many different works coming out of institutions.
Shout out to Tufts University, Stanford; I mean, just across our entire nation, there is a lot of research going on here. So I can't do justice to that profession or substantiate in detail the thoughts I'm going to share, but I know there's a ton of content and work out there that will substantiate, and also give more perspective on, what I'm about to describe. In the words of millennials, that's my age group, shout out to everyone in my bracket that may be listening, we are for the first time hearing slang that didn't come from us; like, you know, we got new terms out here. Well, the term people are using is "cooked." When someone says you're cooked, that means it's not good. Now, I mean that in a very positive way for a lot of things. One is that the ability for you to make the world bend around your ability to reason and think is completely unmatched. You don't have to go to a college to get the skills you need to be successful in life. Now, shout out to college. I'm a big believer in my alma mater; I went to St. Louis University, I'm a descendant of the Jesuit faith, and being able to go to school and learn and grow with your community is more than just learning the subjects. But what I really want to emphasize with this community is that the same thing that happened in the music industry is happening in the knowledge economy, and that is: it's going to be easier than ever for anybody to make anything. So your moat is not what you can do; it's who you know and your ability to leverage that to an audience. We've already seen this occur, where some of the biggest influence today comes from social influencers. Distribution and community are the moat.
If you literally take a class of students, to make this a simple example: if all those students in the class decide to make being smart cool, and they all work together, they all study together, that whole class can have high performance. Culture, community, and distribution are where, if I was young and coming up, I would put my focus, and through that I would focus on skill building. What I mean is working with the ethers of your time. If I was in the 90s, it might have been the Internet. If I was in the 80s, it might have been mainframes and databases. In the early 2000s, kind of where we've been, it might have been influencing, and today I might work with reasoning models. So there's always going to be a substrate of the time. But what's uniquely different is that the cost of learning and doing is cheaper than ever in the knowledge space, in the knowledge economy. That's an opportunity for you to really focus on distribution and community. Those are the superpowers that let you bring products, bring partnerships, bring brand deals. But listen to this part. What happens when 90% of the Internet is bots? We're at 50% now. Does that hold the same? The whole way that branding works is that I do these things, podcasts and promotions, because I know a human's going to see it. For decades, we've been optimizing for SEO, for what the search engine sees. We're now entering a world where we're optimizing for what the language model sees: positioning your products, branding yourself, being seen to where people make purchases without even thinking. "ChatGPT told me." It's almost like Zeus said so, so I gotta do it. I mean, it's Zeus; how am I to judge? I can't judge ChatGPT. It's all AGI. So that mindset means we're cooked, meaning the people that stay on the trajectory of the whole way we've done things: you're cooked.
But there's a whole other opportunity of lifestyle that's possible, and it's a panacea of opportunity. [00:43:57] Speaker C: This is really interesting, and you're right on the SEO stuff. Someone asked me about web traffic the other day, and I'm like, it doesn't really matter anymore. They're like, what do you mean by that, KB? And I'm like, well, what I mean is that people at a high level are using ChatGPT or whatever they're using to do high-level discovery, then KBI Media will pop up, then they'll go on the site and start to do their deep-dive reconnaissance. So dwell time is more important nowadays than high-level traffic. So are you starting to see, and let's just focus on media for a moment, since you spoke a lot about distribution, opportunities around the lack of trust in mainstream media? You'd see in recent times the White House has wanted to open up to independent media and podcasters, et cetera, to get a bit of variety and not have a lot of their views predicated on these large players. So do you think there are opportunities for independent media folks to get into that sort of space as well? Or what are your thoughts? Because a lot of these big players have entrenched themselves over the years with SEO. But you said before, being big isn't necessarily a good thing, because then they've got to have the knowledge and the velocity to think: we need to start looking at the GEO stuff now. So what are your thoughts on that, Sam? [00:45:22] Speaker A: Oh, this is the fun part that I see in media. We are an industry of waves, so we came out of a previous wave and we're currently in a different trough. What I mean is there was a time when you wanted the authority of the mainstream. The mainstream was a good thing because it meant professional, polished, quality.
We saw that all the way down to American politics, in the type of features and traits we wanted in our president. Not to go deep into politics, because that's a different angle, but it's the same thing: our preferences evolve from our tolerance over time, meaning we'll like something until we don't. We'll enjoy that process and that experience until it reaches this law of diminishing returns, and then we're actually repulsed by it. We see this at the most basic level in fashion. Baggy jeans were cool for a while when I was growing up. Then it became skinny jeans. Guess what most of the young kids in the United States are wearing today? Baggy jeans again. So that same cyclical behavior is not a unique phenomenon. It's how complex systems work: at the root, between randomness and order, things that have an ability to oscillate between states are chaotic systems. And when you think of reasoning, consciousness, society, an ant colony, these are all chaotic systems. So there's a universal substrate driving this. Shout out to all of my philosophers and particle physicists; there are so many people that have a depth of this understanding and can explain more of why we see it. But what you're going to see emerge from this is that the micro influencer is going to be more valuable. People are going to be seeing so much AI-generated stuff, so much overbranded production, that they're going to want something that seems like: oh, I can trust this. Oh, this person is not a shill; they're actually telling me what they think. And so I see the micro influencer, and what I mean by that, since it's a term that has a lot of meanings, is people that have smaller pocket communities with high engagement. If I am, you know, Microsoft, I might want to work with four or five micro influencers, because me working with them will light up their channels.
For all my rap fans out there, I gotta give a shout out to anybody who listens to rap music. You might have known this as the Drake stimulus package. For anyone that knows Drake or listens to his music: there was a time, before the Kendrick Lamar battle, shout out to all my people that know about that, when he would get on a song of a smaller artist and the whole thing would boom. It was like, yo, Drake's on this person's music. So that new person would get the stimulus of being thrust into celebrity. We have that same thing happening in the media marketplace, where instead of just paying, shout out to Gartner, or some traditional media like Newswire, big companies will bring in a bunch of micro influencers in different niches to augment that. Today it's augmentation, but there's a rolling trough to this. We recently saw presidential candidates going on podcasts rather than traditional media. That's a proving point of where we are. So the reason why I wanted to say this for this audience, cybersecurity: you are phenomenal people. You can put together three or four dots and make a picture. And it's important in this understanding to see how this is not something limited to our space. These are macro factors driving across different arenas and spheres that all have the same drivers. [00:49:15] Speaker C: And I wanted to illuminate that, because it's important to see that, yes, it's tech, but it's other sectors as well. And even to your point, look at what happened to Sachs, right? No one probably thought that was going to happen, and it did. So I just think it's very interesting times. I think it's important for people to listen to podcasts and get your sources from everywhere; listen to people like yourself, Sam. So is there anything specific you'd like to leave our audience with today? Because this is such a fascinating conversation.
We went around the world talking about things, but I think this is important because you are really at the coalface, Sam, of doing this type of work. And yes, it's a cyber podcast, but I also wanted to zoom out a bit more and just talk about the game that's happening with AI and what it actually means. So, please, what are your final thoughts? [00:50:04] Speaker A: Hopefully you bring me back at some point so we can come back to these; I'd really love to give you guys a part two with more details. But for this moment: in this seat, you are at the right time, in the right place. The ability for you to work at the core companies in cybersecurity is a privilege, because as those companies get acquired and new technologies come out, you have an opportunity over the next seven to eight years, let's just say ten on average, to work at several companies, a couple of years at each to vest the stock from your package. So for anyone in the working space, I recommend you take advantage of this opportunity, work at four, maybe five companies over the next five to ten years, and get those stock options. That's number one for the average person. And again, I'm not an advisor; I can't tell you that. Shout out to any financial advisor; I'm just a guy. But a pressure that you cannot deny is that the AI boom is going to put pressure on electricity and water. So if you invest in electricity or energy companies and energy portfolios, you're most likely to be okay. Just in general, and again, I'm not a smart guy, all the disclaimers apply there. If you're a younger person, your ability to have a malleable mind that's not anchored on the way we've done things is a superpower. Leverage that; explore these technologies.
The area that I think is of the now, and very valuable specifically for cybersecurity, is advanced reasoning and memory architecture, because the biggest booms in security will come from those areas: the ability for us to have security systems that reason better and manage memory better. So there's a plethora of things you can do, but put yourself in these areas to benefit. For this audience specifically: hopefully you'll look back on this call, and when we chat again at some point you'll be like, yo, that guy Sam helped me out a little bit. You can share a little bit of your winnings with my son's college fund. [00:52:12] Speaker B: This is KBCast, the voice of cyber. [00:52:16] Speaker C: Thanks for tuning in. For more industry-leading news and thought-provoking articles, visit KBI Media to get access today. [00:52:25] Speaker B: This episode is brought to you by MercSec, your smarter route to security talent. MercSec's executive search has helped enterprise organizations find the right people from around the world since 2012. Their on-demand talent acquisition team helps startups and midsized businesses scale faster and more efficiently. Find out more at mercsec.com today.
