Episode 293 Deep Dive: David Trossell | How To Move Data Faster Over WANs

KBKAST

Feb 07 2025 | 00:35:50

Show Notes

In this episode, we sit down with David Trossell, CEO and CTO of Bridgeworks, as he discusses the advancement of WAN acceleration technology and its impact on data transfer speeds. David explores the historical evolution from broadband connectivity to modern-day WAN acceleration, highlighting the limitations of traditional WAN optimization methods. He explains how AI-driven parallelization can address latency issues and significantly enhance data throughput across networks.

Additionally, David provides insights into misconceptions about WAN acceleration and optimization, emphasizing the importance of secure backups, air-gapped systems, and the resurgence of tape technology for robust data protection against cyber threats.

David Trossell is a recognised leader in the storage technology industry. He is CEO and CTO of award-winning WAN Acceleration company Bridgeworks, where he holds 18 technology patents. David is also committed to supporting British STEM initiatives and developing technology leaders through UK university and college apprenticeship programmes.


Episode Transcript

[00:00:00] Speaker A: The only way in reality to recover from any cyber attack is having that golden backup, that air gap backup. If you're transporting data and it's valuable, then encrypt it. We don't want to encrypt the data for you. If you encrypt the data back in your systems, you keep the keys back in your system. So they're away from us, they're away from the WAN, they're away from the rest of it. [00:00:27] Speaker B: This is KBKast. [00:00:29] Speaker A: I'll be completely silent as a primary target for ransomware campaigns, security and testing and performance can comply. We can actually automate that, take that data and use it. [00:00:43] Speaker C: Joining me today is David Trossell, CEO and CTO from Bridgeworks. And today we're discussing how to move data faster over WANs. So, David, thanks for joining and welcome. [00:00:53] Speaker A: No, my pleasure. Absolutely lovely to talk to you. [00:00:56] Speaker C: Okay, so let's start right there, WAN acceleration. Walk me through it, whatever comes to mind when I ask you that question. Because I know it's obviously a very big area, widely spoken about, so I'm really keen to start wherever your mind takes you. [00:01:11] Speaker A: I think to understand where we landed up with WAN acceleration, we have to go back to the dot-com era, when we started to have broadband connectivity, which was a huge step up from dialling in on modems every so often. But that also created a need to move data around, and the networks really weren't there yet. We were still down in the megabits rather than the megabytes of data. One of the things you've got to understand there is the restrictions on the technology. It wasn't really until the early noughties that we started to get some dedicated Ethernet connections between customers and telcos and the rest of it, but they were still only 10 Mbps. And we started to see satellite offices appear as well. Again, you know, trying to connect them to the head office is a bit more difficult over a modem. But one of the things that popped up out of that lower bandwidth and those remote offices was a technology called WAN optimization. Now this is quite clever, it's a very good idea. So the idea is you had two units at either end of the WAN connection, and rather than sending all the data across the WAN, what it did was compress it and apply what they used to call deduplication. You would then store that data in a table which you could reference and pull back later, which is very good. It was absolutely an excellent idea, which means you could theoretically run higher speeds over very low network connections. But it did have a limitation. You know, you had to do all this compression, you had to store all the data on either side of it, which was good, but when you want to move up into the megabits and gigabits of performance, it just wouldn't handle it. It just doesn't go that fast. And that was where we were sort of stuck for the time being, until we had a request from one of our distributors that had a site in the north of Sweden and a site in the south of Sweden inside a mountain. So you can imagine what sort of people they were. And they wanted to connect their backup system across that network. They had an Ethernet link, but they still couldn't get the storage area network protocol, which in that case was Fibre Channel, over that link and down into the mountain in the south of Sweden. So he came to us and said, you know, can you do anything for us?
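For readers who want to see the deduplication idea described above in concrete terms, here is a minimal sketch in Python of fixed-size-block deduplication: store each block once, and send a short reference whenever the same block reappears. The block size, hashing choice and table layout are assumptions for illustration only, not any vendor's actual implementation.

    import hashlib

    BLOCK_SIZE = 4096  # assumed fixed block size; real products tune or vary this

    def dedup_encode(data: bytes, table: dict) -> list:
        """Split data into blocks; send full bytes only for blocks not seen before."""
        out = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            if digest in table:
                out.append(("ref", digest))            # both ends already hold it: send a short reference
            else:
                table[digest] = block                  # first sighting: store locally and send the block itself
                out.append(("block", digest, block))
        return out

    def dedup_decode(stream: list, table: dict) -> bytes:
        """Rebuild the original data from references and new blocks."""
        chunks = []
        for item in stream:
            if item[0] == "ref":
                chunks.append(table[item[1]])
            else:
                _, digest, block = item
                table[digest] = block
                chunks.append(block)
        return b"".join(chunks)

    # Repeated content collapses to references after the first pass.
    sender_table, receiver_table = {}, {}
    payload = b"A" * 8192 + b"B" * 4096 + b"A" * 8192
    encoded = dedup_encode(payload, sender_table)
    assert dedup_decode(encoded, receiver_table) == payload

The trade-off David goes on to describe is visible even here: both ends must keep a growing block table, and the hashing and lookup work sits in the data path, which caps throughput as link speeds climb.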
So we took one of our, what we call protocol bridges, and put in between them what we call WAN acceleration. So this is a process, driven by AI, which parallelizes the flow of data across the network. You can't get rid of latency. It's a fact of life, you know. The only way you can reduce latency, which is one of the big impacts on the performance of TCP/IP, is to move things closer together, and that's governed by the speed of light. We used the AI, in conjunction with a programme with Portsmouth University, to develop this situation where we can actually fill the pipe up with data. You can't get rid of the latency, it's just there. And it also manages the packet loss and congestion, and that's where we renamed it from WAN optimization to WAN acceleration. [00:04:31] Speaker C: So talking about latency now, I mean, there's a couple of things in there you said that are quite interesting. I want to get into them, because often, and you would know better than me, David, people complain a lot about latency. And basically what you just said is we won't be able to fully remove that. So when you're speaking to customers and the latency conversation comes up, what are people's main concerns on that front? Anything you can share? [00:04:58] Speaker A: Yeah. If we're using TCP/IP to transport the data, which has been around since the DARPA days in the 70s, it's a great technology because it makes sure every packet gets there, whether it loses it on the way or not. The idea is that every packet will get there in the end. But the problem with latency is that the speed of light is just not fast enough. That might sound silly, but the speed of light is what, 300,000 kilometers per second in a vacuum. But as soon as you squirt that down a piece of optical fiber, you lose a third. So you're down to about 200,000 kilometers per second. And if you think that between London and New York is about 100 milliseconds, that's quite a long time to go out there with your packet and then to get the acknowledgement that that packet has made it to the other end, and for that acknowledgement to come back into London. So really you can only do that round trip ten times a second. So that's the effect of latency. So unless someone comes up with the ability to move data faster than the speed of light, we're stuck with the speed of light. So you have to take another approach, and that is, rather than trying to get the latency down, you get the amount of data you transmit each time up. So what we do with the artificial intelligence is create parallel streams across this network connection. So we fill the pipe up with data. So the first byte's going to take its hundred milliseconds to get there, but all the other bytes are coming along at high speed behind it. So the idea is you move the data as fast as possible across the network by maximizing the utilization of the network. So that's where we've gone from 1 gigabit per second to 2 gigabits per second to 10 gigabits per second to 40 gigabits per second. And we're now trialling 100 gigabit per second transfers. [00:06:50] Speaker C: I want to go back a moment. So you mentioned the amount of data you transmit. Would you say, with your experience, companies don't understand that phrase that you just mentioned, or how to do that effectively to, you know, reduce the latency, for example? [00:07:07] Speaker A: You can't reduce latency. The only way you can reduce latency is to move the two ends together.
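To put rough numbers on the round-trip arithmetic above, here is a back-of-the-envelope sketch in Python. The 100 ms London to New York figure is the one quoted in the conversation; the 64 KiB per-stream window is an assumed illustrative value, not a measurement of any product.

    # Single-stream TCP throughput is roughly bounded by window_size / round_trip_time.
    RTT_S = 0.100                 # ~100 ms round trip, e.g. London to New York (figure quoted above)
    WINDOW_BYTES = 64 * 1024      # assumed 64 KiB of data in flight per stream (illustrative only)

    single_stream_bps = (WINDOW_BYTES * 8) / RTT_S
    print(f"one stream: {single_stream_bps / 1e6:.1f} Mbit/s")   # about 5.2 Mbit/s

    # Filling the pipe with parallel streams multiplies the data in flight.
    # The latency itself is unchanged; only the utilization of the link improves.
    for streams in (1, 10, 100, 1000):
        aggregate_gbps = single_stream_bps * streams / 1e9
        print(f"{streams:4d} streams: {aggregate_gbps:.2f} Gbit/s")

The same arithmetic is why a bigger pipe alone does little for a single latency-bound stream: the window still drains only about ten times a second, however fat the link is.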
That's the thing that a lot of people don't understand. That's why, if you go to New York or London and you see the high speed trading, latency is down into microseconds. So they actually put their computers right next to each other, if you see what I mean. That's the only way you can get latency down. How you can actually get the efficiency of that network link up, say, between New York and London, is to fill the pipe up with data. So you're going to move more data quickly across the network, and you use that parallelization technique to do that. And that's all run by AI. The AI is quite clever, because it will look at the network and say, can I get more on that network, or can I get less on that network? Am I getting packet loss, which could be caused by switches or other people getting on the network at the same time? Do I actually change the size of the packets? Do I make the packets smaller, so if I do lose a packet, I only have a little bit more to reconstitute and push back? So it changes the parameters it has for the network, how the data comes in and how the data flows out at the two ends. Hopefully that makes sense. [00:08:23] Speaker C: Yeah. So, okay, so when you're saying that, would you say that's probably a big misconception then? Like you said, you can't reduce the latency. So would you say people, and when I say people I mean the companies out there that you're speaking to, would their assumption a lot of the time be that you can reduce it, but you're saying you can't? Would that be sort of the main issue? [00:08:41] Speaker A: That's right. You can mitigate the effects of latency, but you can't reduce latency unless you move the two ends closer together. Unless you run things in a very, very straight line, which you can't do if you're going across continents. However, what most people do these days, if they have a poor transfer across their networks, is they tend to talk to the telco first. And the first thing the telco says is, well, you need a bigger pipe, it's going to cost you this much and that will make it go faster. But the effect of latency means you don't get any increase in performance, because it still takes that amount of time to get from one end to the other and back again to get that all-important acknowledgement that you have to have before you can send the next set of packets. [00:09:26] Speaker C: So you said before the only way to really get around it would be to move it closer, for example. But now, with the way the world's moving, with this whole people working from anywhere, traveling, working remotely, that as a solution is probably not the reality, would you say? Because even now people aren't even going to the office five days a week, I mean, in Australia they may be going once if they're lucky. So how do you see this unfolding then? [00:09:53] Speaker A: So, okay, if you're working from home, typically what a lot of people do now is put the work-from-home people close to a cloud. With most clouds, you're fairly close. And if you think about it, if you're using an application in the cloud, you don't really notice the latency. You will do if you're using an application in the UK but the application resides in New York. You will see the effects of it. You see the effect when you're downloading a video as well. You know, the rotational dots that go around, that's the latency coming into effect. It's just trying to pull that data across. So that's how you do it.
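As a toy illustration of the parallelization idea being described, and only that, the sketch below splits a payload into numbered chunks, pushes them over several concurrent streams, and has the receiving side put them back into sequence whatever order they arrive in. The chunk size and stream count are assumptions for the example; Bridgeworks' AI-driven engine tunes these continuously and is not represented by this code.

    import concurrent.futures
    import os
    import random
    import time

    CHUNK_SIZE = 1024   # assumed chunk size, purely for the illustration
    NUM_STREAMS = 8     # assumed number of parallel streams

    def send_chunk(seq: int, chunk: bytes):
        """Stand-in for one chunk travelling down one of several parallel WAN streams."""
        time.sleep(random.uniform(0.001, 0.01))   # variable delay, so chunks finish out of order
        return seq, chunk

    def parallel_transfer(payload: bytes) -> bytes:
        # Sender side: cut the payload into numbered chunks.
        chunks = [(i, payload[i * CHUNK_SIZE:(i + 1) * CHUNK_SIZE])
                  for i in range((len(payload) + CHUNK_SIZE - 1) // CHUNK_SIZE)]

        # Push the chunks down NUM_STREAMS concurrent "streams".
        with concurrent.futures.ThreadPoolExecutor(max_workers=NUM_STREAMS) as pool:
            futures = [pool.submit(send_chunk, seq, chunk) for seq, chunk in chunks]
            # as_completed yields results in arrival order, i.e. possibly out of sequence.
            arrivals = [f.result() for f in concurrent.futures.as_completed(futures)]

        # Receiver side: "de-jitter" by reassembling strictly in sequence order,
        # so the destination sees the bytes exactly as the source emitted them.
        arrivals.sort(key=lambda item: item[0])
        return b"".join(chunk for _, chunk in arrivals)

    data = os.urandom(50 * CHUNK_SIZE)
    assert parallel_transfer(data) == data

In the conversation the stream count, packet size and in-flight depth are described as being adjusted on the fly by the AI rather than being fixed constants like these.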
So in reality, latency is latency is latency. And the only way around it is to use WAN acceleration, which is an intelligent way of filling the pipe up. So where that video that you're trying to download comes across at a slow rate because of the latency, we can probably download it in a fraction of a second, because what we're doing is filling that pipe up with data. [00:10:55] Speaker C: So in terms of other misconceptions people have around WAN acceleration, is there anything else that you can share as well? Because again, you know, doing this podcast, interviewing people like yourself over the years, they start sharing their thoughts and their insights about, oh, this thing that people think they know, and this is sort of the reality of it. So I'm really keen to get into this with you, because you've already shared some interesting stuff so far which I think people wouldn't know. [00:11:20] Speaker A: I think one of the main causes of confusion between ourselves and other products is that WAN optimization is all about reducing the amount of data you transfer across a network. So that is that compression algorithm that they use. And WAN acceleration is different. With WAN acceleration, we don't change the data at all. So we don't inspect the data, we don't change the data. If it comes across in gobbledygook, it goes across in gobbledygook. And if you want to accelerate encrypted data, we don't touch the data, we just pass it straight through. So you're getting a fast throughput through our boxes and out the other end. So that's the difference. That's the way that we can actually increase the performance across the WAN, by moving lots of data in parallel streams, but still with a very low CPU and memory overhead, which makes us very, very light from that point of view. The other thing with WAN optimization is you store the data locally as well as shipping it to the other side. We don't store any data on our product. There's no storage whatsoever. So once that last byte has gone through our machine and out the other side, we don't see any more of it. There's nothing left in the box, shall we say. Which makes it very useful in cybersecurity environments. What we can actually do there is, if your environment is compromised, we can give you an ISO or a new box, and you can load up your configuration from your PC, which you can store on the PC, and then start recovering. It's as quick as that. With all the other techniques, it takes you hours to rebuild the networks, rebuild the deduplication tables, et cetera, et cetera. So that's the biggest difference between us and WAN optimization. It's a lot quicker, it's a lot faster, it's a lot lighter. [00:13:11] Speaker C: And do you think people confuse or sort of think it's the same between WAN acceleration and WAN optimization, from what you're saying? [00:13:18] Speaker A: Oh, very much so. When we first started talking to people across the US and across the UK, the first thing they'd say is, well, okay, you've just got another WAN optimization product. We tried to prove to them that we weren't another WAN optimization product, that we were actually WAN acceleration, but, you know, WAN optimization had the market sewn up at that point. Interestingly, Riverbed's now beginning to struggle with WAN optimization out there, and we are beginning to see large organizations and corporates encompassing our technology because it moves data so fast.
And if you're actually moving your data into a storage location for cyber security, the faster you can move it there, the less chance anyone has of intercepting it, from that point of view. [00:14:04] Speaker C: Okay, so going back to Riverbed, why would you say they're struggling? Is it because they're on this optimization voyage rather than the acceleration? [00:14:12] Speaker A: Yes, that's right. Because the thing with WAN optimization is you've got the local storage, so the more data you pass through it, the more storage you require, unless you cull some of the data in there. But it's still a good technology, you know. For small offices that need to connect to large headquarters, it's still good. But then again, what's happening with the cloud is people are putting the data in the cloud, so you don't really need WAN optimization out there anymore. But if you want to move all your data from one cloud to another, we can do that very, very fast. [00:14:49] Speaker C: So going back to the optimization, do you envision that this will just become obsolete in the future, or the not too distant future, from what you're saying? [00:14:58] Speaker A: It's still encompassed, or should we say embedded, in a lot of SD-WANs now to try and give you that level of performance. But with one of our customers, who wanted to move data from Bangalore to Denver, which is about 230 milliseconds, which is a hell of a long way in time when you think about it, they had an SD-WAN and we bookended their SD-WAN with our products. And that took the time that they estimated to transfer all the data from Bangalore to Denver from nearly a year down to a month, because we can accelerate that SD-WAN. So once a number of the SD-WAN suppliers get hold of this technology, we should see this embedded inside their products to give them that boost as well. Because again, it's not that fast. With SD-WAN you've still got the latency issues. [00:15:48] Speaker C: So you're saying with going down the optimization route, it was forecast to be a year, versus acceleration, which was a month. [00:15:58] Speaker A: Yes. [00:15:58] Speaker C: That's like a 90% reduction. [00:16:00] Speaker A: Yes. People don't believe it when we do it. [00:16:03] Speaker C: So then talk to me a little bit more. You used the term bookend. What do you mean by that when you say that? [00:16:07] Speaker A: Basically we put one of our units between the source and the SD-WAN, and at the other end we put another one of our units between the SD-WAN and the sink. So we're actually accelerating the data across their SD-WAN, and it works very, very well. [00:16:22] Speaker C: So going back to a year versus a month. So when you're speaking to clients, you said before, people don't believe us. I've heard that as well from Tony, obviously he works with you. How do you then start to prove that, yes, we can do this within 30 days? Like, how are you sort of demonstrating that velocity? [00:16:41] Speaker A: It's difficult. So in most cases we do a proof of concept, because people just don't believe what we're telling them. It's like, go away, you're in fantasy land. But if they've got a couple of machines which are running on VMware or Proxmox or something like that, which we can put a virtual instance on, we can connect them up and we can transfer data across the network. So we have a whole series of protocols. So say you want FTP or Google or Amazon or any other cloud provider, we can do that.
So we could go in and out of the cloud or between clouds. One of the tests that we did many years ago was a backup between Adelaide, I think it was, and southwest Virginia. So across the Amazon network we actually did a backup for someone to prove to them how much faster it was. Again, they were on a phenomenal amount of latency. We got it to somewhere like 90 times the expected performance. Same as with a bank in South Africa, a big bank that also had banks in London, where they couldn't get their GDPR data out of their machines and into the bank in London in time not to be fined under GDPR rules. So we came along, and it was a little bit more difficult than normal because of their network, but we managed to make their network fast enough that they were back in GDPR compliance very, very fast. [00:18:10] Speaker C: So then you said before, the increase in the performance. Are there any sort of numbers or stats or insights that you can share when you are speaking to these customers, like for example the GDPR client, when it needs to move that quickly, within a short amount of time? Is there anything you can sort of talk through there? [00:18:29] Speaker A: Typically what we would do is maximize their bandwidth. So whatever bandwidth they give to us, we will maximize it, but with one proviso: that you can feed us the data fast enough. If we tend not to hit the maximum performance of the WAN link, it's because you're not feeding us enough data. And that's because it's going through the machine so fast and the AI is saying, come on, give me more, give me more, I'm running out of data to send. And that's the way it works. At the other end, it will automatically de-jitter, which means that if the packets come in slightly out of sequence, it will reorganize them back into sequence, and out it goes. The whole thing is totally transparent to the protocol you're using to transfer the data. You can have multiple protocols. In fact, there are some customers that have different protocols. The other beauty about it is what we've done now for the cyber security side with our product on WAN acceleration: the ability to turn off the connection through a calendar option. So that gives you that air gap, that very important air gap, between your headquarters and your backup site. And that's key now for a lot of cyber security issues, because where once they used to go after the live data, now they're going after the backup data first, corrupting that, then coming back and encrypting the live data, and then they ask for the ransom. So there's no way back for you. But if you can air gap that technology and air gap that backup, at least you've got a safe copy that no one can get hold of. And that's what we've installed inside our machine now. [00:20:03] Speaker C: Okay, so I do want to get into security, but before we do that, I've got a couple of other questions. You said before the AI is like, okay, you've got to feed me more and more data. Like, how much data does it want to ingest? How much are we talking here? [00:20:15] Speaker A: Well, it depends on the WAN. So if we have a 10 gig WAN, we'll be looking to pull in around about 9.5, 10 gigabits of data. Don't forget, what we've got there is a load of smart buffers. So intelligent buffers: one deals with the incoming data, one manages the data into various chunks to optimize the way it runs across the network.
And then there's another buffer that sends it out on the network. And at the other end, there are the corresponding buffers doing the opposite. So the idea is it manages the flow from the ingress to the egress, so it maximizes the performance all the time. And that's where people see some phenomenal rates. I think one of the very early ones that we did was in America, and that was between Phoenix and Rhode Island. And they wanted to transfer data between their two tape libraries, which they had failed to do until we came along. We put our units in there, and it took the backup from 15 to 18 hours down to 45 minutes. That's the level of acceleration we can give a customer. [00:21:23] Speaker C: So then I want to talk about, you said out of sequence. So you said it jitters, and it's out of sequence. How does something become out of sequence? [00:21:30] Speaker A: Well, don't forget, what we're doing with the data is marshalling it into parallel streams. So the idea is the more parallel streams you have across the network, the more it fills the network up, which gives you more data across the network. But some packets might go a different route, so they will come out, a lot of them, out of sequence. So what we do there, because we're handling so much data, is put these back in sequence. So they came into us in sequence, they might get lost and jittered as they go across the network, but the customer will have it back out in sequence as well. So the whole thing's totally transparent to the two servers, the source server and the sink server. [00:22:08] Speaker C: Okay, so I have a question now about cost. So when people in the world think something's faster, they automatically think it must cost more. Now, I ask this question, and what comes to mind, with the analogy I'm going to use, is when you go to a car wash and there's the express car wash. Somehow that's faster and it's more expensive, because they do it in 15 and not 45 minutes, and you don't have to wait around for a couple of hours while they get around to it. So there is this sort of connotation with something being faster that it's going to cost more. Would that be a fair assumption, would you say? [00:22:40] Speaker A: Yes and no. It's one of those double-answer questions. At the lower end of our product range we're very, very competitive, up until 10 gigabits. Beyond there we're super competitive. But it requires more performance out of your memory, more performance out of your CPU, to transfer this data across. So as soon as we get up to 40 gig, we're into more expensive controllers and CPU and memory systems. As we hit 100 gig, we're again into a lot more memory and more CPU cycles. So there is a limit to how far we can actually go from that point of view. But how valuable is your data? That backup, if that's the only backup you've got and the cyber naughty people have corrupted all your other backups, that one backup is very, very valuable to you. So that's the difference, you know, how valuable is your data? [00:23:35] Speaker C: Okay, so let's now switch gears and flip over to security. So let's talk about backups just for a moment. Would you say, David, in your career there are a lot of companies out there that don't have, I've heard, any backups? Maybe one, or even that's questionable. What's your view then on that? When it comes to backups, the more the better?
[00:23:54] Speaker A: There's, you know, there used to be the 3-2-1 system, but there's 3-2-1-1-0 now. You know, it's the first port of call for the people who want to corrupt your backup. And then they start asking for the ransom money, and people are still paying the ransom money. It's still amazing that people aren't putting that amount of money into their backup systems. Because once you've got an off-site golden copy, you can restore your data. It might take you time, but you're not going to spend millions of dollars paying someone to do it. And we know roughly what part of the world they come from these days. So backups are important. I've spent most of my life in backups, working with tapes, all sorts of bits and pieces. And one of the things that most people fail to do is think about how you're going to recover, rather than how you're going to back up. Because your backup process will determine your recovery process, and a lot of people don't think that way. People just put it up to the cloud or put it off to one side. There's no guarantee you're going to get it back from the cloud. The cloud providers don't guarantee your data will always be available. So it's up to you to make sure your data is available. Again, how quickly can you bring your company back up to strength? That depends on your backup technology and whether you've actually optimized your backup technology, because that optimizes your recovery systems. [00:25:19] Speaker C: Okay, I want to talk about, you said before, people paying the ransom. So what perplexes me, perhaps, is, like you said, there are people out there that are paying these ransoms, and you know, a lot of money, some of which could potentially at times exceed the cost of just doing the backup. So walk me through the logic here. Is it because people didn't get around to it, or we forgot about the backup, or we don't know what happened to the backup, or the guy that was running the backups left 20 years ago? Like, I mean, this happens a lot to people. I'm just trying to understand, because as you put it in that way, it kind of makes sense that, well, you don't really need to pay it then, because you've got the backup. [00:25:55] Speaker A: How often do you miss paying installments for insurance? Backup has always been the Cinderella of any IT system. And people are beginning to put backups into the cloud. Again, how secure is the cloud? Can you guarantee your data in the cloud? And if it's in long-term storage, how long is it before you get it back? You're sitting around waiting. And most people that pay those that demand a ransom don't get all their data back. Very rarely do they get all the data back. So again, they've got to recreate some bits and pieces. You know, typically you should have two backup sites, and if they're cloud, they should be two different cloud providers, so if one cloud provider goes down, you've got the other one. I hear of some people putting backups into a single cloud provider that's got two halls, but if the electricity goes down in both halls or you have a fire in both halls, you've lost your backup. So again, it's about putting your data in dispersed areas. We've done it over 5,000 miles. If you want to, we can do that, and you can do that quite easily with our product.
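As a rough illustration of the 3-2-1-1-0 rule mentioned above, the sketch below checks a hypothetical backup inventory against those five conditions. The field names and the inventory itself are invented for the example; real tooling would pull this information from a backup catalogue.

    # Hypothetical inventory: each entry describes one copy of the same data set.
    backups = [
        {"location": "primary-dc", "media": "disk",   "offsite": False, "air_gapped": False, "verify_errors": 0},
        {"location": "cloud-a",    "media": "object", "offsite": True,  "air_gapped": False, "verify_errors": 0},
        {"location": "tape-vault", "media": "tape",   "offsite": True,  "air_gapped": True,  "verify_errors": 0},
    ]

    def check_3_2_1_1_0(copies):
        """3 copies, on 2 media types, 1 offsite, 1 offline/air-gapped, 0 verification errors."""
        return {
            "3 copies":          len(copies) >= 3,
            "2 media types":     len({c["media"] for c in copies}) >= 2,
            "1 offsite copy":    any(c["offsite"] for c in copies),
            "1 air-gapped copy": any(c["air_gapped"] for c in copies),
            "0 verify errors":   all(c["verify_errors"] == 0 for c in copies),
        }

    for rule, ok in check_3_2_1_1_0(backups).items():
        print(f"{'PASS' if ok else 'FAIL'}  {rule}")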
But it's also about having the data in two different places, having it on two different pieces of media, and having one on site and one golden copy that's perfect, which you've tested, locked away behind an air gap so no one can get hold of it unless you actually have the ability to get hold of that data. So that gives you that depth of resilience in your backups. And if you look at the articles that are going on now in various magazines, everyone's talking about backup, because really, if you don't get the keys to unscramble your data, you're stuck. And that's why it's becoming more and more important. And it's interesting that tape is beginning to come back again, with LTO-9 tapes and the huge capacity you can get on those things now. It's safe because you can export it out of your library, you can put it somewhere else, air gapped. No one's going to touch that. That's the beauty of tape backups. And I'm a big fan of tape backups. [00:28:09] Speaker C: So going back to what you said before, there are companies out there that have their backup in the same cloud. What would be the rationale there? It's just cheaper to do that, rather than, you know, two clouds, which is going to cost a lot more? [00:28:20] Speaker A: So the backup, you think, yeah, you know, well, we'll stick that in as a backup. Here we do three copies. We do a backup to the cloud, a backup to tape, and a backup to tape which we take offline. And we're only a small company, from the point of view of the amount of data we have, so we've got three copies. And that's important for us, because it's all done software-based across that lot. So even a small company can do that. For a large company, like a large insurance company in America, where they were having difficulties completing the backup, and this was a big IBM shop, they were using dark fiber between the two data centers to send the backup data across, and they were never succeeding in getting all the data across. And we were pulled in: you know, can you help us? And we said, well, you've got dark fiber, what do you mean? No, we're just not getting the performance. So we put our product in there and we ran a whole series of performance tests, which gave them much better performance than raw dark fiber. Because when we're talking to storage area network devices like tape drives and disk drives, there are things you can do with the SCSI protocol to improve the performance above and beyond what we're giving you with the parallelization. So it's quite a big step forward. And before we'd finished all the performance testing, they put it into place, because they could see the difference it made. What they were doing to get over this beforehand was a backup to a flash copy on another disk array, then another flash copy to another disk array, and then they would try and back up the third disk array across the site to the other data center, and it was never succeeding. So what they used to do was a backup off that again, put all the tapes in a van and ship them across to the other data center and put them back into the tape library. So this was going on, and all the time the tape drives were underperforming, which caused them to break, and they were breaking tapes and everything.
But because we could use that dark fiber, with our knowledge of the SCSI protocol on Fibre Channel, and then with our parallelization techniques, we could keep those drives spinning and streaming, and their tape usage went down phenomenally and the amount of errors they had on their tape drives dropped right down. So we had a secondary effect, from that point of view, of being able to keep their tape drives streaming nicely. And that was in there for about 11 years before they decided they were going to go to the cloud, which, that's fine, off you go. It's interesting now that most of the cloud providers back up their data to tape. So we just move where the tape goes. [00:31:06] Speaker C: So you said before, it's almost like the pendulum swinging back the other way. So you're saying now more people are opting for tapes? [00:31:12] Speaker A: Oh, yeah. The growth in tape is continuing. [00:31:15] Speaker C: It tapered off for a while when the cloud really emerged a bit more. [00:31:19] Speaker A: Yeah. You know, we used to do backups to tape, then to disks, which is a great idea until you look at the reliability of disk and the reliability of tape. You know, tape reliability is phenomenal. As one large bank manager once told me, once it's out of the tape library and in store, no one can change it. That data is safe. And that was a big thing to me, you know, I hadn't thought about it that way. And that was the way he did it. He exported 500 tapes a day out of his big tape library. It was huge. But he said, they're the key tapes, they go into storage, no one touches them, that's my recovery. With cloud, you've got to make sure the cloud provider's got your data, because they don't guarantee it, and they don't guarantee it's not corrupt either. But if they put it onto tape, then you've got that whole time period for them to pull that back off tape into your cloud instance before you can put it back into your workshop, into your office. [00:32:20] Speaker C: So it's fair to say that we will see a resurgence of the tape era? [00:32:25] Speaker A: Yeah. You know, if you still look at it, my son worked in the tape industry in America. He was saying they were just selling tape libraries. A lot of the cloud companies are using tape libraries. I've heard that they've been designing their own tape libraries. I don't know how true that is, but there are people using tape. It's still the safest way of securing your data. [00:32:48] Speaker C: The only thing is that I worked for a corporation, maybe about 12 years ago, and someone came to collect the tapes and, something happened, a certain number of tapes went missing, never to be found again. I mean, that's obviously probably a rare thing, but I'm assuming that does happen. [00:33:00] Speaker A: Oh, I think one of our resellers, who used to work with a tape encryption company, he went to New York and he found a, let's not say who it was. But he said, yeah, how much is it for a tape? He said $25 for an hour. So, you know, how many tapes do you want to read? And the guy would sell them out of the back of the van, as long as you gave them back to him a little bit later. And that was the going rate, about $25 for a tape. So yeah, there's a big leak there. [00:33:30] Speaker C: So then how would you recommend securing that, exactly?
So someone comes in and does the tape collection, but then, I don't know, they have a car accident, the whole car incinerates, we lose the tapes, or they do exactly what you just said, they stop over and people are buying them. [00:33:44] Speaker A: So if you have the ability to make a local copy, then there's that same ability to make a remote copy without too much impact on performance, and that could be another data center, it could be another office or data center that you own, from that point of view. So if you own two data centers, even though they're 3,000 miles apart, we can still transport that data between them. So you can have a local copy and a remote copy, and the other data center can have a local copy and then send a remote copy back to you. So you can do cross-site replication. It's easy. [00:34:16] Speaker C: So David, do you have any closing comments or final thoughts you'd like to leave our audience with today? [00:34:21] Speaker A: The only way, in reality, to recover from any cyber attack is having that golden backup, that air gap backup. From that point of view, if you're transporting data and it's valuable, then encrypt it. We don't want to encrypt the data for you. If you encrypt the data back in your systems, you keep the keys back in your systems, so they're away from us, they're away from the WAN, they're away from the rest of it, because we can accelerate encrypted data. To us it's all just ones and zeros, and that's the best way of doing it. Then you can have that off-site capability, whether it's in someone else's data center or another data center of yours, from that point of view. [00:35:09] Speaker B: This is KBKast, the voice of cyber. [00:35:13] Speaker C: Thanks for tuning in. For more industry-leading news and thought-provoking articles, visit KBI Media to get access today. [00:35:21] Speaker B: This episode is brought to you by MercSec: your smarter route to security talent. MercSec Executive Search has helped enterprise organizations find the right people from around the world since 2012. Their on-demand talent acquisition team helps startups and mid-sized businesses scale faster and more efficiently. Find out [email protected] today.
