Hi everyone, thank you for joining today's webinar, Automating Cloud Disaster Recovery. Before we kick off today I wanted to cover a couple of housekeeping items. At the bottom of your screen you'll find a Q&A box. Please feel free to submit any questions you have during the webinar and our presenters will answer them at the end of the session. The webinar is also being recorded and the recording will be shared with you after the session today. And with that I'd like to hand over to our presenters, Walter Kenrick and Kieran Gutteridge. Great, thanks Hannah. And thanks everyone for attending today. I think we have a really great webinar for you all about leveraging automation in your technology resilience for the cloud. So as Hannah mentioned, we're gonna go over some of the cloud principles. We'll touch upon some insights that we have, just kinda get things kickstarted. Then we're gonna dive into a little bit of the automated provisioning options. We'll get into those in a lot more depth as we talk about automation, where those domains fall, and how you need to attack those different types of integrations and automation for your technology resilience. Coupled with that, we also wanna stress a point around the involvement of people. And so we'll bring that in, and we'll give you some examples of exactly how we're leveraging our AWS platform with automation and how we bring people into the whole process. And we'll round out with some recommendations and a summary, and then finalize with a question-and-answer period. So we've got about an hour today. Why don't we go ahead and get started? Again, I'm Walter Kenrick, I run our product marketing team. And with me is Kieran Gutteridge, our CTO. Okay. All right. So let's just get everybody in the same mind space as we are right now.
If you look at it, if you're on premises, you know where all your apps are located. They're there in the data centers, you know what rack they're in, you know exactly where they're located. But when you get into more of a cloud environment, and especially as you go to more cloud-native types of architectures with containers or microservices, then you have a totally different model of where those workloads or services are located in the cloud, because you're dealing with availability zones, you're dealing with different regions. And so the whole component of leveraging automation becomes a lot more relevant. Right, Kieran? Yeah. I mean, to that point, once you're in the cloud, it becomes a necessity. And as Walter said, it's often the case that you don't know where things are. You've gone from that ability you had with an on-prem application: you know where it is, you know what part of the building it's in, you can even see some of the resilience, like the fire extinguishers, etcetera. But once you move into the cloud, on purpose, you're dealing with layers of abstraction, and you need to understand all those layers of abstraction so that you can move at speed, which is the one thing the cloud does give you. Right, and getting to that productivity, those efficiency improvements that you really need across the entire stack. We found this interesting. So we worked a lot with Gartner, and they actually did a study across several IT leaders on where they were in their phase and process of moving up to the cloud. And so this is how they're framing out their cloud strategies and some of the benefit drivers that they found in this study. And one of the really interesting aspects they did find was that enterprises should really focus on the people and the process over the technology early in that adoption process.
So really make sure that, instead of driving from the IT value up to the business value, you take a different model here. When you're migrating over to the cloud, you really flip to a business-value starting point. So instead of going bottom up, now you're looking at a different model where you start from that business value — maybe you've already got the market share, the business performance — and ask how you bring that down across your people and the various technologies to achieve some of those capabilities and proficiencies that you'd get in the cloud. Some of that is just because certain constraints are different. When you're on prem, spinning up new infra takes time because you've actually got to get physical hardware in place. Whereas when you start thinking in the cloud, the hardware goes away. You can spin things up in any major cloud provider in under fifteen minutes. And so actually your controls need to be more around that — making sure you're turning things off if you're running experiments, rather than how you turn things on. Your constraints become very, very different as you start to operate in the cloud. And it's where, again, you need the automation to do things like remember to turn these services off, or, if you're provisioning databases or copying data, to have controls in place. Simple things like: you might have a company requirement that all data must be encrypted at rest. So you need automation that makes it impossible to spin up a database or data store that doesn't have encryption at rest enabled. Yeah, great insight, Kieran. And we found this other study, which we thought was kind of interesting, just to set the tone. It was done by Harvard Business Review Analytic Services.
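As an aside, the encryption-at-rest guardrail described here is exactly the kind of check you can drop into a pipeline. The sketch below is illustrative only, assuming planned infrastructure is represented as plain dictionaries; the resource-type and field names are hypothetical stand-ins for real infrastructure-as-code output:

```python
# Minimal policy-as-code sketch: fail the pipeline if any planned data
# store is missing encryption at rest. Plan format and names are
# hypothetical stand-ins for real IaC plan output.
DATA_STORE_TYPES = {"aws_db_instance", "aws_s3_bucket", "aws_dynamodb_table"}

def violations(plan: list) -> list:
    """Return names of planned data stores without encryption at rest."""
    return [
        r["name"]
        for r in plan
        if r["type"] in DATA_STORE_TYPES and not r.get("encrypted_at_rest", False)
    ]

plan = [
    {"type": "aws_db_instance", "name": "orders-db", "encrypted_at_rest": True},
    {"type": "aws_s3_bucket", "name": "raw-uploads"},  # encryption not set
]
assert violations(plan) == ["raw-uploads"]  # the pipeline would stop here
```

Run as a gate before `apply`, a check like this makes the non-compliant state impossible to reach rather than something a human has to remember.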
And so they went out to over three hundred business executives, trying to understand where they were in their whole cloud journey. And a couple of the key points they were able to bring out: eighty percent of the respondents say that adopting IT automation is extremely or very important. So they're already starting to see that they need to drive automation into their processes. And then sixty-eight percent of those respondents said that within the past twelve months, IT automation at their organization has really shifted from a nice-to-have to a must-have. And so you can draw the correlations with the Gartner study, people wanting to move to the cloud versus being on prem. I mean, Kieran, I think that's where the must-have probably comes into play. And I think it's really interesting in that nobody argues with this anymore. Everybody knows we should operate some of our apps in the cloud and they need to be automated. The bit where we see customers getting stuck is actually just getting started. Once they get over the initial humps, this goes from a must-have, the mindset shifts, and it becomes a why wouldn't we do it in the cloud? And once you get over that mindset shift, things really start to get that sort of flywheel effect and people move faster and faster. Yeah, and exactly where they need to go. So we did want to do just a quick poll question. If everyone could take a quick moment and select one of the options — we'll give it another second. So where do you think most people are going to land here, Kieran? Are they going to be multi-region, multi-availability-zone or? My bet — I can't vote, but I would say probably two. So yeah, multi-region with multiple availability zones is where a lot of our customer base is and what we would recommend for an individual application. And then as a business, obviously, organizations tend to be multi-cloud.
So I think it'd be an interesting split between two and three. Okay. Alright. So why don't we go ahead and close the poll? We've given people a lot of time to answer. Let's see what we come back with. Waiting for that to pop back up. Okay. Let me share it with everyone. So here, it's kind of interesting, Kieran. A pretty big split across everything here. One of the things that strikes me as a surprise would be the unsures — people not knowing where they are — but it's good to see that there are some multi-region, multi-availability-zone responses. And the multi-cloud is not surprising, but that obviously has some interesting challenges of its own in being able to synchronize data across those different architectures. Great results. Thanks to everyone for participating in that. I think that was very useful. So you hear a lot of stories around shared responsibility. And within the cloud, that's very prevalent, because the cloud providers are only gonna take you so far. Actually, in one of my past lives, we sent a survey out to a lot of respondents asking who manages even things like security within your cloud environment. And it was amazing that a lot of those people came back and said, well, we think it's the cloud service provider. They need to maintain that. Well, that's partially true. They'll do it up to a point, depending on how many services they're actually providing: they'll protect their infrastructure, and if they have a platform as a service, then maybe they'll take it all the way up to there. But it's really incumbent upon you as an enterprise owner, an IT owner, to manage your own workloads and manage security, depending on the architecture, especially if you're using infrastructure as a service or platform as a service. So you can see the responsibility and the ability to customize diminish as you go across the spectrum.
You own it in your data centers, you can do whatever you want, but now, as you're consuming as-a-service offerings, some of the customization and certainly the responsibilities you have diminish, but they don't go away entirely. Right, Kieran? Yeah, and I think that's where there can potentially be a bit of a banana skin, in that we all want to remove problems and specialize in the piece that our customers particularly want. So I think every software organisation is using more and more as-a-service offerings. And you do just need to be careful that you've looked at whether they are critical to your business operation, that you've wrapped your arms around them and you understand their resiliency posture. Because what happens if they go away? What's your backup? What does it affect? And it can be quite easy to forget, because services in the cloud do tend to be up the majority of the time, but it's what happens when they do go away, or when you can't scale them elastically for whatever reason. Yeah. Exactly. So let's just quickly talk about some of the drivers for automating your tech resilience stack. The easy one: when you're trying to tackle the complexity and scale of big cloud computing, you have to deal with horizontal scaling; you're moving from large physical servers to very virtualized, containerized architectures distributed across multiple regions, or certainly availability zones. You need to be able to recover your failed services faster. So even though you're protected to some degree by the different architectures that cloud providers give you, you still need to provision, configure, and test all of that with confidence. But if your application fails, you need to make sure you can recover it, because a cloud provider is not going to provide that level. They don't know your application; they know the infrastructure below and they'll protect that, but you have to build in the hooks to make sure that you can deal with that cloud recovery.
And especially if you're in the financial services industry, you have the whole regulatory aspect layered on top, which makes it even more complex for you. And then certainly being able to reduce operating expenses. If we can reduce the number of manual steps that humans perform — and if you look at where outages come from, a lot of times it's human error — you wanna remove the whole configuration drift and really get to a faster, more proficient way of troubleshooting and knowing where your source of execution is. And that's really what we're here today to talk about: changing the model of how you're going to do your automation in the cloud. And yeah, just to touch on that, on recovering failed services faster: I think that's where the physics in the cloud changes. There's a great paper out there saying that on-prem servers are like pets and cloud servers are like cattle. The advantage of a pet is you probably get some indication, because you're looking at it every day — is it getting sick? Whereas with cattle, you don't. And this is true in the cloud. Your mean time to failure is probably shorter in the cloud; it's just that you can recover at software speed. So you actually need to plan for things going away and make sure that your software is doing its best to auto-heal, because that's something you can do easily in the cloud, but perhaps couldn't do as easily on prem. Yeah, it certainly can fail over faster, and you're up and running, you're still collecting revenue or providing whatever services you need, especially for the business-critical stuff, but you need to get that other side back up very quickly. Otherwise, you're going to be in a lot of trouble, because you can take a double hit. Okay, let's get into a little bit more of the meat and talk about how our automation can assist our customers with their technology resilience.
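The "cattle, not pets" recovery loop Kieran describes — replace a failing instance rather than nursing it — can be sketched in a few lines. This is a minimal illustration only; `check_health` and `replace_instance` are hypothetical hooks standing in for a real health probe and a real cloud API call:

```python
# Sketch of an auto-heal pass over a fleet: anything unhealthy is
# replaced, not repaired, because in the cloud replacement is just an
# API call and recovery happens at software speed.
def check_health(instance: dict) -> bool:
    # Stand-in for an HTTP or agent-based health probe.
    return instance["healthy"]

def replace_instance(instance: dict) -> dict:
    # Stand-in for terminating and re-provisioning via a cloud API.
    return {"id": instance["id"] + 1000, "healthy": True}

def watchdog_tick(fleet: list) -> list:
    """One pass of the auto-heal loop: swap out anything unhealthy."""
    return [i if check_health(i) else replace_instance(i) for i in fleet]

fleet = [{"id": 1, "healthy": True}, {"id": 2, "healthy": False}]
fleet = watchdog_tick(fleet)
assert all(i["healthy"] for i in fleet)  # the sick instance was replaced
```

In practice this loop runs continuously (or is delegated to an autoscaling group's own health checks); the point is that the plan assumes instances will go away.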
So Kieran, I think we want to talk about more of the CI/CD model. I think everybody thinks it's kind of nirvana. We did pull out some stats that came from the Continuous Delivery Foundation: about seventy-five percent of developers are involved in some type of DevOps activity. But what I found really interesting in that study, which was very recent, is that only about forty percent of those developers use continuous integration or continuous deployment — one or the other, but not both. Only about one in five use both of those methods. And I think, yeah, that's why we sort of called it nirvana. Because if you pull anybody aside and say, do you want to get to this perfect state of perfect continuous integration and perfect continuous delivery, everybody will say yes. It's fairly obvious. It's where we all want to get to. And yet over half of the developers surveyed here don't. And it's when you realise things get in the way. If we take a very simple one in modern software development — developing and distributing an application in an app store — there's an app store submission process in the way that is actually manual, and it's out of your control. When you're shipping things into stores, you've got physical hardware that might need updating as well as the software. So there are often those sorts of friction barriers that do get in the way, or mandatory checks, reporting to regulators, security audits. We want to shift them left as far as we can. We're getting better and better as an industry. But unfortunately, we're not at the point yet where pushing code is as easy as turning on a light — just flicking the switch and it's there. There are these friction barriers in the way that we have to get through to deliver business change to our customers. Exactly. So as we kind of move forward — love this picture.
Not sure who actually drew it, but it goes back to the whole nirvana aspect. You spend a lot of time on a task and you're going to automate it. Hey, free time. But Kieran, is that the reality here? Yeah. It's an xkcd cartoon, and it is one of my favorites. You can ask the techie in the room. We've all got that theory that once we grip the automation, life will get easier. And to some extent, it's true. The problem is, once we've automated one piece, we've now got two pieces of software that we've got to maintain and make resilient. So actually that automation takes over — there's only a small bump there. But even if it was perfect and we didn't need to suddenly assign somebody to it, there is a maintenance cost. So really thinking about where you can get your bang for buck with automation is an important point. Yeah, exactly. So as we lean in on that, we're going to talk about some of the cloud domains that really can help with anybody's tech resilience posture in the cloud. And the way we look at it is there are four domains — arguably you could say there are one or two more, but for the most part, these cover the vast majority. Certainly the recovery aspect: being able to practice like you play, you need the confidence going in that you can seamlessly fail over your apps and then get them back. Your application test and validation: even just performing that functional testing, especially when you get into a microservices type of architecture — how do you deal with that end to end? Coupled with that is configuration and provisioning. Where's that golden source of truth? How do you know that, for every service or workload you have, there's no human error in it? And then finally, there's lifecycle management: being able to deal with security patches, upgrades, being able to scale out, things of that nature.
So these are the four domains that we see. So Kieran, there's a traditional way, and then there's maybe a new way in terms of how to look at dealing with those automation domains. So I think the really important point here is you need to make sure that things are repeatable. With the traditional approach we're showing, you launch the application and then you configure it. And if that's being done by a human and you do it a hundred times, let's assume a five percent error rate. So in five out of every hundred releases, or one in twenty, you're gonna have a problem just through human error. As we move to a more modern way of working, your configuration files are the source of truth, and we take that more GitOps approach. The advantage — and I think it's one of those things we developers sometimes forget is a superpower we have — is source control. That ability to see atomic commits and where configuration drift has happened, having it stored in source control, and then having the automation rather than humans take that data and make sure it is the data that's applied. You can see exactly which data set was applied to your application. It just means if you do need to roll back, it's easy — because if you've misconfigured the application, does the application store a log of what its configuration was? Possibly not. And so the ability to roll back quickly suddenly gets a lot easier when you have all of this in source control, because you can see what the drift was over time. And that provides a great learning opportunity for everybody involved as well. Yeah, and it's also just making sure that whatever changes you're trying to apply, you can apply multiple times, and at the end of the day, the result has to be the same.
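That "apply it multiple times, get the same result" property is idempotency, and it's the core of the GitOps approach just described: re-running the same desired state is always safe. A toy sketch, with plain dictionaries standing in for real resources:

```python
# Sketch of idempotent, declarative configuration: the desired state in
# source control is the truth, and converging toward it any number of
# times produces the same result.
def apply(current: dict, desired: dict) -> dict:
    """Converge current state toward desired state (idempotent)."""
    return {**current, **desired}

current = {"replicas": 2, "image": "app:1.0"}
desired = {"replicas": 3, "image": "app:1.1"}  # committed to source control

once = apply(current, desired)
twice = apply(once, desired)
assert once == twice  # re-applying changes nothing: safe to retry
```

Rollback then falls out for free: check out the previous commit of `desired` and apply it again, since the repository — not the running application — holds the configuration history.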
That's what we're really trying to drive here as we move towards this new way of automation within the cloud. And that takes us into: okay, so how do I take that and enable it for my resilience posture? So Kieran, maybe just walk us through — Yeah, so what we're showing here is having that stored in source control, with your pipelines all defined in code. That includes the pipelines themselves being defined as code. But the important bit, we think, is actually bridging that non-engineering gap: if we go to the perfect nirvana of DevOps, that can be great, but it's almost like the lights go off in the room. You know your application can deploy fast, but you don't know how it happens, and it becomes a black box to the non-engineering folk. The engineering folk have that term RTFM, and we need to make sure that doesn't get applied too much, because you do need to understand how your automation is delivering your service — if something goes wrong, or you need to make a change, or something breaks in the pipeline. These pipelines can quite often be brittle if they're very long. Having a task executor that enables you to execute multiple pipelines to achieve your aim means that if one of them goes wrong, you can either do that piece manually or just fix that minor piece of the pipeline, rather than one very large pipeline. Yeah. And certainly bringing in that non-engineering aspect, because that's really where you get more of the creativity. If something were to go wrong, or you needed to deal with management approvals or things like that — how do you involve that non-engineering business person in the whole process? And, going back to the cartoon picture we had — the automation's gonna free me up — you need to have that person, that voice of reason, which automation doesn't necessarily have. Yeah. And as good as AI might be, it's all very linear, even with generative models.
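The task-executor idea Kieran raises — several small pipelines chained together, rather than one long brittle one — can be sketched simply. This is an illustrative sketch, not Cutover's implementation; step names and the fallback mechanism are hypothetical:

```python
# Sketch of a task executor chaining small pipelines: if one step fails,
# only that step needs a manual fallback or a fix, not the whole chain.
def run_chain(steps, manual_fallbacks=None):
    """Run (name, step) pairs in order; on failure, try the step's
    manual fallback — the human-in-the-loop recovery path."""
    manual_fallbacks = manual_fallbacks or {}
    results = []
    for name, step in steps:
        try:
            results.append((name, step()))
        except Exception:
            fallback = manual_fallbacks.get(name)
            if fallback is None:
                raise  # no fallback registered: surface the failure
            results.append((name, fallback()))
    return results

def failing_configure():
    raise RuntimeError("configuration drift detected")

steps = [
    ("provision", lambda: "ok"),
    ("configure", failing_configure),   # this small pipeline breaks
    ("validate", lambda: "ok"),
]
results = run_chain(steps, manual_fallbacks={"configure": lambda: "manual ok"})
assert results == [("provision", "ok"), ("configure", "manual ok"), ("validate", "ok")]
```

The design choice is granularity: because each step is independently retryable and visible by name, a non-engineer can see exactly which piece stalled and a human can step in for just that piece.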
So it might not think outside the box. For something like failovers, if your cloud hosting's gone down, what's the more likely reason? It's more likely to be that an internet cable's been cut to Ireland or somewhere similar. A human might make the judgment call of, well, we'll rent a hotel floor and we'll just set up a data center there by hand. And that might be a recovery plan. It's a hard one. But a computer is never going to think of that, unfortunately — yet. Maybe one day the AI will have learned that. Yeah, there are other things a human can do that a machine would never think to do. Exactly. So if we take that model and apply it across those domains, now we get to something very interesting in terms of how we can not only automate, but achieve some of those operational efficiencies that we want, reduce costs, and really remove a lot of the configuration drift. And this is where I think it really gets interesting, as we can apply those disciplines across different domains within the cloud. Yeah, and I think as you say, Walter, it doesn't have to be all or nothing. You could just have something in the provisioning stage, something in the test phase, or something in the recovery phase. And it's back to that point of getting started. Once you get started, these things will snowball. Look at what's the low-hanging fruit in your process. We quite often see that just identifying the weak link in the process is a useful exercise in itself. Because what you don't want to do — and even for myself as a developer, there's the temptation — is to go and do something that looks like a big percentage win. So we might say, right, we're going to optimise our Cypress or Selenium test suite. And in this flow, maybe that takes an hour, but then we find out our security process takes two days. So I can go and optimize and take that one hour down to one minute, and it's going to be a bit immaterial.
We should go and have a look and see if we can automate something in the security process first, because we're going to get more bang for our buck there. I think it's: find out what piece in your lifecycle is your biggest bottleneck and try and optimise there first, rather than optimising something where you might be able to get massive gains, but it's not that weak a link in your chain. Yeah. And what I also find kind of interesting — maybe, Kieran, you can comment on this — is that engineers and developers can have multiple paths in. So how do you try to contain that and have that single place of execution? How should developers really think about going about this automation? Because one guy could say, hey, I've got this tool to do all my manual tasks, I've got this. Another guy, or gal, says, I have this tool. I see chaos. And I know that's not chaos engineering, but that's what I see. I think — The Inmates Are Running the Asylum is the book on this one. I think where that is, is cross-team visibility. If you go and ask the development team or the business team, everybody actually just wants to deliver something great for their customers. It's having that visibility. Some of this visibility can be a tax for the development team, because if we're making it so that non-engineering folk can view it, there's probably some upfront work your developers need to do, and they need to pay that tax. But it's usually quite an easy sell, I find. If you say to the developer, what would you rather do? Spend half an hour now and make it so that this operation is visible — so maybe our test automation is visible — or do you want somebody at a stressful moment in a recovery or a release tapping you on the shoulder saying, are we nearly there yet? Are we nearly there yet? Are we nearly there yet?
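The weakest-link advice from a moment ago — measure each stage end to end and automate the longest one before micro-optimising — amounts to a one-liner. The durations below are illustrative, echoing the Cypress/security example:

```python
# Sketch of "find your bottleneck first": measure each stage of the
# delivery lifecycle and target the longest one. Durations (in minutes)
# are illustrative, matching the example in the discussion above.
stage_minutes = {
    "ui_test_suite": 60,             # Cypress/Selenium run: one hour
    "security_review": 2 * 24 * 60,  # two days, largely manual
    "deploy": 15,
}

bottleneck = max(stage_minutes, key=stage_minutes.get)
assert bottleneck == "security_review"  # automate here for the biggest win
```

Cutting the test suite from 60 minutes to 1 saves under an hour per release; trimming even a quarter of the security review saves half a day, which is why the bottleneck, not the easiest target, should be automated first.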
And most developers will turn around and think, okay, we'll solve it now and pay the tax upfront, rather than paying the tax when it suddenly gets very expensive at the end. Yeah, that's actually a really interesting comment you made around having that visibility, bringing those stakeholders into the whole process. And I think that's where we go back to what we've been talking about here today: leveraging those humans to fill a lot of the gaps in these CI/CD processes. Because, like we said, fully automated CI/CD is nirvana, but we know that's hard to reach in practice, and you need the people to help with those decision points. And as you were saying around the observability, around bringing the stakeholders in — I think this is the point we've been trying to make: the business requirement of actually delivering what your customers want is usually made up of multiple services, each of which — we've sort of zoomed out now, and we're just showing three here — is working towards that CI/CD nirvana. But then there are the handoffs between them. It's either things within your company, such as change bureaus or similar, that maybe still need to happen because they can't be automated, or things where you've just got regulatory or third-party certifications, like the app store I mentioned, or requirements of reporting to something that's legacy. Big bang events. As humans, we still deal with time a lot. And there are big bang events, or days when people aren't going to be working, like Thanksgiving and Christmas. Working around those things and knowing where they are and where things land is just really useful. And back to that point: you don't have to automate everything, especially if you know you're on a modernization journey or you're going to retire a service or similar. It probably isn't worth automating now.
Go and get the one that's either going to be a long-lived service, or the thing that's going to get you the most bang for the buck long term. Because unfortunately, none of us have a magic wand. We can't automate everything tomorrow. So we do have to make that judgment call as to what we're going to do. A lot of developers out there would just like to code everything up rather than live with the manual step — well, okay, it's manual, it's maybe not so much fun — but the cost of automating it all can be unnecessarily high. Net new code is always exciting for any developer, because there's no legacy to deal with. So steering away from the potentially greenfield stuff and getting into the mucky brownfield is quite often where a lot of the benefit can be for your business. Yeah. So now it's clear that as much as we wanna automate, bringing the people in is gonna be essential to the success of any of these types of programs going forward. So with that, we have a number of recommendations — things that you may or may not think about every day, but that just make sense. As you were alluding to earlier: know your goals. Where are you today? What do you wanna do? And I think that's really essential, because that gets into the whole cost-benefit. Start small, but think big. Take little steps and then continue to build on that. And that falls straight within the whole CI/CD pipeline methodology. Certainly, address the tooling, or have tooling that addresses the aggregation. So don't have different pockets of different types of orchestration tooling; you want something that's gonna bring everything together. You wanna be able to aggregate that.
So all of your tasks, whether they're manual or automated, are combined together. And that couples, as we've been saying, Kieran, with the business and the developer teams being a very strong partnership. And together they need to identify the functional requirements, whether in the management or the resilience of their cloud journey. They have to have that common — maybe you can comment on this one — that common template repository, whether it's in Git or some other kind of source control. I just don't see how you can be successful if you don't start there and then drive through. I think it's having a common template repository, or a good way of working, or tooling that helps everybody. One, it enables the teams to see: why is that team going quickly? Oh, it's because they've templated this process. Being able to share that enables another team to get going quickly. And as we sort of said, it's really right to start small as well. This thing builds a snowball effect. There's very much a temptation to think that the way to get this done is to bring in a new team, bring in a new tool, or bring in outside help. But actually it's a combination of all of those things, because you just need to get going. And once you get going, we regularly see a snowball effect. That template repository is just a really good way of having shared learnings that people can improve on. If you don't have it, what tends to happen — and we've seen this with companies adopting infrastructure as code, a really common one — is that rather than improving a shared service, if we take that example of a database spin-up service, there'll be one that's bespoke for each application.
So if we had a thousand applications with a database, we'll quite often end up with at least a thousand and one ways of spinning up a database, rather than what we actually need, which is one common piece that does eighty percent of the work, and then maybe twenty percent of manual work on the end of it, rather than trying to get to that perfect one hundred percent automation script. It's probably better as two pipelines: you'd have the common template, and then you'd apply your own template to it. Having that gap between the two just means everybody gets to share the eighty percent of the work, rather than everybody having to reinvent the wheel. Yeah. Exactly. And so that takes us — just wrapping up this section — to needing those automated operations. Your whole goal is to reduce errors, especially around the human side. Think about your resilience strategy as code, and how you drive that not only for your recovery, but for your whole lifecycle management, your provisioning, your configuration, and your test and validation aspects — and certainly make sure you can meet a lot of the regulatory requirements, which are certainly reasons to have it fully automated. And really, if you look at those repetitive processes — deploying code, maintaining canaries to monitor and test those applications, performing that failover and recovery testing — those are essential. I think we covered a lot of that today, which is extremely important as our customers go through their journey of moving to the cloud. And so many of you may not know who Cutover is. We're actually a big provider for cloud disaster recovery and your whole technology resilience stack.
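The eighty-twenty template split Kieran described above — one shared base that does most of the work, with each application layering its own piece on top — can be sketched as a simple overlay. The field names here are illustrative, not a real template schema:

```python
# Sketch of the shared-template idea: one common base covers the ~80%
# everyone needs; each app layers its ~20% on top, instead of a
# thousand and one bespoke database spin-up scripts.
BASE_DB_TEMPLATE = {
    "engine": "postgres",
    "encrypted_at_rest": True,  # company-wide guardrail baked in once
    "backups": "daily",
}

def render(overrides: dict) -> dict:
    """App-specific config = shared template + the app's own overlay."""
    return {**BASE_DB_TEMPLATE, **overrides}

orders_db = render({"instance_class": "db.r5.large"})
assert orders_db["encrypted_at_rest"] is True          # inherited from the base
assert orders_db["instance_class"] == "db.r5.large"    # app-specific twenty percent
```

Because the base lives in one shared repository, an improvement to it (say, a new backup policy) reaches every application on its next render, rather than needing a thousand separate edits.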
We were founded in 2015 with a real focus on operations around the whole IT application estate: IT disaster recovery, like full data center loss, partial loss, cloud migrations, complex releases, cyber resilience. Some of our customers are the world's leading financial institutions, so we're proven within the top three US banks and three of the world's largest investment banks. I think, Kieran, we're certainly very innovative in some of the things that you've developed with your team: being able to codify those relationships between people and applications, to bring them together in one place, to provide that task element, and to provide that automation, to actually collaborate in a systematic way. We're comprehensive. So we're demonstrating that new layer above infrastructure as code to really focus on what happens when, and in what order. I think you can't be successful if you have a mishmash of stuff happening and nobody knows what's going on; basically, that's the old way. You need a structured approach. And then certainly our platform is very optimized. We have an open API stack and a very wide ecosystem of example integrations, whether it's into your collaboration tools, such as Zoom or Slack or Microsoft Teams; into different types of applications, whether you have ServiceNow, Fusion RM, or Jira, or quite frankly any type of SIEM device; and then your whole infrastructure layer, with the likes of Ansible, Terraform, and certainly Git. That's where Cutover comes into play: it provides that kind of middleware layer for orchestration as you're doing your different types of integrations. I think that's an important point, Walter, in that nobody's environment is homogeneous. There's a variety of these tools, a variety of pipelines, a variety of ways of working, and there's not a right answer here.
There's what's right for the particular application or business need, but the resilience aspect is a requirement across the entire business. So having that introspection and a common standard, again, for how we recover things means you can answer the fifty-thousand-foot questions a lot easier and have consistency across your business, because then everybody is starting to have a standardized way of working. So if you're in a shared region within the cloud and that region goes away, everybody knows how you'd enact a failover. And other teams that might be available can think: ah, it's this that they're doing; this is what they mean. They don't need to know the detail if that recovery plan is there. If it's automated, they can kick it off, or it can already be kicking off and they've checked it. Or if it is still manual, there's a chance that somebody else can pick it up and run with the baton, if you've standardized across things. You just need to know how to use the tool and where the data is. Right. And also, as you mentioned earlier, which I think was critical in the automation framework we've talked about here, is really having that visibility through execution analytics. What I mean by that is basically those dashboards, those stakeholder views: what events and tasks are running, how long are they taking, are they running over, or are they within the forecasted expectations? What are your recovery time actuals against your recovery time objectives? Different types of those views. As well as, because we deal a lot with the big banks, regulatory compliance is critical, and having an automated, immutable audit trail that you can just produce, to prove things not only to the senior stakeholders in your organization but also to the regulators, because they don't care whether you're on-prem or not. They're still going to ask you the same questions.
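The recovery-time-actuals-versus-objectives view described above can be sketched in a few lines of Python (the function and field names here are invented for illustration, not any particular product's API): each recovery task records a forecast and an actual duration, and the report flags whether the run met the overall objective and which tasks ran over their forecast.

```python
# Illustrative sketch of comparing recovery time actuals (RTA)
# against a recovery time objective (RTO). Names are hypothetical.

from datetime import timedelta

def rta_report(tasks: list, rto: timedelta) -> dict:
    """Summarize recovery time actuals against the objective."""
    total_actual = sum((t["actual"] for t in tasks), timedelta())
    over_forecast = [t["name"] for t in tasks if t["actual"] > t["forecast"]]
    return {
        "recovery_time_actual": total_actual,
        "met_objective": total_actual <= rto,
        "tasks_over_forecast": over_forecast,
    }

tasks = [
    {"name": "fail over database", "forecast": timedelta(minutes=10),
     "actual": timedelta(minutes=14)},
    {"name": "redirect traffic", "forecast": timedelta(minutes=5),
     "actual": timedelta(minutes=4)},
]
report = rta_report(tasks, rto=timedelta(minutes=30))
```

In this example the overall recovery (18 minutes) meets a 30-minute objective, but the database failover ran over its forecast, which is exactly the kind of bottleneck the dashboards are meant to surface.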
And I think, to the earlier point on the analytics piece, as you say, you can see where your bottlenecks are and how you've improved. Just swapping out a manual task for an automated one, or a better automation, or thinking about how you can parallelize things in the sequence, can really help you reduce that time to recovery so that you can meet your recovery time objectives. Absolutely. And for everyone who wants to go off and learn more about what we've been talking about today, and how Cutover can help you, visit www.cutover.com. There are a lot of items you can read about in our resource library; we have a bunch of videos and talk about some integrations. There's also an ROI calculator you can try. So take your time and visit that, and we'd love to have you book a demo with us so we can show you more. So with that, Hannah, I think we'll pause here; that's the end of the presentation, but we'll take a few questions from the audience.

So there are a couple here. One was, let's see, and Kieran, this is probably one for you to answer, probably all of them will be: would it make your application more resilient to stay with a single cloud provider, or should I use multiple cloud service providers? Yeah, it's a risk-based decision here. I would say for a single application, it's far better to go deeper with one of your cloud providers. So as an example, I think we mentioned it earlier: an application is made up of lots of services. And rather than failing over the whole application, if you can spend the time and go deep and think, how do I fail over the queuing system, or how do I fail over the messaging system, within a cloud provider, you should find, I would think, that that will be faster than trying to fail over between two cloud service providers.
There are also the internal costs: data egress usually costs more, and the sync times between data centers can be slow. So for a single application, I would go deeper. And then, to lessen your chances of all applications being affected and make you more resilient, maybe spread your applications across the cloud service providers. So each application might be in an individual one, but don't put all your eggs in one basket. Yep.

Here's kind of an interesting one that just came across, and I'm gonna paraphrase a little bit. I think this person still has all their apps on-prem. Would what we talked about, the technology resilience automation framework, still work on-prem? Yeah, completely. In my opinion, a lot of what we've been talking about here is process optimization: actually identifying how you fail over your application, who fails over your application, when you fail over your application. They're the same questions you're gonna need to answer in the cloud. All that being in the cloud has given you is the ability to do that faster, in a more software-defined way. But actually, the fifty-thousand-foot questions are the same, and you can get going so that by the time you arrive in the cloud, life will be easier. Yep. Okay.

Then we have one more. It was all around the shared responsibility model: how does that tie into technology resilience and disaster recovery? So yeah, with the shared responsibility model, I think you touched on it earlier, Walter: with the cloud service providers, you're responsible for your application. So at the top of the stack, you have to make that piece secure, and you have to make that piece resilient. A cloud service provider, by design, doesn't know your data. They can see that you're moving data about, but they don't know what.
So you might be able to discard certain things in a queue because you know they're all going to be recoverable as you fail over, and the cloud service provider just can't know that. So that's on you as the application owner. You need to know how your application is going to fail over, and which bits of data you can lose and which bits you can't. I'd always start with the data; that's usually the best place to start. Okay, very good. We had a couple more come in, so catch your breath.

One is, I think the person doesn't necessarily know a lot of the automation tooling that's out there and is doing some investigation. So they were wondering how they can get going. What would be your advice as these customers engage in their cloud journey, trying to become more educated and understand how they can do this with the greatest efficiency? I think the main thing here, if you don't understand the tools, and as we've been stressing, is to have a look at what the weakest link in your chain is. The capital cost of experimenting with these tools is low, and a lot of them are open source, so it's really just your time that the cost is going to go on. The advantage of doing things in software is that it's relatively easy to experiment. I think: don't be afraid to get going; run a couple of small experiments. I think if you ask within your organization, you'll usually find somebody has either wanted to do this or has done it as a side project for whatever reason, and they will suddenly become a champion and educate your org on how to do this.

I think we have one more question. There was a question about the automation tooling and at what stage in the hierarchy it will give them the most benefit, probably if we take it from the top of the stack on down. For me, it's definitely infrastructure as code.
I think adopting that and having reproducible environments quickly just enables the whole business to be more agile, in that you can suddenly experiment on cloud-like infrastructure in a sandbox. And the ability to spin up that sort of infrastructure quickly means, again, that the risk goes down. If it was previously going to take three days or three weeks to stand up representative infrastructure, before it was done with code, you can certainly get that down to thirty minutes or even less, and people will be willing to take that risk, run the experiment, and find out something new. So I think that's the place I would start, quickly followed by automating your testing. Awesome. Well, thank you very much. That was the last of the questions we had right now. I think hopefully people have a greater understanding of how they can leverage automation as they move through their cloud journey, with particular reference to technology resilience. And if there are no more questions, Hannah, I think we're probably good for today. Everyone, thank you for your time.