Perfect. Okay. Hello, everybody. Welcome to this BCI-sponsored event, titled "How to automate IT disaster recovery plans with runbooks." In this webinar, hosted by Cutover, we're going to talk about how IT disaster recovery, whether on-premise or in the cloud, is becoming increasingly complex but also increasingly essential. We want to give you a lot of information on how you can leverage automated runbooks to streamline the recovery process, reduce human error, and really mitigate risk. We'll be talking about some of the emerging disaster recovery challenges within your tech stack; the benefits of using automated runbooks as opposed to static plans, whether in Excel or in a Word document; best practices and frameworks across your technology recovery stack; and how you can get better information, real-time visibility, and regulatory audit readiness and compliance within your recovery plans, which is becoming increasingly important. We'll also talk about how you keep all the stakeholders involved, and the integrations you can do across the tech stack. Presenting today we have our Chief Product Officer, Marcus Wildsmith, who will be carrying a heavy load of the content, together with Kieran Gutteridge, our CTO here at Cutover. I am Walter Kenrick, Vice President of Product Marketing. So why don't we go ahead and take it away? Marcus, why don't you kick us off?

Yeah, thanks very much. The first bit, when we look at this, is really trying to understand why this is a challenge, and also why it's a worthy challenge to go after.
What we see today is a huge amount of architectural complexity, in terms of the amount of choice organizations have in what technology they run to support their business, and the fact that there is no one right answer. When you look at the complete stack, there is so much choice between on-premise and cloud and these different ways to operate that you end up with a very mixed picture in how you approach this. Kieran, we've talked quite a lot about why this is important and why it drives fairly complex discussions with our customers about the approaches they're taking. Maybe you could add a bit of flavor, in terms of what we see as the challenge people are dealing with?

Yeah, absolutely. I think the biggest piece, if we look at the bottom, where you have to manage everything: it's the pets-versus-cattle argument. When you're down at the compute, storage, and network layer, if it's on-prem, you've got people who know where the servers are and where the cables are, let alone the software running on top of this. When you look at the other end and you look at a full SaaS service, you could be completely blind as to how it's even being run; you are literally just consuming it as a service. Each has its own benefits and downsides with regard to disaster recovery. If you're naive with a SaaS solution, it might be that your vendor isn't offering any sort of recovery between regions. If their cloud provider went down in one region, or even one zone, you might have a problem. Whereas if you understand where your compute and storage is, you would know whether it's only in one data center or spread across multiple sites.
And the gamut in the middle just means there's a lot of complexity, mainly because you're probably going to have some applications and services that you consume as SaaS, and certainly the majority of our customers have some on-prem applications that will never be moved, for data residency, latency, and all sorts of other requirements. Those applications will stay on-prem.

Yeah. And we obviously sit in a position as a SaaS provider ourselves; we host Cutover on AWS. What we've seen over the last couple of years is a massive increase in interest in how we're resilient and what our processes are. No longer are we seen as a black box, and I think that's the same for any other provider: organizations really care now about their SaaS solutions and how resilient they are. Even though those organizations have no particular control over that resiliency, they still want to understand what the process is. So this leads to quite a lot of complexity. Walter, if you jump on to the next slide. (Oh, sorry about that. There we go.) What we see is that organizations end up with a mixed technology estate. This isn't a view of moving from one to another; it's the fact that it's actually very justified for organizations to choose different solutions and different architectures for different types of application, partly because you're choosing the right solution to solve the right business problem, but also around how you optimize for cost. So what you end up with is a set of services, or applications, depending on terminology, that sit in a number of these different boxes.
And very few organizations would say that they sit in just one. This is also useful for thinking about how organizations transform their technology estate, and Kieran, you often get involved in that sort of modernization. There's definitely a path within this, isn't there, in terms of how applications or services may move through these different options?

Yeah. Take an application we've decided we can move to the cloud, on the theory that that's a way of getting easier disaster recovery, because you've got software-defined infrastructure. You can achieve that either by lifting and shifting, where you continue to use the second pillar here: you're in the cloud, so you're no longer on premises, and you're using some sort of infrastructure as code, the CloudFormations and Terraforms of the world, to produce, if you're moving to AWS, AMIs that are baked and can be deployed into any region. Versus what we see quite a few customers doing, which is containerizing the applications on-prem. So they move right over to the right-hand stack and run their container runtime, and then eventually they just move that runtime to the cloud and run on one of the managed services for running the clusters. And that can lead to a lot of difference in who is now running the platform. If somebody is running that container service, whether it's yourselves running it or you've got it as a managed service, you still need to understand that, and who is going to agree to do that piece, on top of where your application is sitting.
Versus, in that middle column, where maybe you've adopted a lot of serverless technology, it may very much be that the development team or the service team, depending on how you're configured, has adopted the you-ship-it, you-own-it, you-fix-it sort of approach. So you've got a gamut across here, and knowing who and when and how to call people in the event of a disaster becomes difficult, based on what is a quite sane choice at this level. In the event of a disaster, you need to know how and where an application is running.

Yeah, thanks. And I think that then leads on to the stance we take in terms of, well, what are the challenges this creates? The couple of pieces before were really around that architectural complexity that's driving this, and that pushes a bunch of other stuff. You need a different set of skills for those different types of architecture, so you're adding technical complexity, but also team complexity, in terms of who's available to manage and improve each of those environments. And you're potentially then looking at IT DR testing in slightly siloed or isolated ways, because it needs to be different for the different types of architecture. Kieran, any particular ones you'd pull out from here as well?

Yeah. My particular favorite, particularly where runbooks can help, is consistent health checking and automation, because they're quite often linked. If a service is scaling, you might have started off early on, when some things were pets, with daily health checks, and this can be quite a manual process. As you scale, you probably find that the time available to do that manual health check for each pet goes down, and that isn't ideal. So you start to think automation.
When we talk to our customers, everybody wants to automate more, either because it's no longer possible to do things consistently manually, so they want to reduce risk, or they just want to reduce cost and reassign those people to something more beneficial than daily health checks. But it can very quickly become a full-time project, and it can be hard to get going. One of the things we have seen with our customers is where they've gone from daily health checks done manually to using tools like Geneos and Ansible to do the health checks, and then the team that was doing those can spend the time either writing automations or really digging into the false positives those tools might supply. And it helps to have structure around what to do for your automation. One of the things I'm a big believer in, if you're fully manual today, is do-nothing scripting, which just lists the steps that you are doing, and breaking tasks down into smaller tasks wherever possible, so that skilled resources can get going on the easiest piece and identify the piece in the chain that will get the most bang for the buck for automation, either because we're always blocking and waiting for it to happen, or because, once we have done the automation, we can throw more compute at the problem, run more workers, and parallelize more and more of it. I suppose just that piece of getting going is always the hardest part that we see with our customers.

And I'm always very conscious that we don't want to just take our observations from the people we're working with day to day, and I think that gets us on to a survey we recently ran, to check whether the cross-section of what we're seeing with our customers is representative of what's out there more widely.
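The "do-nothing scripting" idea Kieran mentions can be sketched in a few lines. This is an illustrative sketch, not a Cutover feature: the step names and the `confirm` hook are made up for the example. The point is that each step starts as a printed instruction with a pause, and is swapped for real automation one step at a time.

```python
# A minimal "do-nothing" runbook: each step just tells the operator what to
# do, then waits for confirmation. As steps get automated, the prompt for a
# step is replaced with code. All step names here are illustrative.

STEPS = [
    "Check the primary database replica lag is under 30 seconds",
    "Snapshot the application volume before failover",
    "Repoint DNS at the standby region",
    "Run the post-failover health checks",
]

def run_runbook(steps, confirm=input):
    """Walk the operator through each manual step in order."""
    completed = []
    for i, step in enumerate(steps, start=1):
        print(f"Step {i}/{len(steps)}: {step}")
        confirm("Press Enter when done... ")
        completed.append(step)
    return completed
```

Even this trivial structure gives you an ordered, shareable list of steps, which is the raw material for deciding what to automate first.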
Walter, if you could introduce that a little bit more, that would be great.

Sure. What we did was go out to three hundred people across a number of different verticals, predominantly in the US as well as in EMEA, from the CIO down to the people managing the applications and network. We surveyed them across a number of different areas: automation, investment, risk, cybersecurity. We pulled out some of the stats we thought were most relevant for sharing with the BCI constituency. So, in relation to what we talked about earlier, here are some of the risks and challenges for the organization, especially with outdated disaster recovery procedures. This survey is very recent; the report hasn't even been fully fleshed out yet, so you're getting an early view of it. We asked: what are the three biggest risks to your organization from an outdated disaster recovery procedure? Kieran and Marcus, we clearly saw cyber attacks and the increase of vulnerability at the top. I don't think that's a huge surprise, given things like ransomware and DDoS attacks coming up; you need to be able to recover from those. There's a lot that's regulatory. But one area I did find pretty interesting: at fifty percent, respondents chimed in on intensifying problems due to continued failures. I thought it was eye-opening that these organizations haven't really built resilient architectures, and so they do need to spend more time looking at it from the recovery perspective. Anything stand out to you, or any comments you would add here?

Yeah. I mean, I would definitely agree on cyber.
I think the fact that those processes tend to be complex and not automated, and the need to go through a step of isolation and understanding what the attack was, and having a solid response to something that is very hard to automate, because you don't know what it was to start with. That's definitely something we're having a very large number of conversations about, in terms of what it means when you try to apply runbooks and automation to that process. So that definitely resonates; I'm not surprised at all that it came out top. Kieran, any from you?

I think the first two are almost inextricably linked, in that as you adopt software, you tend to adopt even more software. So you don't just need an updated disaster recovery plan; as you've adopted more software, you probably need more disaster recovery plans. And with shared services, and I think this links to the continued failures, CVEs are getting more and more prominent. There seem to be more every day, so your patching cycle becomes continual. If we look at recent bugs and CVEs in OpenSSL and similar, or Log4j, these are common libraries used across many applications, and therefore you really need sight of which applications are affected. Which could potentially fail? How can you ring-fence them? How can you recover them? Quite often the means of recovering them, with microservices, is restarting them and redeploying them in an orderly way. And these things are all difficult: they need organizational change, and they need a lot of coordination, because if it is a shared service, you need to notify multiple stakeholders that you're going to do this, and coordinate them.

Right. Thanks, Kieran.
We also took a look at some of the key barriers to automating disaster recovery in these organizations. Again, we gave them a multiple-choice list of the barriers, or objections, to automation and to moving away from static environments. And what we found, and Marcus and Kieran, I'll ask you to comment again, can be summed up as: people don't really know where to focus their efforts. Which services do they prioritize? And do they even have the skills needed to automate some of those components? Those clearly came out at the top. So I'll ask you to chime in and give a little commentary there.

Yeah, I think number one is my favorite. It links to number two again, but I think it's the technology, and the technologists in the room. There's a temptation, particularly within your development team, to go and cut new code and solve a problem; that's what developers are trying to do all around the world, and knowing where best to focus them is hard. Recency bias is a real problem here: they might focus on the thing that caused them the most pain recently, which might not actually be the best bang for the buck, because it might be something that only happens annually, whereas if they were to automate something that goes wrong daily, they'd probably gain more time. So knowing, and having data rather than gut feel, as to what is going to be the weakest link in the whole chain across your disaster recovery processes is important. Otherwise you might optimize something that individually looks great, because you've taken something that took ten minutes down to one minute, and you think that's brilliant.
But if there's a step in that recovery procedure that is either waiting around for three hours for somebody to pick it up, or takes hours to actually achieve, you're probably better off trying to break that task up and automating there. So having the data available to you, rather than relying on opinion, is really useful.

Yeah. And I would definitely pick up on finding suitable vendors. We typically look at how we integrate with automation tools, and we see quite a mix. We also see quite a lot of change, in terms of organizations deploying a solution and then finding it doesn't scale the way they want, or doesn't solve quite the problem. So when it comes to the topic we're talking about, automated runbooks, having the flexibility to plug into different vendors and not be tied to anything in particular is really important here, because, as we touched on right at the beginning, different types of automation and different vendors work well, or not well, with different types of architecture. It's not necessarily about picking one solution; it's the right solution for the right problem, and the ability to tie any automation around the end-to-end runbook of recovery into those tools, whatever they may be. So interesting data, and it's always good to look a little wider than just the set of customers we deal with day in, day out. Really useful; thanks to everyone who responded. Yeah.
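Kieran's point about data over gut feel can be made concrete: rank candidate automation targets by annual time cost (duration times frequency) rather than by which step feels slowest in isolation. The task names and numbers below are hypothetical, purely to illustrate the calculation.

```python
# Hypothetical task timings. The ten-minute task looks like the obvious win,
# but the annual cost (minutes x runs per year) tells a different story.
tasks = [
    {"name": "restore database",   "minutes": 10,  "runs_per_year": 1},
    {"name": "daily health check", "minutes": 5,   "runs_per_year": 365},
    {"name": "wait for approval",  "minutes": 180, "runs_per_year": 12},
]

def annual_cost(task):
    """Total minutes per year spent on this task."""
    return task["minutes"] * task["runs_per_year"]

# The best automation target is the biggest annual time sink.
best = max(tasks, key=annual_cost)
```

Here the rarely-run ten-minute restore costs 10 minutes a year, while the three-hour approval wait costs 2,160, so the data points at the wait, exactly the kind of result that gut feel tends to miss.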
And I also think, and they're tied together, the buy-in from the leadership and the budgetary constraints were kind of surprising: at least a third of survey respondents selected those. You would think organizations want to cut costs using automation, so they don't suffer the reputational or financial loss they would otherwise have. I would probably have expected that not to be a constraint or barrier, but obviously it is, based on the survey. Which is, to me, somewhat surprising, because automation theoretically reduces costs and gets rid of all those manual tasks. So, interesting data, as you said, but let's move ahead. Marcus, we're going to start talking more about runbook automation.

Yeah, thanks, Walter. So far we've really just been setting the scene around why this is needed and why it's hard, and a lot of that is about the underlying complexity in architecture. What we see is that organizations are really striving to architect resilient applications that will update seamlessly and recover automatically, and there's a huge amount of tooling available that can be leveraged: infrastructure as code, automation, monitoring tools, CI/CD tooling that enables you to take pretty large swathes of complex process and manual effort and get those into code. And we think that's the right target.
But we're very conscious that there remain process gaps that need to be bridged, and that's what this diagram is really showing. There are fantastic solutions, but there are scenarios, such as cyber recovery and unforeseen outages of a cause that wasn't anticipated, that don't automate fully, and those drive the need for a more complex, slightly higher-level process that isn't necessarily a hundred percent baked; you don't quite know what it's going to be. There are also some applications being run that just don't warrant the investment to get to a high level of automation, whether because they're legacy or because of the way they operate. So you end up with a set of things where you still need automated runbooks that bind this stuff together. What we're finding is that those enable technology teams to codify the higher-risk, more complex, and often fragile processes you need to manage the technology estate: recovering from an event that causes an outage, patching rapidly for an unforeseen vulnerability, or the process for migrating between those architectures, particularly migrating to the cloud. So partly you've got these processes that are a little unexpected and need some flexibility. But we also see that those processes don't necessarily run to plan. It's not a case of pressing a button and it just happening seamlessly. There are unknown and unexpected things. So when you look at a runbook that joins this stuff together, you need, essentially, a flexible glue that joins those process steps with the people and the automations that you need.
So that you can execute these things under pressure, but also adapt them quickly. And that's what we see as the focus, why we're running this session, and what we really spend our time on: what is an automated runbook, and how does it solve those problems?

Why don't we get right into that? What we really see here is that you want a set of tasks that you believe is the best path to recovery, one that codifies the steps that need to be taken, both with your teams and with the underlying technology you have in the automations. But it needs to be flexible enough that you can edit it on the fly. You can skip a bit, you can add something new, and do that in a way that's visible and auditable, and essentially quick to respond. That's our view of what an automated runbook is: it provides that flexible glue.

I think, as you're saying, Marcus, it really is that piece around being flexible in the moment. Whilst you're going to have great processes and great automations, in the event of a disaster something will be slightly out of kilter, and you're going to need to adjust, either before you start your recovery, while you're identifying the cause, or in the middle of your recovery when something else becomes apparent. The bulk of the work is jumping between decision points, and it's at those decision points that the question becomes: how do you give a team in a high-stress situation enough flexibility, with guidelines and guardrails to really help them, while at the same time not hampering them with a supposedly perfect pipeline that might be incredibly brittle and only able to respond to predefined inputs?

Yeah. And then when we look at integrations, it's very common to see a whole suite of tools that can be integrated to.
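The flexible runbook Marcus and Kieran describe (ordered steps, a mix of manual and automated tasks, editable mid-run, with everything recorded) can be sketched as a small data structure. This is a toy illustration of the concept, not Cutover's implementation; the class and task names are invented for the example.

```python
from datetime import datetime, timezone

class Runbook:
    """A toy automated runbook: an ordered list of tasks, each either a
    manual instruction or an automated callable. Tasks can be skipped or
    inserted mid-run, and every outcome is recorded for the audit trail."""

    def __init__(self, name):
        self.name = name
        self.tasks = []   # list of (title, action-or-None)
        self.audit = []   # (timestamp, title, outcome) tuples

    def add_task(self, title, action=None, position=None):
        """Add a task; `position` lets you insert a new step on the fly."""
        entry = (title, action)
        if position is None:
            self.tasks.append(entry)
        else:
            self.tasks.insert(position, entry)

    def run(self, skip=()):
        """Execute tasks in order, honoring skips, and return the audit log."""
        for title, action in list(self.tasks):
            if title in skip:
                self._log(title, "skipped")
                continue
            outcome = action() if action else "manual step acknowledged"
            self._log(title, outcome)
        return self.audit

    def _log(self, title, outcome):
        self.audit.append(
            (datetime.now(timezone.utc).isoformat(), title, outcome))
```

Even in this sketch, the two properties discussed above fall out naturally: the team can deviate from the plan (skip or insert a step) without abandoning it, and the audit log is a by-product of execution rather than an after-the-fact reconstruction.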
And what we generally find is there's a level of complexity beyond that. We typically engage with large enterprise businesses, and we rarely find that the tool as you'd see it on the website, where you can read the APIs, is actually what you have to integrate with. For security and scalability reasons, there's often a layer in between. So the flexibility to integrate with something that's probably not the native tool, but has some abstraction above it, is really key to how you build those automated runbooks and the links from them to the underlying tools and technology. Kieran, with your CTO hat on, I think this is pretty common for you, in terms of the way you engage with customers to solve their integration problems?

Yeah. Most of it comes down to auth and auth within their systems, that is, both authentication and authorization. Some of these services might operate functionally, so they don't need an end user, whereas more and more, for auditability reasons and just best practice in least privilege, they operate as a user, and we need to check, at the moment they're about to execute, that the person executing is authorized to do this. And as you say, that quite often leads to there being a layer that engineering might be aware of, but the non-engineers are perhaps not aware that there is this facade, or firewall, or some other restriction in front of the service; they just want the service to run. Allowing the engineering teams to show that there is a tax up front, and to pay that tax up front via a runbook, lets them do it outside the event of a disaster. It's a lot easier to do it in a zero-stress situation than in the moment, in a high-stress situation.
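The "check at the moment of execution" pattern Kieran describes can be sketched as a thin facade that verifies the executing user's authorization before invoking the underlying automation. This is a hedged illustration, not any vendor's API: the action names, role names, and the `execute` helper are all invented for the example.

```python
# Illustrative role mapping: which roles may trigger which action.
AUTHORIZED_ROLES = {
    "restart-order-service": {"sre", "platform-engineer"},
}

class NotAuthorized(Exception):
    """Raised when the executing user holds no role allowed for the action."""

def execute(action, user_roles, automation):
    """Run `automation` only if the user holds a role permitted for `action`.

    The check happens at execution time, so the audit trail records a real,
    authorized user rather than a shared functional account."""
    allowed = AUTHORIZED_ROLES.get(action, set())
    if not allowed & set(user_roles):
        raise NotAuthorized(f"no role in {sorted(allowed)} for {action!r}")
    return automation()
```

Paying this "tax" once, by wiring the check into the runbook step, is exactly the zero-stress preparation Kieran advocates: during a real incident the authorization either passes silently or fails with a clear reason, instead of being discovered as a surprise firewall.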
And the same with non-engineers needing to engage and explain to engineering why this service needs to meet this RTO. That can be more apparent when you've got the whole set of steps listed and you've got visibility, you've got data, as to why you're going to need to eke out five or six minutes here and there, which, looked at in isolation, might not be apparent and might look like work for work's sake.

Yeah. And I think it's easy to understand the challenges, and that the runbooks, and the integration of those runbooks to make them automated, address those challenges. But you only really do that if there's a clear motive and a clear benefit, and we boil this down to four key areas. One is precision orchestration: even though the steps you need to take can't necessarily be a hundred percent baked in at the beginning, and you may need to adapt them, you still need precision in how you orchestrate who does what when, what order it gets done in, and what integration you need to kick off to drive an automation at any point in time. That precision is absolutely key, and we see it as a big outcome. Risk reduction is the second area. If you're relying on more manual ways to orchestrate that process, you're much more reliant on individuals as single points of failure, or points of knowledge, rather than having that knowledge codified in a repeatable way.
So lowering risk, and particularly reducing the risk of making a mistake through manual error, is a big part of this: you remove the cognitive load on individuals and have the best possible path you know of already ready, so people can focus on the tough problems, like what the root cause is, what we're going to do next, and how we need to adapt, rather than on the baseline plan of what you think you should do and how to automate it. So those top two are big benefits. Kieran, I'll let you talk a bit about integration and automation, but at a high level, for me, it's very much that organizations have invested a lot already in automation, but it's sometimes buried, and a big part of this is how you surface that and make it usable.

For me, we quite often see, if we take a service like Ansible that's been adopted widely: the theory would suggest that if we have a thousand apps, and say three hundred of them use a database, we should see a reasonable amount of consolidation, so you'd expect fewer Ansible playbooks than you have apps, because they're operating as a shared service. The practice, unfortunately, differs from the theory, and we tend to see far more, by an order of magnitude, small playbooks relative to the number of applications running. That's usually just because, in a large org, you've got to syndicate that way of working and make it easily consumable to both engineers and non-engineers. The advantage of having a runbook for that is you can look at how other similar services have done it, just by filtering the runbooks and thinking: okay, I'm running a service in an Azure cloud.
How have they hooked up to Azure DevOps, or how are they using serverless functions and similar? And where are there pockets of automation, either for standing up monitoring or for recovering a service? By doing that, you'll make the team's life easier. One of the nice things we heard from a customer is that by leveraging some of the automations that were already there, they could give some of the team their weekends back, which has always got to be a benefit.

Yeah, thanks for that; I completely agree. And then the final bit is around dashboards and audit: having the visibility so that you don't need to devote a lot of time to asking where we're at and interrupting the teams doing the work. You really want that visibility to be seamless and non-interruptive, and also to reduce the amount of work after an event on audit and visibility. Trawling instant messaging and emails and pulling all that data together to create the narrative after the event, what happened and what did we do, is needed for continuous improvement, but also for internal audit and sometimes the regulator. A really big part of this is that if you have those steps codified, and you have a way of capturing what happened when in a dynamic way, you massively reduce the overhead of giving visibility to whoever needs it, either during the recovery or post-recovery. So that's our view of the benefits. And the final bit is really just relaying this back to our view of the world, in terms of what Cutover is and what Cutover does. We are very much focused on providing automated runbooks for technology teams.
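The audit point above is worth making concrete: if every runbook step is captured as a structured event at execution time, the post-incident narrative becomes a formatting exercise rather than an archaeology dig through chat logs. The event fields and values below are invented for illustration.

```python
# Hypothetical structured events captured as a runbook executed.
events = [
    {"at": "2024-05-01T02:15:00Z", "task": "verify health checks", "actor": "sre-team"},
    {"at": "2024-05-01T02:00:00Z", "task": "declare incident",     "actor": "on-call"},
    {"at": "2024-05-01T02:04:00Z", "task": "fail over database",   "actor": "automation"},
]

def timeline(events):
    """Render events as a chronological, human-readable incident timeline."""
    ordered = sorted(events, key=lambda e: e["at"])
    return "\n".join(f'{e["at"]}  {e["actor"]:<12} {e["task"]}' for e in ordered)
```

A report like this, generated seconds after the event closes, is the "non-interruptive visibility" Marcus describes: nobody has to be pulled off recovery work to reconstruct what happened when.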
We see that as a base layer of capability, shown here in blue, with the various things needed around it: APIs and integrations to feed out to other systems; a set of templates, what's on the shelf in terms of how you respond to a particular situation; the ability to execute automatically, potentially taking a trigger from a monitoring system and automatically instantiating and initiating one of those runbooks to respond to that event, inviting and involving the people who need to be part of it; and, as we just covered, live visibility and reporting, an audit trail, and post-execution analysis. We see those as the core components of what we do as an automated runbook platform. But we then see that fanning out pretty quickly in terms of where it can be applied. We've been talking here very much about recovery, the top section, and that has a range of different flavors to it: cyber and disaster recovery (Kieran touched quite a lot on health checks), but also how you look at recovery in cloud when you're failing over between regions, for example. But migration and release are incredibly close relations to recovery: if you're releasing well, with confidence, you're far less likely to cause an incident in the first place and need to recover. That spans upgrades, patching, and platform implementation changes. And the other side is the ongoing evolution of the technology estate through migration.
So how are you moving those services and applications between the different methods you have available to you? This is ongoing all the time through the change initiatives that organizations have. So whilst we're focusing here very much on recovery, we see automated runbooks supporting all three of these, and that's very much our focus.

And then how does that actually translate to benefits? We've done quite a lot of numerical studies and have proven this in a number of US banking customers and wider financial institutions: significant, fifty-percent-level reductions in the recovery time to execute, and a massive reduction in how much time you need for reporting, so you don't tie people up after the event with lots of analysis but have it to hand automatically, straight away. There's also the time to exercise. When anybody runs a recovery exercise, you want to plan it in a way that you know is not going to interrupt normal business, and that planning often takes a huge amount of time and preparation to get just right. Having that codified in runbooks can massively reduce it, and we've seen that go from one extreme, twelve weeks, down to two weeks for planning a major event like a data center failover. So that's what we're striving for: making sure that we're not just solving the problems, but that those solutions deliver real benefits to the organization.

Right. Well, thank you, Marcus and Karen. Before we take some questions, and we have a few in the Q&A box: where can the BCI constituency find out more about Cutover? You can go to cutover.com.
We have a full resource library out there, with lots of examples of what Marcus and Karen were talking about today in terms of how we can help with automated runbooks, and certainly more detail on some of the integrations. We have a full developer API as well that you can go and peruse. And there's an ROI report, as well as a calculator, where you can see some of the value you would get. So that can really help you understand where to leverage Cutover's automated runbooks to standardize and automate the processes that Marcus and Karen were talking about today, to increase efficiency and, very importantly, reduce risk by bridging that gap, being that glue, between your teams and technology.

So with that, let me flip over to the Q&A. I know we have a couple there, so just bear with me for a second. Okay. One of the questions we had, Marcus, Karen, and I think both of you can chime in here: how do Cutover customers keep runbooks from getting out of date? They're all over the place, and it's easy for these things to get out of date. What should they do in that regard?

Yeah. Typically what we find is there's an expected review cycle, and that's set as a kind of SLA that generally changes depending on the priority, or tier, of the service. Often it would be an annual process of attestation. We've essentially built that into the Cutover platform, so that any time a response is templated for an application or service, a review cycle can be associated with it that gets automatically triggered, with the expectation that it gets signed off.
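The tier-based review SLA Marcus describes amounts to a small date calculation. Here is a minimal sketch of the idea; the tier-to-interval mapping is an illustrative assumption, not Cutover's actual schema or default intervals.

```python
# Sketch: tier-driven review SLAs for runbook templates.
# The intervals below are illustrative assumptions, not real defaults.
from datetime import date, timedelta

# Higher-tier (more critical) services get reviewed more often.
REVIEW_INTERVAL_DAYS = {1: 90, 2: 180, 3: 365}

def next_review(last_signed_off: date, tier: int) -> date:
    """Date by which the runbook template must be re-attested."""
    return last_signed_off + timedelta(days=REVIEW_INTERVAL_DAYS[tier])

def is_overdue(last_signed_off: date, tier: int, today: date) -> bool:
    """True once the attestation window has lapsed without sign-off."""
    return today > next_review(last_signed_off, tier)
```

A platform doing this would trigger the review automatically when `is_overdue` flips, rather than relying on someone remembering an annual calendar entry.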
And we see that as very much not being "read a document and click a button": we give visibility of whether that template has actually been executed, and therefore whether you have good reason to be confident that it performed as expected last time and was adjusted accordingly. So yeah, that was a reasonably early request from customers and something we brought in a year or so ago.

Yep, great, thanks. Karen, maybe you can take this one. A question came in, and I'll paraphrase a little: do you see customers failing over from their on-premises data centers to a cloud provider? And the essence of the question is, how can an automated runbook help in that failover model?

Absolutely. I think that's one of the advantages of the cloud: public clouds tend to be elastic, and you're only paying for what you use, so they can be a great way of having something standing by in the event of a black swan event. There are various services that can help you do this as well, the likes of Zerto, CloudEndure, AWS Elastic Disaster Recovery, and a multitude of others, that give you bit-level replication of your on-prem data center, so you have something off-site and distributed away from your normal sites that can be rehydrated in the event of a disaster. And having an automated runbook to trigger that is the sort of thing you can pre-can, so that you get visibility that you're going to fail over to what is effectively another data center; to the software application, it hopefully doesn't matter too much, depending on how you're architected. And then, importantly: how do you fail back?
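A pre-canned failover runbook of the kind Karen describes can carry its fail-back steps with it, so fail-back is planned up front rather than improvised afterwards. The sketch below is purely illustrative: the step names, regions, and the assumption of a storage-replication service are placeholders, not a real recovery plan.

```python
# Sketch: a pre-canned regional failover runbook that includes fail-back.
# Step wording, regions, and the replication mechanism are placeholders.

def make_failover_runbook(region_from: str, region_to: str) -> list[str]:
    """Ordered steps for a regional failover, with fail-back built in."""
    return [
        f"verify replication lag {region_from} -> {region_to} is within RPO",
        f"promote replicated storage in {region_to}",
        f"repoint DNS / load balancers to {region_to}",
        f"run health checks against {region_to}",
        # Fail-back and cleanup are part of the same runbook, so they
        # are not forgotten once everybody has breathed out:
        f"re-establish replication back to {region_from}",
        f"fail back to {region_from} during a planned window",
        f"decommission temporary resources in {region_to}",
    ]
```

Keeping the decommissioning step in the same artifact as the failover steps is what makes the cost cleanup auditable rather than optional.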
So that's the really important one that might otherwise get forgotten: the fail-back approach, and remembering to shut all the resources off. That's true in a migration as well. It's not only about getting there, recovering, and everybody breathing out; in the case of a migration, to realize some of the cost benefits you have to make sure you really are cleaning up behind yourself and decommissioning services that might otherwise keep running.

Great, thanks for that. Just one last question that recently came in: they keep all their recovery plans in a spreadsheet, so how could they migrate those recovery plans into a tool such as Cutover? Is there an easy way?

Yeah. We have a CSV import, so we tend to find that when we start with a new customer, you go through a process, over a period of time, of migrating those. It's usually quite event-driven, in terms of doing it for a purpose, and therefore you can migrate and then test straight away to validate that they're accurate. But yes, we have a mechanism for that: taking something in Excel, putting it in CSV, and then uploading it into Cutover.

Okay, great, thanks. That looks to be the end of the questions. I want to thank everybody for attending today, and Marcus and Karen, thank you very much. I think you presented a really great outline of the value of automating your runbooks and integrating them across the tech stack. With that, we'll close out for today. Again, thank you everybody for attending. Bye.
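As a postscript to the spreadsheet-migration question above, the Excel-to-CSV-to-runbook flow might look like this in miniature. The column names are assumptions about a typical spreadsheet plan, not Cutover's documented import format.

```python
# Sketch: turn a spreadsheet recovery plan (exported as CSV) into
# structured, ordered runbook tasks. Column names are assumptions.
import csv
import io

def csv_to_tasks(csv_text: str) -> list[dict]:
    """Parse 'step,description,owner,duration_mins' rows into task dicts."""
    tasks = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        tasks.append({
            "order": int(row["step"]),
            "description": row["description"].strip(),
            "owner": row["owner"].strip(),
            "duration_mins": int(row["duration_mins"]),
        })
    # Spreadsheets are often out of order; sort by the step column.
    return sorted(tasks, key=lambda t: t["order"])
```

Once the plan is structured data rather than free text, it can be executed, timed, and audited step by step, which is the point of the migration.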