So, just a fifteen-to-twenty-second background on Cutover before we get into that. Cutover provides a number of products that support organizations' ITDR capabilities. The core technology is based on automated runbooks with AI for tech ops teams, enabling those teams to leverage far more automation and reducing the burden on the humans making decisions in the moment during these high-risk processes. The products that are most applicable here are Cutover Recover, which many large organizations use to fail over their applications at scale and make sure they can achieve their recovery time objectives, and Cutover Respond, a major incident management solution that then pulls in that recovery capability once you've diagnosed that particular applications are down and recovery is needed. And we're very grateful to have significant traction in financial services; many of the world's largest banks use Cutover today for those products. A quick disclaimer: the views expressed today will be those of the individuals rather than the firms represented. So let's get into it with our first question. Our first question for the panelists is: how can you ensure your disaster recovery plans keep pace with the dynamic IT environments of your institutions, whether testing needs to take place or there is a real event and you need to recover your IT applications? Maybe I'll come to you first on this to share your opinion. Sure. One of my favorite questions: how do we keep up to date? It really comes down to the data, having that data be accurate, and having access to that data so it's not buried deep within some Word document, inaccessible and unusable in a crisis. So one of the first things we say to keep up with the ever-changing pace of technology is to regularly update your systems of record, making sure you have the full lineage of your critical business services mapped to your assets. By assets I mean the processes that are needed, with their respective RTOs, and the technologies that support them, whether they're internal applications or third-party vendor applications. Having full insight into that mapping will ensure your disaster recovery and continuity capabilities keep pace with your environment and organizational changes. In my current role this job is made relatively easy by having the third-party risk management program under my remit, so I can direct all of the data flow to feed into my business continuity source system. That way I have up-to-date data at all times. I do require semiannual updates for all of our processes and all of our data mappings, but if a material change occurs, the update has to happen right then and there, not wait for the six-month cycle. I will say that in larger organizations where each team has its own leader, it is a little more challenging to get that data to flow seamlessly across the organization, but it can be done with good partnership and good technology. I'll pause here to see if others want to add. And, Anita, do you want to add to that? Absolutely. I think everything Olga said is absolutely right: it's all about your dependencies and what data you care about. What I would add is that you also want to think about disaster recovery with more of a scenario-agnostic approach. Right?
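To make that mapping point concrete, here is a minimal sketch of the kind of system-of-record structure Olga describes: a critical business service mapped to its supporting assets, each with an RTO and a last-reviewed date, so mappings that have missed the semiannual cycle can be flagged. The class names, fields, and dates are purely illustrative, not any particular product's schema.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Asset:
    name: str            # internal app, process, or third-party vendor service
    rto_hours: int       # recovery time objective for this asset
    last_reviewed: date  # when this mapping was last confirmed in the system of record

@dataclass
class BusinessService:
    name: str
    rto_hours: int
    assets: list         # list of Asset

def stale_mappings(service, as_of, max_age_days=182):
    """Flag assets whose mapping has not been re-confirmed within the semiannual cycle."""
    cutoff = as_of - timedelta(days=max_age_days)
    return [a for a in service.assets if a.last_reviewed < cutoff]

payments = BusinessService("Payments", rto_hours=4, assets=[
    Asset("Core banking app", rto_hours=2, last_reviewed=date(2025, 2, 1)),
    Asset("SWIFT gateway (vendor)", rto_hours=4, last_reviewed=date(2024, 9, 15)),
])
print([a.name for a in stale_mappings(payments, as_of=date(2025, 6, 1))])
# -> ['SWIFT gateway (vendor)']: this mapping missed the semiannual review cycle
```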
So when you are writing your disaster recovery plans, especially in a very dynamic world, you want to make sure you don't bake in too many specific scenarios. You want to test based on scenarios that are more likely for your environment, but when writing the plans themselves, think in terms of everything from granular unavailability of your assets up to an integrated outage where multiple components go out at the same time. It could happen for any number of reasons, but taking that into consideration is most important. The second point I would emphasize is that, especially these days, you can't really predict what the scenario is going to be; it can be anything. That makes it all the more important to have a very good crisis escalation and communication plan in place. That means how you communicate bottom-up and top-down, but also communication to your third parties, from your third parties, across the industry, and with other parties of interest. Make sure you've taken all of that into consideration and that it's well documented as part of your plan. I think that's going to be more essential going forward. Right. And what I would add, so I entirely agree with what Olga and Anita are saying about data and metrics and making sure these plans are well documented, is that I usually try to bifurcate what I'm doing in this space between conformance and capability. What I mean by that is: conformance is that you have the plans, you have the documentation, you know what's there, and you're staying up to date with the latest metrics. Capability testing is much more in line with, as Anita mentioned, scenario design and development, and taking the reasonable worst-case scenario. Ultimately, even though you have these plans and environments well documented, people don't always understand that if you do a logical isolation of the network, or if other things come into play, you are establishing these tests from an IT perspective in a way that would impact the greatest number of critical business services, or different areas of the firm, to be tested. Conformance is obviously extremely important to keep up to date, but I'm a big proponent of testing, and specifically DR testing in the IT environment, to make sure you can evidence the capability of recovery beyond just the conformance aspect of things. And whether it's IT testing or more operational testing, a lot of the regulations, specifically from the PRA and FCA, say you need to evidence the decisioning and what happens in the cases that aren't what I would call a no-brainer scenario, where you need to fail things over because it's an availability issue and you go from one data center to the next. You also need to understand, if you're not seeing that no-brainer case of "we need to fail everything over," that if a decision needs to be made, you have that documented as well. So the environment is evolving just by way of the evidence you need to demonstrate to the regulators based on the regulations.
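As a rough illustration of that scenario-agnostic idea, the sketch below derives recovery actions from which components are actually observed to be unavailable rather than from a named scenario. The component names and actions are hypothetical.

```python
def recovery_actions(status):
    """Derive recovery steps from observed component availability, not from a named scenario."""
    actions = []
    if not status.get("network", True):
        actions.append("fail over to the secondary network path")
    if not status.get("primary_storage", True):
        actions.append("promote the replicated storage in the recovery site")
    if not status.get("primary_compute", True):
        actions.append("start standby compute in the recovery region")
    return actions or ["no infrastructure failover needed; investigate the application layer"]

# Hypothetical availability snapshot fed by monitoring at incident time
snapshot = {"network": True, "primary_storage": False, "primary_compute": True, "dns": True}
print(recovery_actions(snapshot))
# -> ['promote the replicated storage in the recovery site']
```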
But I do think, in the vein of capability testing, that being able to go back and evidence you can actually achieve what you say you can achieve, as part of the conformance, the metrics, what you've documented, your runbooks and playbooks, is the point of this paradigm. For me a lot of this is almost working backwards and saying: okay, we have these scenarios developed, and we know full well we could build an encyclopedia of scenarios, so how do we evidence them? A lot of regulators will specifically say they want you to set something up, in a UAT environment or even its own separate environment, to run some of these tests and make sure that, even though you might have conformance documentation that is novels and novels long, when something actually happens you can achieve what you say you can achieve and demonstrate your recovery capability in that space. So a lot of this is a bit of a trust-but-verify model: through adequate testing, given the evolving risk landscape, you can evolve with the threat matrix we're dealing with these days and evidence that you can achieve what's within your plans. Great points. I've taken notes through that as well; some really good additions to that question. Thank you all, panelists, very much. What we're going to do now is a poll question with our audience. You should see the poll has popped up. Essentially, it asks how confident you are that your ITDR plans can support you in a live incident. That picks up some of the panelists' points about making sure the capability is demonstrable, not just the conformance: that you actually can do the recovery, and your ITDR investment can be leveraged in a major incident itself. The options are: very confident, confident, not sure, not confident, or not confident at all. Just reading the question again for the audience: how confident are you that your ITDR plans can support you during a live incident? Very confident, confident, not sure, not confident, or not confident at all. Looking at the poll there, that's a good amount of confidence in the live capabilities we have. Thanks, audience, for sharing your insights. So, panel question number two: what are the things you need to get right, on top of keeping the plans up to date, so that your ITDR investments can be used in real-life major incidents rather than just the scenario tests we talked about? That picks up a little on the poll question and the insights from the audience. And I wonder, in this case, Anita, could you start us off? Sure. Here I'm going to make the assumption that most people know the framework, so you know exactly what you need to do to get your DR investment right: risk assessments, knowing your business impact analysis, having detailed workaround strategies, doing your testing, and so forth.
The thing I'm going to focus on is the things you will most likely not think about, but where you also have an opportunity to be prepared. The first is: make sure you know your stakeholders very well, understand your business very well, and make sure you have a really good network. You can't predict what events could happen, and under those circumstances you want to be able to reach out to your stakeholders immediately, and have stakeholders who genuinely value your thought process and know you're giving them the best advice and the best options. The same goes for your network: it's important to have a network not just within the organization but outside it, in case of widespread outages, keeping in mind that depending on your industry you could have an impact on market integrity. So it's important to have that knowledge and that capability of exchange. Those two are essential. The third is setting the right expectations with your third parties: making sure your third parties understand where you are with your DR expectations, that you're holding them to the same level of expectation, and setting the boundaries for how you want communication to happen and what you would do during a time of crisis. Those are soft skills that don't typically get called out, but they're where you should be investing from an ITDR perspective as well. Thanks, Anita, that's great. And, Spruill, do you want to share your points on leveraging ITDR for real events? Yeah. To elaborate on what Anita is saying, the subject matter expertise is key: understanding your business, understanding what those dependencies are, and really being able to challenge your stakeholders. In the vein of traditional BC, going back to pre-2018, people might say, "I need this order management system up and running within two hours," because that's how you would facilitate trading. But you also have to understand the dependencies: to facilitate trading you still need to understand your positions and your exposure. So a lot of folks will have a very tight RTO value on something like an OMS but a much more relaxed RTO on the underlying, say, Oracle system that captures the trade blotter. You have to be able to go in and challenge your business. Up until recently, business continuity was a bit of a necessary evil, so to speak; people wanted to check the box and say, "I need this system back up and running." But you have to be able to effectively challenge your teams and your businesses: fine, you need this system up and running, but what are the dependencies you need to continue business? And in the vein of operational resilience and what's ultimately delivered as a service to your clients, what's the front-to-back flow, how does it work, and what's the bigger picture? Unfortunately, in the past, BC has been seen as a jack of all trades, a mile wide and an inch deep. In the vein of operational resilience, it's a mile wide and a mile deep. Right?
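Spruill's OMS example lends itself to a simple automated sanity check: a system's dependencies should not carry looser RTOs than the system that relies on them. A minimal sketch, with made-up systems and numbers:

```python
# Illustrative dependency map: system -> (declared RTO in hours, upstream dependencies)
systems = {
    "order_management": (2, ["trade_blotter_db", "market_data"]),
    "trade_blotter_db": (8, []),   # the relaxed RTO on the underlying database
    "market_data": (1, []),
}

def rto_conflicts(systems):
    """Flag systems whose dependencies carry looser RTOs than the system itself."""
    conflicts = []
    for name, (rto, deps) in systems.items():
        for dep in deps:
            dep_rto = systems[dep][0]
            if dep_rto > rto:
                conflicts.append(f"{name} (RTO {rto}h) depends on {dep} (RTO {dep_rto}h)")
    return conflicts

print(rto_conflicts(systems))
# -> ['order_management (RTO 2h) depends on trade_blotter_db (RTO 8h)']
```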
And understanding what those dependencies are within the firm and, similar to what Anita said, your reliance on third parties is key to developing some sort of integrated testing model: does everyone have the same expectations? Evidencing this through real hands-on testing is critical as well. So when it comes down to the testing aspect, and really the conformance, the data, and the metrics you need to consider, you have to be able to go in and effectively challenge your stakeholders. Usually it's the smartest people in the room you're dealing with, but you ask the critical questions: okay, this is fine for the system, but what are the dependencies? What do you need to consider? What do you need to be able to facilitate business? During a disruptive event you're trying to minimize impact, so understanding what other resources enable the delivery of that service is key as well. It's much more of an art than a science in some cases, but having that SME knowledge and being able to come in and effectively challenge the folks who do this day to day is critical. IT investments, and what's happening by way of resilience, have to be all-encompassing: it's fine to have one key area, but you have to take the entire environment into account to make the decisions that enable the delivery of that service. So, big-picture items when it comes to DR: it's one component of your resilience landscape, but making sure you take the entire front-to-back flows into account is very important. That's great. And I think, Olga, it would be great to get your perspective as well. Absolutely. I completely agree with Anita on the clear roles and responsibilities, and with Spruill on that subject matter expertise and really understanding those dependencies. I would add that regular and realistic testing is probably one of the elements of your program that you definitely want to get right. It's not only crucial for building muscle memory; it really shows you how capable you are of recovering. And when I'm talking about testing, I try to stay away from the rinse-and-repeat exercises we do every year: there's a script, IT fails us over, we run certain jobs out of the safe environment, and we get that nice check in the box that we have done our testing. When I'm talking about testing, I mean real-world scenarios, with a narrative designed up front in a way that forces participants to make decisions and problem-solve, not the traditional just-run-the-script-and-you're-done. That does tie into clear roles and responsibilities and into your subject matter expertise. You want your tests to include your human resources partners, marketing partners, and operational teams, so they can experience what it would be like to operate from that recovery location or your recovery cloud environment, and understand how different systems, when they're in the recovery phase, interact with third-party dependencies. Because that's the piece that is often missed in our DR programs: we don't do enough joint testing with our vendors.
And what I mean by that is: if we fail over, can the vendor fail over, and can we still talk to each other? Can we still connect and make sure the process is seamless and disruption to the customer is minimized or nonexistent? Those are the elements that take some preplanning and thinking ahead of time. Then testing stops being a routine of running through the list of monthly, semiannual, and annual tests. It becomes: we're going to design this narrative because we've seen it happen, or it happened to another bank or in another country, and we're going to design the operational components we're testing and the technical components that work together. It is this new way of testing that is incredibly powerful, because it brings everybody around the table and really reinforces all of the points my fellow panelists just made. So I would add that. And one thing, Olga, to elaborate a little: we just went through a scenario where the IT function had automated more of its change management process. Having that close partnership with IT, to understand what they're doing in this space and what happens when systems get updated and those changes go in, mattered because that automation actually led to a disruption we had, the rubber meeting the road on a Monday evening. You don't have the same four-eyes check when certain things in the scheduling have been automated, and that should be transparent to the ops res and BC space: where some of these things might be automated. Ultimately, what led to that disruption was the automation of the IT tasks around change management and scheduling, and in some of those areas you need to be on heightened awareness. So having that close tie into IT, to understand where there might not be as many people hands-on or reviewing these things when they automate, which is a great thing, don't get me wrong, but knowing full well you need to be on heightened awareness when something like that happens. We did experience a disruption based on some of the technological advancements they had made; it leads to unintended consequences if things aren't transparent with your larger audience. Yeah, how many incidents are we all experiencing where the explanation from the third party is, "Sorry, we had a system upgrade," or, "We implemented this change to our process," with unintended consequences downstream, three third parties removed, and you're feeling it in your organization? Absolutely. Yeah. And I can respect that things don't break on their own; they're being broken by change, and change is happening at a faster pace. But then, as you say, how do you make sure that's appropriately linked into how you're considering operational resilience, rather than just accepting it, when change is, I think, the cause of eighty-five percent or more of outages? Okay. Some great insights there. And I think, Spruill, you were almost prescient in terms of where we're going next: automation.
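Picking up the panel's point that most disruptions trace back to change, here is a small illustrative sketch of the first check an incident responder might automate: which changes, automated or manual, landed in the window just before the incident started. The change records and field names are hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical change records pulled from a change-management system
changes = [
    {"id": "CHG-101", "system": "payments-api", "deployed": datetime(2025, 3, 10, 21, 0), "automated": True},
    {"id": "CHG-102", "system": "hr-portal",    "deployed": datetime(2025, 3, 9, 14, 0),  "automated": False},
]

def recent_changes(changes, incident_start, window_hours=24):
    """Return changes deployed in the window before the incident, most recent first."""
    cutoff = incident_start - timedelta(hours=window_hours)
    hits = [c for c in changes if cutoff <= c["deployed"] <= incident_start]
    return sorted(hits, key=lambda c: c["deployed"], reverse=True)

incident_start = datetime(2025, 3, 10, 23, 30)
for c in recent_changes(changes, incident_start):
    print(c["id"], c["system"], "automated" if c["automated"] else "manual")
# -> CHG-101 payments-api automated
```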
So, with some of that detail Spruill talked about in mind, where do we think AI and automation can help in how we do ITDR and enable us to recover faster in real major incidents? Some of that may be about how we link into the automation of other components, like change, as Will mentioned. Olga, do you want to kick us off on this one? Sure, thank you. A couple of things on automation. First and foremost, any new technology, and AI is no different, introduces risk. So how do you manage risk? Very carefully, right? That's the joke. What I've seen at my recent organization is that you want to have some kind of oversight over AI. In my current bank we have an AI board; whatever you call the committee or structure, you have subject matter experts from around the bank review the use cases for AI and how we get comfortable with them. It's very important to have that governance chain; otherwise it's very difficult to withstand scrutiny from the regulators on how you're placing reliance on certain tools. That almost goes without saying. As for where it can help, there are a number of use cases. For example, early detection of incidents: a lot of providers can help with monitoring your third parties, your internal systems, your domains, and your vendors' domains for vulnerabilities, and give you a heads-up that there's an unresolved vulnerability. By addressing that vulnerability you can avoid the incident starting at all, and that's where automation can really help. Can we stop all cyberattacks? No. But if we can minimize them, we can focus more resources and attention on the ones that aren't stopped. There are actually studies where companies are using historical data to predict natural disasters and where they're going to hit next. Really cool science behind it, and all of these tools are coming to market. All of these capabilities will help us be more prepared and identify problems before they arise. But relying on those tools has to be done very carefully, with a lot of understanding and subject matter expertise within the bank to evaluate those models. Right now a lot of our model risk folks have not been trained to evaluate variables that are ever-learning and changing, and that's an expertise gap some of our organizations may be facing. So again, having the governance to make sure you analyze those tools is going to play a critical role as we embrace this technology. Good stuff. Spruill, did you want to add to that? Yeah. AI is integral; I'm a big believer in leveraging the technology, and we have to evolve or die, so to speak. I hate to be dramatic about these things, but when you leverage the AI components, and having seen the capabilities and functionality within Cutover, and then the APIs and the dependencies with a system like Fusion, being able to identify the processes that interconnect matters. As I mentioned before, and as a lot of what Olga and Anita have alluded to as well, the dependencies, the mapping, and where those exist, so you can level-set with your stakeholders, is key. Ultimately, you can't run the number of permutations required to go through every single scenario type.
However, you can get a heat map, or something that enables you to identify where possible discrepancies lie or where issues may arise as a result of those dependencies. Traditional BCM focuses on processes; in the ops res landscape you're focusing on services. And AI, from what I've seen so far, and we're only scratching the surface because in full transparency my firm is at the early stages of enabling these types of technologies to identify those dependencies, helps answer what will usually be the question: what do you need in order to facilitate the delivery of a service or a process? So leverage the technologies to identify those potential gaps and assess those areas. At the end of the day, what you're trying to protect is the value you provide to your clients and your internal stakeholders, knowing full well there are these interlinkages. You could have ten people running ten hours a day trying to examine where the dependencies lie and where the breakage will occur, and you're still not going to think of every single permutation of a scenario. But leveraging AI, looking at these dependencies, knowing which process or service depends on which other process or service, is paramount. The AI component helps you look at these in a better fashion and identify where some of these risks may lie. At the end of the day, that's where you provide value to your management team: hey, look, there might be a risk in this area, we should delve into it further. So, being a huge proponent of what that can do and how you can provide value to your stakeholders, it's something I'm trying to put at the forefront with my management committees, to understand where those interdependencies lie. I think that's one of the key areas of a value-driven model, where you can bring to light risks that people might not have thought about in the past. Yeah, I think that's absolutely true. As Anita mentioned earlier, you couldn't build enough plans to cover every different scenario. But if you can build an agnostic plan where you leverage AI to parse the data points on the status of the application and dynamically adapt, say the network is okay but storage isn't in this case, and now I'll recover my application in that dynamic manner, I think you're absolutely spot on, Spruill, about the ability to take further steps in this direction. I don't know, Anita, if you want to add your thoughts to this as well. Yeah, I would just add that, of course, you have to use AI with caution, but the other piece to keep in mind is that you can go crazy with AI.
Because in theory you can use it across the entire life cycle of your resilience process: from the points Olga made about predictive analytics, figuring out your risks and vulnerabilities and trying to fix them, to incident management during a time of crisis; you could use it for continuous monitoring, looking for anomalies within your day-to-day BAU; you could take it a step further into resource allocation during a live event, how you manage your crisis, and so forth. The reality is you could go really deep within each of these topics and leverage AI. The question you need to answer is: what's the most efficient way for your organization to use AI? Where are your gaps? Where do you need this first? Understanding that is going to be very important; otherwise you're going to over-automate and really get yourself into trouble managing the ripple effects and the risk that come with each of these processes. So first, take a step back and understand where you really want to use AI within the organization. Once you understand that, yes, of course there are certain risks and cautionary rules you have to apply. But you could use this very broadly, so the first thing you want to do is understand all the places you could use AI for resilience management. Think beyond resilience management in terms of events and getting your downtime as low as possible; think about it also through the lens of how you can secure your systems, how you think about quantum measures, how you keep yourself away from those scenarios in the first place. You could use it for a whole variety of things, and I think it's going to be very interesting to see how people use it. I remember a few years ago there was a concept of resilience by design, which was a huge hit; this was before operational resilience. I feel like we're getting back to that stage, where you want to think about AI in the vein of resilience by design early on, through the entire life cycle of your applications, your people, and your systems, and try to apply it there. I mean, we're still far from that. Have you ever had teams try AI during crisis or incident management, where it listens to everybody in the room and writes down the most pertinent points, and that's supposed to help with the incident communications that get sent out to all of the groups? We've played around with it, and the results were very interesting: what the AI model thought was important versus what was really important. You can train it over time, of course, but some of the initial runs we did with that technology were very funny in terms of what it found to be important. So again, I completely agree: you can give it all to AI and we can all lose our jobs, but you have to be very intentional about how you use it and whether the results are what you expect them to be. There has to be that partnership with the technology. Right, that makes a lot of sense. Maybe we can pick your brains offline at some point about what it thought was important versus what was really going on; I'm sure that was very good.
As you say, you'll train it over time to get better and better. I think there are some good audience questions coming in that dive deeper into some of those AI components, but perhaps we should move to the next question, which is really about how some of these advances we've talked about in automation and AI help meet the regulatory requirements for operational resilience and DR, coming back around to that. And I wondered, Anita, if you might kick us off on this one. Sure. I think the major concern most regulators and others have is how we make sure there is no impact to market integrity, and how we keep the downtime of our systems and processes as low as possible. And this is exactly where all of the things we discussed today can really help, from identifying your vulnerabilities to management and communication, all of it. So there is certainly an opportunity here. The other piece we didn't dig deep on is the complexity of today's world: multiple data centers, multiple geographical locations, and a hybrid environment where people work from home, from an office, or from another space. Getting all of that information and then using your AI to do a good job of resource preparedness and incident preparedness is certainly something regulators and others will appreciate, especially if you can show you're using your ITDR strategy to minimize downtime while at the same time improving your communication and information-sharing process so that you're maintaining market integrity. I think those are the key phrases. You want to make sure that how you leverage automation helps with your DR planning. In other words, don't plan in isolation: whatever you're planning from a crisis perspective, share that information, and think about it not just in terms of your own risks and failures internally as an organization, but also in terms of how you avoid causing a ripple effect across the industry. Good stuff. Thanks, Anita. And Olga, what are your thoughts on this question? If you read the regulatory perspectives and their areas of focus, whether it's from the OCC or the Fed or our regulators across the pond, a lot of them use words like operational resilience, but in every one of their focus areas is third-party dependencies, so I'd like to focus on that. Third-party dependency is a significant concentration risk for a lot of our institutions, and it's a significant reliance risk. We say we have service level agreements within our contracts; sometimes we do, sometimes we don't. How do we monitor them? Is somebody actually dedicated to oversight of those SLAs when things are BAU and nothing is broken? Is there enough attention on those suppliers and providers? Because when we think about regulatory compliance, how do regulators get at operational resilience topics? It's usually through other topics and exams: either through the IT exam or through the third-party exam, and operational resilience keeps coming up.
It's very rarely an operational resilience exam per se, unless they're doing some kind of industry benchmark across the banks. That's where automation can really help. If you have robust SLA monitoring software, or your vendors have those capabilities, and you have dedicated resources to oversee them continuously, you can get a lot of benefit out of that and meet those regulatory requirements. The other piece is that with any regulatory exam there are a lot of documentation requests up front and throughout the engagement. Can AI help with those documentation requests? Can it review your previous submissions and help you prepare packages without the very intensive human element of putting all of those reports together every year, or at whatever frequency you're audited? Yes, absolutely. That's another place where AI can help: pulling your DR testing results and your third parties' testing results together into a comprehensive package. And then finally, enhanced monitoring and risk management. We've talked about enhanced monitoring, but the more you can automate, and the more different types of risk you monitor for, the stronger and more prudent your risk management practice is. What I mean by that is not only deficiencies within an environment, but also reputational risk, for your vendors and for yourself. Sometimes trouble starts brewing before a vulnerability is identified: maybe the company is experiencing some slander, maybe the company is experiencing financial harm. Those are early indicators of negative noise and negative news that can alert you to pay more attention and get ahead of the problem, or maybe work with your vendor to see if you can help them out. Or, if you need to part ways, sometimes it's a hard conversation, but if you need to stay operationally resilient and your partner is not keeping up to your standards, you may have to part ways. So those are my points on how to get ahead of those lovely regulatory letters. Thanks for that. And, Spruill, you wanted to add? Yeah. To get extremely granular as opposed to staying high level: for those who are aware of the DORA regulation in the EU, the Digital Operational Resilience Act, it is extremely prescriptive and extremely granular as far as what they want to see, especially from an incident response and incident reporting standpoint. This is an area that I think is driving the bus, so to speak, in the propagation of the east-to-west mentality; Olga mentioned regulations across the pond, and I think this is an indicator of things to come. As we start to get into some of these very granular areas, whether it's the incident response silo within DORA or the register of information, DORA is one of those areas where I think the spirit of the regulation is correct; however, there are a lot of nuances to work out over time. So, leveraging technologies, leveraging AI: we did a lot of AI analysis when it came to the review of individual contracts with third parties. But coming back to what needs to be reported during an incident under DORA, if people have not seen it and are gluttons for punishment who want to run through the thousand line items of individual obligations within that regulation,
it is worth having a look at what data will be required to be reported to regulators. Again, in the spirit of trying to be more resilient, I do think things are on the right track; however, there are some speed bumps along the way. So as you go through these areas, try to understand what needs to be reported on, and really understand, in the vein of DORA's ICT technical standards, the obligations around data capture and data retention, and what you need to have eyes on within those EU-regulated entities. You can only imagine that this is going to propagate, like I said, east to west. So the advances in IT and DR, and what happens in that space, need to be viewed as something that will likely become more of a global standard, the same way business continuity became more of a global standard with the FFIEC guidelines going back to the mid-2010s. So looking at these areas, understanding your environment, understanding the data points that need to be accessed, and again, what those dependencies are, is critical. On regulatory compliance: I hate to use regulatory obligations as the driver for my program; I try to frame it as good business hygiene rather than a path to minimal compliance. But the writing is on the wall with a lot of these things, and I think this is only going to get more and more complex when it comes to the data points you need to generate, especially around ITDR and evidencing your capability of recovery in the future state. That's great. Go ahead, please. Yeah, I think that's a really good point, because as an organization, depending on how big you are, you're going to see a lot of these regulations come up with slightly different nuances. You as an organization need to decide how you want to handle those circumstances, and you want to make sure you're being consistent across all of them, and consistent in a way that is complementary to all of those regulations and to your vision of your business. It requires a lot of thought, but it also requires being at the forefront of it, and understanding not just your own resilience regulations but the other regulations within your industry, and seeing how it all aligns. I think Spruill made that point early on. This industry is fairly friendly, so talking to colleagues and to others who are trying to build or mature their programs is invaluable, because we're all trying to solve the same formula: we're all trying to make sure our organizations are resilient. There's no secret sauce here, so the more we can help each other, the more resilient the world will be. This is one of those areas where, unlike in finance with OCR ratios and all of that, there is a lot of collaboration happening. So taking advantage of these webinars and conferences is very valuable, time well spent. Good stuff. So that wraps us up for the panel questions. We've got quite a number of audience questions that have come through during the event.
And as we only have a few minutes left, I wonder if I could share some of those with the panel and see if people have thoughts they want to share; we'll see how many we can get through. The first one, and it may not generate many more comments because it's in line with the last question, but there may be some specifics: does the panel have any comments on US banking regulatory exam focus areas? Any insights there would be deeply appreciated. We spoke about DORA there, but in terms of the US banking regulatory exams, are there points people want to bring up? So I can speak to this. I'm coming from Citi, and being in the GSIB world I know full well what peer firms are experiencing. Obviously in the US we have, not a regulation, but a white paper, SR 20-24, the guidance note the OCC, Fed, and FDIC put together. I do know there are CFTC regs out there these days as well. But what I know for a fact is that many large banks are receiving three-letter letters from our friends downtown, or even four-letter ones, MRAs and MRIAs, in this ops res space. So we need to look at what the expectations are. Without formal regulations in place, firms are already getting some of these letters as to whether or not they're meeting the spirit of the white paper guidance, and then ultimately the PRA and FCA standards or the Basel guidelines that are out there as well. Again, a lot of this is a global landscape. What has happened over the years is that BCM has been, call it, a graded paper up until now: you could get an A, a B, a C, whatever it is. That's been driven more toward a Boolean pass/fail: do you have a program, yes or no? But these days the ops res view, encompassing the front-to-back flow, understanding your client base, understanding how these disruptions, specifically in the IT area, would impact your client base, and understanding your clients' vulnerabilities, is key. So again, the writing is on the wall with a lot of this. Trying to sell it to management is obviously difficult sometimes without formal regulations in place, but demonstrating the value proposition of good business hygiene is likely the way to go. And we do know full well, from a regulatory landscape, that we can only anticipate further regulation in this space, especially when it comes to the Main Street versus Wall Street paradigm: how you impact the outside world, whether it's your individual clients or the markets themselves, both by way of integrity and confidence, is going to prevail. The SEC came out with their exam priorities for 2025, and ops res was at the forefront of them. They're not very prescriptive there, but you can imagine the themes they pull from will likely come from our friends in EMEA. So there's a lot to come in this space moving forward. I think the intentions here are also right. Right?
It's along the lines of making sure organizations are doing their best to figure out how they manage their third parties, while being mindful that those third parties have dependencies as well. That's one element. The second element is trying to figure out whether there are any monopoly vendors or single points of failure across the industry, and if there are, how we can all work together as organizations towards building a more secure environment. That's the side benefit you can really get to if you've done a great job managing your third parties, because then you're helping identify the points where you have a huge dependency and not too many options, and as an organization, or as an industry, we can all start working on that. I think that's something many of us are conscious of, and it's one way to look at it when you're thinking about third parties. Yeah. And even though SR 20-24 is kind of a summary, it wasn't presented as a net-new requirement; it was a summary of existing regulations. When it comes to third parties, there is interagency guidance that all three of our regulators pulled together, and it is quite specific on the expectations for managing not only your third parties but also your fourth parties. And that, again, is where there are platforms and tools available, and we're using some of them, where you can monitor your fourth parties. Even though you may not have a contractual relationship with them, there is a way to set them up for continuous monitoring, and that may help you be aware of weaknesses within their environment, because it's very difficult to do full due diligence on a fourth party when you don't have that contractual relationship. Unless your vendor is forthcoming and shares their documentation, SOC 2 Type II reports and the like, it is very hard to ascertain how well they manage their data privacy, technology, and resiliency capabilities. So that's where AI, again, can help with that monitoring and with demonstrating your compliance with the regulatory requirements.
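As a closing illustration of that continuous-monitoring idea, here is a minimal sketch that scores external signals about a vendor, or a fourth party you have no contract with, and raises a review when they cross a threshold. The signal names, weights, and threshold are invented for illustration only.

```python
# Invented signal weights; a real monitoring service would derive these from its own data.
WEIGHTS = {"unpatched_vulnerability": 3, "negative_news": 2, "sla_breach": 4, "financial_stress": 3}

def review_needed(party, signals, threshold=5):
    """Escalate a vendor or fourth party for review when weighted external signals cross the threshold."""
    score = sum(WEIGHTS.get(s, 1) for s in signals)
    print(f"{party}: score {score} from {signals}")
    return score >= threshold

flag = review_needed("Hosting provider (a fourth party)", ["unpatched_vulnerability", "negative_news"])
# -> score 5, flag is True: schedule a closer look before it turns into an incident
```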