Our typical processes that run on Cutover for the enterprise are things like application recovery, to make sure applications are resilient and services are maintained under various disaster scenarios, application migration, release, and, very importantly, major incident management, which we'll get into today. We want to underpin our enterprises with faster operations, fewer mistakes and, importantly in major incident management, a reduced mean time to resolution. So with that said, and drawing the Cutover intro to a close, let's double disclaim: just sharing here that the views being expressed, as Saba and Shatresh mentioned, are the views of the individuals here, not their employer organizations, so we can have a very fruitful conversation; let's make sure we're all aware of that.

Right, let's get into the panel questions and the content that we've all been waiting for. The first two or three questions are really about the backdrop of major incident management, and then we'll get into the use of AI in it, which many see as a very fruitful area of investment and return. But firstly, we have panel question one: how have panelists seen the growing complexity of IT environments create a strain on major incidents? I'll go to all panelists, but if I could perhaps pick your brains first, Jack.

Yeah, thanks. So on average we're seeing data grow by something like twenty percent year on year, right? What that's doing is that organizations often aren't able to project out far enough to create the infrastructure and support needed to progressively grow a functioning environment. So what often happens, other than with more modern, modular cloud designs, is that companies are just bolting on sections. As time goes by, this increased complexity isn't being managed in the efficient way we would if we were starting from scratch and building it up slowly. You then compound that with things like growing supply chains, and higher expectations from customers than ever on managing data, and it really fragments visibility and accountability. So when we've got this growing complexity of IT environments, managing incidents becomes even more tricky. I think we can probably viscerally understand the role AI can play in it, because we know that AI is really good at managing data. But before we even get to the data, how do we manage the governance? The least interesting part of it, but actually one of the most important, right? Oftentimes organizations have outsourced their IT. So now you've got supplier management: how do you run that? And even your IT provider might in fact outsource the SOC even further. All these providers, depending on which SOC you're in, provide different amounts of data to customers, depending on what systems and infrastructure might be shared between customers. So you can't actually see the raw data itself. And then you've got different policies and procedures on top of that. It creates a really complex environment, and when you're managing an incident you're trying to cut through all of that complexity to make decisions. So no matter how good your information is, and let's just assume your information is really good and you understand your systems and you've got asset registers and the like, how do you get through and make good decisions? Okay, so the first thing you could do is AI, right?
AI has a fantastic ability to dig through all that data, use a lot of compute power in the background, and help you. But it does assume that the quality of data you feed in is good to start with, and I think we can all understand that most organizations don't have the best data. We're all aware, when you're on the inside, of what vulnerabilities you have and what difficulties you've got. At the end of the day, my background is in incident management, whether it's IT, cyber or otherwise, and for me the priority is always making good decisions. Because of the sheer volume of data, systems and applications, that is extremely challenging to do. From an insurance perspective as well, we're seeing that people are a little bit bored of data breaches. They're really not having the impact on organizations from a financial perspective that we expected. Certainly compared with ten years ago, I think around twenty eighteen, it's dipped off. More recently, anyway, it's the business interruption that makes the huge difference. You only have to look at the countless examples; there are dozens in the UK, and I know there are many more across the world, where when business interruption, service interruption, takes place, that's what hits the business. And those are the complex systems we're talking about today.

Thanks, Jack. I couldn't agree more. As you say, the data needed to support decision making, and the complexity of the increasing number of third parties delivering it, is a challenge. And often the reliance on some really key pieces of kit that underpin a wealth of business processes makes the business impact a tricky thing as these environments get more complicated. Now, Saba, I wondered if you wanted to add to that at all.

Sure. Thank you, Kai. Some great points there. Looking at it from a business continuity and cloud resilience standpoint, I see incident response and resilience as two sides of the same coin. Right? Preparedness is important. So for the audience today, you'll hear more from me from a preparedness and resilience angle, because if you're not prepared, then your response, or your mean time to recover, will take longer. From a preparedness standpoint, make sure you're always thinking about business continuity. The whole point of incident response is that you want to make sure the incident doesn't run for too long and you want to recover quickly. So before an incident occurs, there are a lot of steps we can take proactively: integrate those measures within the design and within the deployments, and ensure the mean time to recover is reduced. From that standpoint, some of the best practices you could use are, again: design for failure, implement continuous resiliency testing and, as a continuity team, collaborate closely with incident responders. I think that would be a great partnership in helping to reduce the mean time to recover.

That's great, Saba; some really important points, and preparedness is absolutely key in getting towards that situation where you're almost doing the failover based on each release and update. It builds that muscle very, very well. And, Shatresh, I wondered if you wanted to add to this.

Absolutely, Kai. I echo everything that Jack and Saba have said and couldn't agree more. I just want to add that complexity isn't just technical; it's sometimes also organizational. Right?
When you have multiple cloud providers, multiple third parties, and internal teams all touching the same service, the accountability piece can become really blurred. And that's where the slowness in incident resolution happens. Right? So mapping these dependencies and defining proper accountabilities is clearly as important as the technology piece itself. And to Saba's point, this is something that needs to be taken care of at the preparedness stage, rather than when you are in an actual incident, to help improve the recovery time or response rate.

I think that's absolutely right. As you and Jack highlighted, the complexity of the third parties and the different people involved adds another layer over and above the code and the environments. And I wonder, Sean, if you wanted to add to that as well.

Yeah, thanks, Kai. Similar to what everybody else has said, and depending on how long you've been doing this, you've definitely seen a change. Previously, most everything was located in your own environment, and you had a lot of visibility and control of that. Today you have hybrid clouds, multiple clouds, on-prem, off-prem, and maybe half or more of your solutions are third-party SaaS solutions. Being able to have the right visibility and control of those environments, what's happening in them, and the dependencies between all of them creates a lot of complexity. And complexity creates risk. With that risk: do we understand what's going on? Can we locate what's actually causing our challenge? Do we know who's responsible and accountable for those things when they do happen? All of those things can impact your ability to understand why things are happening, get to resolution, and get services back up and running. A lot of similar things to what most people have been saying today.

Thanks very much, Sean, very useful insights. Now let's switch from that question, and I really appreciate those insights, to question two: what are the core challenges that panelists have seen the industry face when an incident occurs, moving on from that preparedness point that was well made, and how does that really affect overall MTTR? Continuing on, Sean, I wondered if you wanted to start on this one.

Yeah, sure. So really this is a continuation of that last question, because you've got this lack of visibility sometimes and also, a little bit of what Shatresh said, a lack of accountability. Oftentimes you don't have the visibility to say, yes, this is my system, this is my third-party vendor's system. So who should be responsible if it's that system? Do you know which systems are owned by which person? These things are definitely impacted. Sometimes you don't even have visibility, or you don't have control. In our previous environments, where we had all of the infrastructure running in our own location, we could control what changes were happening, we knew what changes were happening, and hopefully we knew who was responsible for them. Today you have third-party vendors, you have a supply chain. So it could be that you have a third-party vendor, and they have their own changes they're making in their environment.
You have a dependency in your stack and your services on that. They make a change, and it impacts you. It wasn't a change that you had planned, it wasn't a change that you were tracking, but nonetheless it still has a very real impact on the products and services that depend on it. So you have to be aware not only of your own environment, but of all your dependencies, what they're doing, and how that impacts you. And then, do you know who to go to when those things happen? That's a lot of the challenge that this kind of complexity in our environments has created when you're trying to get to a response: do we know which vendor is causing it? Do we know what changes occurred so that we can hold that vendor responsible? Because they may say, well, we're fine here, and you're like, no, it does look like you just did an update. We've seen this in the past, for instance, going back to a security vendor who put out a new update and all of a sudden none of your laptops will boot up. Well, I didn't change anything in my environment, but an update got pushed by a vendor to your systems, and now it's having a very real impact on your whole environment and you're trying to figure out what happened. So those are some of the challenges that come up because of these complexities.

I think you're absolutely right. There's an angle of organizations really pushing for speed, agility and rapid releases and deployments, across vendors and your own, and at the same time needing to test how that all combines. The framework you rightly highlighted there, Sean, of visibility, insight and control is one I come back to quite a number of times; it's super important. And I wonder, Jack, if you wanted to add to that.

Yeah, I think the core challenge is actually almost shared by IT as well, insofar as I think culturally we don't recognize that having an adversary is really challenging psychologically. That human element, we always neglect it. I know we talk about it in the sense of wanting to pay more attention, but one of the biggest challenges the industry faces is that people aren't really prepared for an adversary pushing back and actively destroying and dismantling what you've been trying to accomplish. The mental burden that puts on people, and the expectations, is really quite great. I think we've done some fantastic work building IT systems and building processes and teams and governance. And AI, I hope, can take a lot of the decision making away from the teams so that they're able to focus on fewer, better decisions as opposed to so many little manual decisions. So the more good data you can integrate into AI, the fewer decisions your people have to make, and therefore the more longevity they'll have in terms of being able to respond to incidents over and over again. I think the effect it has on mean time to respond is pretty massive, right? The more of those decisions you can outsource to another decision-making entity, the better and longer your people's own decision making will hold up. So I think it's a real opportunity, but it does need careful management that I think most people haven't looked into yet.

I think you're right. The idea that it could be an actual adversary, versus an IT change that's passive and doesn't fight back, an adversary where you make a move and then they make a move, is a very different world.
And I think it can exhaust decision making and incident managers; it's a different world in cyber response. But sorry, Saba, I think I maybe spoke over you adding to that.

Yeah, no problem. I just wanted to add a few comments. Thanks, Kai. Sean had a very good point about vendors and SaaS applications and how vendors push changes that impact us. That happened recently, and it was a huge event this year: a lot of people were impacted due to the Microsoft Windows blue screens. So one best practice, one thing we could learn from that and implement within our organizations to protect them, is to have that change management process and understand those dependencies. Ensure that whenever a vendor is going to upgrade their systems, you know about it, and before the change reaches your production system you run the test in a lower environment and make sure your other functionality is not impacted, and only then let them apply it to production. That's one way to be prepared, protect your organization, not be surprised, not run into an incident and end up scrambling to figure out what it is and taking longer than expected to recover.

Thanks, Saba, very useful; these insights are incredibly valuable. And I wonder, Shatresh, if you wanted to add.

Absolutely. I completely resonate with Jack's thoughts here. The challenge that I see most often is usually around decision latency. We all have the right data, but it's stuck in silos, and oftentimes a lot of time is spent just finding the right SME to talk to rather than acting on that particular incident. The more we can centralize our intelligence and reduce these manual handoffs, and that's where the role of AI is critical, the faster we can improve our MTTR.

I see, right. Yeah, taking the knowledge out of what are often very hard-to-get-into knowledge articles, and keeping them updated, and things like that; maybe in some of the later questions, as we dig further into AI, we can bring some of those out even more, Shatresh. Completely agree on those bottlenecks. Okay, so let's move on from question two to question three. Question three is: what best practices could an organization implement for managing an incident when they are just starting out or revising their processes? How can they learn from past incidents and not constantly start from scratch, so each incident isn't Groundhog Day every time? And, Saba, I wondered if you wanted to start us off on this one.

Yes, Kai, thank you so much. A very important question. Again, from a business continuity and cloud resiliency angle, I think there are some preparedness steps we can take to ensure the same incidents do not keep occurring in production. Like I was mentioning before, build in that design for continuity right from day one. The traditional approach is always: build a feature, deploy it, run it, and then an incident occurs. Then you're learning what the incident is, you're learning what the root cause is, and then resolving it, which greatly increases the mean time to recover. So design it for failure; design it from day one for continuity purposes.
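As a concrete, hedged illustration of the "design for failure" point Saba makes here (and the circuit-breaker strategies she mentions a little later), here is a minimal Python sketch of a circuit breaker around a third-party dependency call. The class, thresholds, and the usage names in the comments are illustrative assumptions, not a specific library or product.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: stop calling a failing dependency and fail fast
    instead, giving it time to recover (illustrative thresholds only)."""

    def __init__(self, failure_threshold=3, reset_after_seconds=30):
        self.failure_threshold = failure_threshold
        self.reset_after_seconds = reset_after_seconds
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed (healthy)

    def call(self, func, *args, **kwargs):
        # If the circuit is open, fail fast until the cool-down period has passed.
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_after_seconds:
                raise RuntimeError("Circuit open: failing fast while dependency cools down")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.time()  # open the circuit
            raise
        self.failures = 0  # a success resets the failure count
        return result

# Hypothetical usage: wrap a call to a third-party dependency.
# breaker = CircuitBreaker()
# breaker.call(payment_gateway.charge, order_id="123")
```

The design choice is simply that a degraded dependency is isolated quickly rather than dragging the whole service down, which is the behaviour the game-day testing described next is meant to verify.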
Make sure in every design review you're asking certain important questions. For example, if you are in the public cloud and you're using a multi-tenant architecture or a multi-account strategy, and you have multiple applications deployed there and this critical application is going in there too, make sure you understand the quotas and limits so you don't run into resource contention issues. In your design reviews, ask questions like: what's your quota? Understand the infrastructure, understand your fault domains, and make sure the infrastructure is highly available and prepared for disaster recovery. That's one thing you can do in the design phase. Then, once the design is approved, you go and build. Make sure that along with all your functional testing you're also performing resiliency testing in a prod-like, non-prod environment. This is where, again, we can collaborate: incident response and continuity teams can work together to simulate some realistic failures in the non-prod environment, have the incident responders respond to them as if it's a real event, and see whether they're seeing those alerts on their dashboards. If you had circuit breaker strategies, are those working as expected? You'll identify a lot of things: simulating realistic failures, running some sort of game day, learning from the failures, and fixing them in non-prod so those same incidents do not occur in production. Those are some best practices, and I would say perform this continuously, at every change, for your critical applications. I think the second part of the question was how we learn from past incidents. Again, other incident response experts can add to this, but I think building a library of all the different incidents, building up the patterns, and ensuring we have that knowledge will help us recover quickly should the incident occur again.

Thanks, Saba, very useful. I completely agree. I was with one of our major customers just earlier in the week at one of their tech summits, and they were telling us about what you hinted at there: in a major incident, or even lower-level incidents, teams can get dashboard fatigue, check the logs, check the dashboard, check that ops tool, and having to come up to speed very quickly on that data set can be very hard. So preparation around those common activities, and moving from "just check the dashboard" to something that might automatically act, is very important. And I think you also touched on something that will come up in later questions: to learn from the incident you need to lay down good data, the training tokens if you will, for the humans and our AI agents to learn from to get better next time. You certainly need to make sure it's not just five superficial points captured afterwards; the real data trail is captured, so we know how we really tackled the incident, we get better next time, and it isn't Groundhog Day. But let me move on and ask Shatresh if you wanted to add to that.

Absolutely, Kai. You know, I would also reiterate, and I can't emphasize enough, the value of structured post-incident reviews.
That's something that we've seen work wonders, because it's easy to focus only on the fix, but capturing and institutionalizing the lessons learned is what prevents us from starting from scratch each time. And the key, and that's why I focus on the word structured, is embedding these learnings back into playbooks and automation, because that's where you create a real feedback loop, make those incremental improvements with each incident, and codify the learnings in your frameworks.

Thanks, Shatresh. With that, Sean, I wonder if you want to add.

Yeah, some great things have been said here. There are some basic principles here, some of which could be applied to how people in professional sports work, right? Have a plan, work the plan, practice the plan, so that the next time it happens, you're prepared. Do you have an incident response plan? Do you have playbooks? Do you know who's responsible and accountable for what types of things, so that when it happens you're not trying to figure it out on the fly, repeating the process every time, trying to work out, all right, what do we do here? A lot of the time you may not have a plan, you may not have updated playbooks. Obviously those playbooks are meant to help ensure that you've learned from your last incident and you've got a playbook for the next one. And you should have access to that, with people who know how to run those playbooks appropriately. Then you just practice. Maybe not for minor incidents, but for major incidents you need to practice, and make sure that when it really counts people know exactly what they're supposed to do and who's supposed to do it.

I agree, Sean. That mobilization, getting the people responsible for different components involved in the right way, and making sure they're loaded with the right data, is super important to affect overall MTTR. And I wonder, Jack, if you had some thoughts to bring here.

Just adding on to that, really: as uncomfortable as it makes people, I always think simple plans are best. Larger organizations increasingly want to have more complicated plans, and they think that means they will be well prepared because they have a good plan. But as soon as you start having more than a few plans or specific scenarios, you will just lose focus. And you won't be training people on how to respond to incidents; you'll be training people to follow the playbooks. So absolutely have a plan, but my preference is always to have a simpler plan and high-quality people rather than high-quality plans and ill-prepared people.

I think the false summit of saying you could paper over every scenario with a plan, with something like eight hundred thousand different DR scenarios, is a horrendous way to go. How do you get that simplicity, but with the ability to execute with a little bit of variability? That's exactly right. So that was question three, and I'm keen to move on and share more of these very useful points from the panelists. Question four is: how do you think AI and automation can help improve managing and resolving tech incidents, and where are you on your AI and automation journey? So we're starting the more AI-themed questions now.
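Before the panel turns to the AI questions, here is a hedged sketch of the structured post-incident record that Shatresh and Sean describe above: capture each incident in a consistent, machine-readable form so lessons feed back into playbooks rather than being lost. The field names and example values are illustrative assumptions, not a standard template.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime
import json

@dataclass
class PostIncidentReview:
    """One structured record per major incident, so the learning is reusable."""
    incident_id: str
    summary: str
    detected_at: datetime
    resolved_at: datetime
    root_cause: str
    contributing_factors: list = field(default_factory=list)
    actions_taken: list = field(default_factory=list)      # what actually resolved it
    follow_up_items: list = field(default_factory=list)    # playbook/automation updates
    playbooks_updated: list = field(default_factory=list)  # which runbooks were amended

    def mttr_minutes(self) -> float:
        return (self.resolved_at - self.detected_at).total_seconds() / 60

# Hypothetical example entry for an incident library.
review = PostIncidentReview(
    incident_id="INC-0042",
    summary="Checkout latency after vendor update",
    detected_at=datetime(2024, 5, 1, 9, 0),
    resolved_at=datetime(2024, 5, 1, 10, 30),
    root_cause="Unannounced third-party SDK update changed timeout behaviour",
    actions_taken=["Rolled back SDK", "Raised vendor ticket"],
    follow_up_items=["Add vendor-release monitoring", "Test SDK updates in non-prod first"],
    playbooks_updated=["checkout-degradation-runbook"],
)
print(f"MTTR: {review.mttr_minutes():.0f} minutes")
print(json.dumps(asdict(review), default=str, indent=2))  # feed into the incident library
```

Kept in this form, a library of such records is what lets patterns ("this looks like a previous incident") be found later, by people or by AI.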
We've hinted at them earlier. So kicking off on this, could you start us off, Sean?

Yeah, certainly. Thanks, Kai. This is actually something where we're just early in the process of applying some of the benefits of GenAI. We've done a lot of experimentation, and some implementations of GenAI, and this is an area, security incident response, where we're seeing really good value and potential. Some of the areas we're looking at, as an example: in most environments you have maybe some kind of SIEM that's aggregating logs and events from a lot of different things happening in your environment. Well, that's great, but that's only one data point, because a lot of the time you can't actually get logs from your third-party vendors, so that still limits your visibility. In addition, as we talked about from a supply chain management perspective, sometimes changes are happening that you don't have logs for. You don't have visibility into those things; they're not part of your change control. It might have been just an email from your vendor saying, hey, we just pushed a new upgrade or a new version. And that's not actually being tracked by anyone, because you're getting tons of emails from the multiple vendors that you have, and nobody's really tracking them. So if you think about it, you can get an agent that is collecting information: email updates from multiple vendors saying we just pushed out this version or this update; your change management log; the logs from your monitoring solutions and other tools you use to monitor and track your events; your KBs. Then when something happens and you ask, hey, what's changed or what's going on here, you can actually see that this vendor just pushed an update; otherwise you don't have a good way to aggregate these things and make sure you understand what's going on. So have something that can actually answer questions like "what's changed?": ask it a simple question, what vendors have pushed updates, what happened, did someone make a production change last night? And potentially have that agent join an incident response call as a responder that has knowledge of your playbooks, of the change management log you just had, of your logs, of your third-party vendor updates. So when you're trying to go through this and ask what the possible reasons are for what's going on, you simply ask it a question and it can answer quickly, or tell you who's responsible for this, who owns this solution. You might have that documented somewhere, maybe in a playbook or a change management log, but this gives you access to that information very quickly. And lastly, I would say, you have a plan. And, like has been said by a few of my fellow panelists here, rather than having somebody remember, oh, we've got to tell this person, we've got to communicate this, we've got to run this analysis: if you can have the AI automate a lot of those things and reduce the amount you have to think about and do while you're trying to get to the root cause and resolve it, then that's going to reduce the overload and the error, as well as the time to understand and get resolution on these things.
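As a hedged illustration of the aggregation Sean describes, here is a minimal Python sketch that pulls vendor update notices, internal change records, and monitoring events into one timeline so a responder (or an AI agent) can answer "what changed in the last 24 hours?". The data sources, fields, and example entries are hypothetical placeholders, not any specific product's API; in practice the events would be parsed from vendor emails, your change-management tool, and your monitoring or SIEM exports.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ChangeEvent:
    occurred_at: datetime
    source: str       # e.g. "vendor-email", "change-management", "monitoring"
    system: str
    description: str

def what_changed(events, window_hours=24, now=None):
    """Return recent change events, newest first, for incident triage."""
    now = now or datetime.now()
    cutoff = now - timedelta(hours=window_hours)
    recent = [e for e in events if e.occurred_at >= cutoff]
    return sorted(recent, key=lambda e: e.occurred_at, reverse=True)

# Hypothetical inputs from the different feeds Sean lists.
events = [
    ChangeEvent(datetime(2024, 5, 1, 2, 15), "vendor-email", "EDR agent",
                "Vendor pushed agent update 7.14 to all endpoints"),
    ChangeEvent(datetime(2024, 5, 1, 6, 40), "change-management", "payments-api",
                "Planned config change CHG-1182 applied"),
    ChangeEvent(datetime(2024, 4, 28, 11, 0), "monitoring", "checkout",
                "Latency alert cleared"),
]

for e in what_changed(events, now=datetime(2024, 5, 1, 9, 0)):
    print(f"{e.occurred_at:%Y-%m-%d %H:%M}  [{e.source}]  {e.system}: {e.description}")
```

The same consolidated timeline is what an agent on the bridge would query; the value is in the aggregation, not the model.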
No, that's absolutely right. As we said earlier, the nature of the complexity here is bewildering, and the agentic visibility you're talking about, Sean, makes total sense. I think you also highlight a very good point, which is that there are twin aims people drive at here: you can aim at automating the administration of the major incident management process itself, but enabling the responders is also a vital component of how you build a great capability to reduce MTTR. So you highlight some good points there. I wonder, Jack, what you would wish to add to that?

Yeah, I imagine that in practice AI is going to have a real challenge, because it's not just raising the bar for the response teams, it's just as much raising the bar for the threat actors they're engaging with. At the moment I'm really visualizing it as the next technology layer, like the SIEM or the SOC or the SOAR or the EDR. Ultimately, I suspect it won't actually simplify management in practice, because it's just raising the level of how complex the environment is, and you'll have to keep up, right? It's like when everyone has moved onto a car, you can't stay with your horse anymore, because it simply isn't comparable to the rest of the environment. So AI faces this; it's almost like a new baseline. I think organizations cannot not use it, whether you implement it yourself or whether you go to a software vendor or an MSSP who's going to provide it for you. It's going to be critical regardless. So I don't think people should think of it as optional, whether they wish to engage or not. It's like having a computer: you have to use it. We're, much like every other organization, nascent in our use of AI. There's probably a mixed mode of AI use cases and the like across the organization. But from my perspective, I use it to gather as much data as possible to inform my decisions. I don't use it yet to make any of the decisions for me. It gives me almost a sense of assurance; I can check my own thinking. So I have an opinion about an incident and how we manage it, then I follow up with whatever data I'm able to glean, and I use it as an analyst next to me to support my decision making. I'm hesitant to have it overtake my decision making until it in fact makes better decisions than me, and then I'll gatekeep it so I don't get fired. So we'll see how it works.

Yeah, we don't know, but it should be a very interesting space. Again, I really like your analogy there, because just as we talked about earlier about the burden of an actual threat actor, over and above somebody who made an infrastructure change that caused a major incident, when there is a threat actor on the other side and it's moving, and then it steps up again to be a threat actor with agents operating, it moves at that car speed versus the horse speed. You either step up internally or you get mismatched, so there is a race to get that done. That's a very good way of looking at it, I think, Jack. And I wondered, Shatresh, if you wanted to add some thoughts.

I would, again, agree with what everyone else has said here. I really think the real power of AI in incident management is about scale.
Humans can only triage so many alerts, but AI can cluster thousands of data points, enrich them with appropriate context, and surface the most likely root causes in seconds. That's where you can drastically reduce the time wasted on false positives. And I would say that automation is equally critical. We've seen huge gains in automating diagnostics, like pulling logs and service dependencies the moment an incident is created. That gives responders a fifteen-minute head start before they even join a bridge. But again, I'll caution that we are in very early but exciting stages, piloting AI-assisted knowledge retrieval and automation for enrichment. The human element is always there: in a tight regulatory environment, the focus is on safe adoption that reduces MTTR without creating new risks. That's where the regulators focus the most.

Yeah, that is the tricky one: balancing the capability against the risk. But I think you also raise a good point on AI and scale. As per Jack's earlier comment, simple plans are important, but you can have a simple plan where you still kick off two hundred very cheap and safe automated activities, which may engage AI agents or automation, at the start of an incident; things that really inform the responders and perhaps have some chance at a self-heal. You just couldn't get enough fingers on keyboards to do anything similar in the past, so there is certainly some potential there. And, Saba, I wondered if you had some thoughts to add on that.

Sure, Kai, thank you. I concur with everybody's thoughts, but I want to complement Jack's and Sean's earlier points about integrating all the teams and ensuring business units and incident responders have all the information to respond to the incident, meaning they understand the dependencies, understand all the logs the application is generating, and so on. Integrating business continuity teams, IT, and business units is important, because continuity teams are working closely with business units in developing the plans and ensuring those plans are up to date; it's basically a living document. And as we work with those IT teams, we're also working with them, from a technology resiliency standpoint, to generate the right metrics. All that data can then be used by tools that already have AI integrated into them to detect anomalies, to detect configuration mismatches between two availability zones, data centers, or regions, and to quickly identify configuration issues. We also have tools that identify new infrastructure components and quickly help us prepare a plan to recover that component should it fail for whatever reason. Using those sorts of tools, being prepared, and continuously testing will avoid incidents, and even if an incident does occur, teams will be better prepared to handle it and reduce the mean time to recovery.

Thanks, Saba. Fantastic points. So let's get on to question five; I'm really enjoying the insights being shared.
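Before question five, a hedged sketch of the automated enrichment Shatresh mentions above: the moment an incident is created, gather recent logs and the service's dependencies and attach them to the ticket so responders join the bridge with context already in hand. The functions below are stand-ins for whatever log store, CMDB, and ticketing system an organization actually uses; names and return values are illustrative only.

```python
def fetch_recent_logs(service: str, minutes: int = 30) -> list[str]:
    """Placeholder: query your log platform for the service's recent errors."""
    return [f"{service}: sample error line"]          # illustrative only

def fetch_dependencies(service: str) -> list[str]:
    """Placeholder: look up upstream/downstream dependencies in your CMDB."""
    return ["auth-service", "orders-db"]              # illustrative only

def enrich_incident(ticket: dict) -> dict:
    """Attach diagnostics automatically so responders start with context."""
    service = ticket["affected_service"]
    ticket["recent_logs"] = fetch_recent_logs(service)
    ticket["dependencies"] = fetch_dependencies(service)
    ticket["enrichment_note"] = (
        "Auto-collected at incident creation; responders should validate before acting."
    )
    return ticket

# Hypothetical incident record being enriched on creation.
incident = {"id": "INC-0101", "affected_service": "checkout-api"}
print(enrich_incident(incident))
```

The point is the head start: the enrichment is cheap, safe, and runs before anyone joins the call, while the decisions still stay with the humans.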
So question five is: what factors or challenges have to be taken into consideration when implementing some of the AI components we're talking about in this space? And on question five, I wonder if I could start with you, Jack.

Yes. I really think about this in three ways, right? We've got the quality of the data, which is the technical piece. We've got contextual relevance, which is your business and how the business looks at it. And then there's human trust, which is probably the big stumbling block, and that's your human element. So just to start on data quality: your AI system is probably going to be trained on fairly generic threat data, and so there's a big risk that it might misclassify or overlook your unique enterprise threat picture. Organizations probably like to think they have a bit more of a unique profile than they do in practice, but the primary issue is that your IT infrastructure, and the vulnerabilities it manages, almost certainly does look completely unique to your organization. So your AI, if we're going to implement it, is going to need to know exactly what your IT looks like and where your vulnerabilities are so that it can accurately reflect that. Fairly standard and straightforward, I think: the higher the quality of data you have, the better the decisions you can make. One thing people should consider, though, is that AI doesn't actually need to make good decisions. So don't think of the minimum standard as good-quality data for good decisions; it simply needs to make fewer bad decisions than the people already make. People have a proclivity to assume that they make good decisions and thus that AI needs to make excellent decisions. That isn't the case; it just needs to make fewer bad decisions. Difficult for people to accept and monitor, but I think quite important, because when you've managed enough incidents, you soon recognize that people's decision-making capability slips under pressure. The second item, then, is that contextual relevance: your business, what it does, and where the risks are. Anyone who's done the CISM, the Certified Information Security Manager course, fully recognizes that the business-focused, business-first principle is as true for AI as it is for anything else. So understand, to that earlier point, what your business does and which systems matter. Your AI isn't just resolving technical issues; it is in fact resolving how to restore business services, how to keep the business functioning. That's the red line, not your IT failing. Your IT could fail but your business keeps running, and actually that is an acceptable outcome for many people. Finally, and I think the biggest challenge, is that human trust. When you're trying to implement AI, make sure your people are on board. I don't think the real challenge is technical; it's fairly easy to bolt some AI onto your system. It's the cultural thing: how do you implement AI and how do you use it? People still think of it as just a good Google. They don't think of it as a team member who you need to train and build up, and who, if you don't manage them properly, can cause damage, but who, if you look after and nurture them, will work very well for you. But there's a lot of underpinning infrastructure under that.
So I think of AI as like a knife, right? It needs to be aimed and wielded properly; otherwise you just slip and cut off your finger, and that's still a bad outcome. It doesn't matter how good your AI is if you're using it in the wrong way. So those are the challenges from my view.

Thanks, Jack; that's a good analogy in terms of how to stay safe in that area. And I wondered, Saba, if you had some thoughts to add to that.

No, I can't add to those thoughts, Kai; no new thoughts on this question.

No worries at all. And, Sean, if you wanted to add.

Yeah, I'm just going to echo some of the things Jack said and maybe add to them. Whenever you think about AI and the challenges people have, whether they're successful or not, and as it relates to this particular use case or topic we have today, you always have to start with governance. Do you have the right controls in place to safely use it? Do people know how it can be used? Do you have the processes to back that up? Then, do you have the right skills? Do people understand how these things work? Have they configured it properly? And then, do you have the right data? We often use the phrase "garbage in, garbage out": it's only as good as the data, whether you're using generic public data or your internal data. If your internal data doesn't have the right playbooks, the right processes, the right procedures, the KBs, then you're not going to get good outputs. If you don't know how to configure it so that it isn't hallucinating, you're not going to get good outcomes. If you don't have the right skills to manage it, it's just not going to happen. And if you don't govern it properly, then bad things are going to happen no matter how good the system or the data is. So having those controls in place, having the right skills, and making sure the data you're relying on is accurate are the key principles.

Thanks, Sean. That's a great framework and lens to look at this through. You raise a good point on hallucinations: it's a difficult technology to work with in terms of being non-deterministic, where you may not get the same answer twice, so the governance around that is especially important to get right. So why don't we move on to question six, which is about how these challenges to AI adoption that we just chatted through can be overcome. And, Shatresh, it'd be great to start with you on this one.

Thank you. I think that's the million-dollar question, isn't it? I'll build my thoughts on some of the challenges highlighted in the previous question, because this logically follows the flow there. If I were to summarize, when I look at AI adoption in incident response, the struggle usually comes down to three things: trust, integration, and culture. And the way we overcome these is through an incremental and disciplined approach. First, I always recommend starting small and showing value early. Rather than trying to automate the entire incident life cycle, we can pick use cases that are low risk but high impact. In our case, for example, using AI to deduplicate alerts or enrich an incident ticket with logs and recent changes really helped, because these quick wins reduce noise, save responders time, and immediately build confidence in the end users and build that trust factor. Right?
Secondly, and this is really important, and I've emphasized it in the previous parts as well: we need to keep humans firmly in the loop. AI should never be a black box making final decisions; it should act as an assistant. If AI suggests a probable root cause or a runbook, responders should validate and execute. That balance of automation plus human judgment not only builds trust within the team, it also satisfies a lot of regulatory expectations around accountability. Thirdly, I heard a lot of thoughts around data, and I think another key area is explainability. Poor and unstructured data will really limit the value of AI, so having clean, better-quality data, whether it's service maps or dependency data, is foundational. But when the AI does make a recommendation, people need to understand why. For example: this looks like incident X from three months ago because of such-and-such pattern. That level of transparency builds adoption much faster than a black-box answer. A couple of other things I'd touch upon: one piece is integration. AI has to fit into the tools we are already using to support our incident management response, whether it's ServiceNow, Fusion, or any other eGRC platform. If we try to create a separate silo, that's a recipe for disaster in terms of widespread adoption. And finally, we can't overlook change management. Often the resistance is less about the technology and more about the mindset. Giving responders proper training, letting them provide feedback on the AI outputs, and celebrating the early wins, even if it's saving fifteen or twenty minutes of triage, make a huge difference. In the interest of time, I'll summarize: start small, keep humans in control, prioritize clean data and explainability, integrate seamlessly, and invest in culture change. That's the recipe for overcoming the common challenges and making AI a trusted enabler rather than a perceived threat.

That's great. And I wonder, Sean, if you had thoughts to add.

Sure, just briefly. I one hundred percent agree with you, Shatresh, on all of those things. In short, governance is going to be super important. Do you have your guardrails in place, and do people understand how to use and follow them? One of the biggest things that prevents people from adoption is whether they have good data and good guardrails, so that they feel this is a safe process they can rely on. If you don't have those things in place, you're just not going to have very good success.

I appreciate that, Sean. And I do realize we've got six minutes left for getting some further good insights; I think we've had a whole host of great insights on those first six questions. But, Saba, I wonder if you had a couple of quick-fire insights to share on question six.

Sure, I'll quickly answer this question. Again, I concur with all the panelists here. Data is super important. As you all know, the more you practice, and the more you train your AI models in lower environments to analyze logs, the more AI can help you detect weak points, from a resiliency angle again, and recommend the right failure scenarios for you to test further and be prepared for.
And then simulate those outages at scale. Follow a crawl-walk-run methodology. You don't have to go all in, adopt AI everywhere, try to detect every weak point and failure at once, and run into issues. Follow crawl, walk, run.

Thanks, Saba. Over to you, Jack.

Just keep a KPI in mind, right? Focus on your MTTR, but don't just adopt AI for the sake of adopting AI. You've got to make sure it serves your purpose. Like we said right at the start, the environment is really complicated; you need to simplify at all stages. So make sure that what you're implementing is helping you achieve the outcomes you want in the business. Don't just adopt it because you think it looks good, because it can end up being a lot more work and maybe not any benefit.

Very useful, thanks, Jack. So, question seven; we've got five minutes left, so maybe we keep this to relatively quick-fire responses from our panelists, but I'm very grateful for what we've shared today. This is an important question, and the theme has been raised earlier around trust: how do we ensure trust in AI and align it with regulatory requirements within incident response? And I wondered if I could come to you first on this, Sean.

Yes. Similar to what we've been talking about: do you have good data, do you trust the data, do you know where your source of truth is, and are you managing that data? And then, in short, and I think this has been stated before, especially where we are today: always have a human in the loop. Make sure that you have policies on the way you're using AI, that you can verify and validate it's following your standards and policies, and that you always have a human in the loop at the end.

Good stuff. And Saba?

Yeah, that's very important. Again, from a cloud resiliency lead standpoint, I would say effective incident management starts with designing for continuity and recoverability from day one. Understand your design, understand your software and your deployment model well, so that whatever AI is recommending, you can validate it and have the trust to implement those strategies.

Very useful. And Shatresh?

Echoing the earlier thoughts, from a resilience and risk perspective, auditability is the key. Regulators want to see not only what the AI recommended but also how we humans engaged with it. So my two cents would be that keeping a full trail of AI suggestions and the human overrides, along with the appropriate rationale, would really ensure both trust and compliance from a regulatory standpoint.

Thanks, Shatresh, and apologies: I don't think we'll get a chance to get to much of the audience Q&A today. There are some questions raised, and I'll follow up with Kyle on that so we can perhaps share some insights with the audience offline. But I wondered, Jack, if you could bring us home with your final comments on question seven before we wrap.

I was just going to say I agree: keep people at the front. As long as you can demonstrate that there's a person making the final decision at the end, and all the AI is doing is providing guidance and support, then I think you're in a fairly good regulatory position.
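Closing the loop on Shatresh's auditability point, here is a hedged sketch of the kind of trail he describes: log every AI suggestion alongside the human decision and rationale, so there is evidence that a person made the final call. The field names, file path, and example values are illustrative assumptions, and the append-only JSON-lines file stands in for whatever tamper-evident store an organization actually uses.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionRecord:
    """One auditable entry per AI recommendation during an incident."""
    incident_id: str
    timestamp: str
    ai_suggestion: str
    ai_rationale: str      # why the model suggested it (explainability)
    human_decision: str    # "accepted", "modified", or "overridden"
    human_rationale: str
    decided_by: str

def log_ai_decision(record: AIDecisionRecord, path: str = "ai_audit_trail.jsonl") -> None:
    """Append the record as one JSON line to an append-only trail."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Hypothetical entry: the AI suggested an action, the responder adapted it.
log_ai_decision(AIDecisionRecord(
    incident_id="INC-0101",
    timestamp=datetime.now(timezone.utc).isoformat(),
    ai_suggestion="Restart checkout-api pods; pattern matches INC-0042",
    ai_rationale="Similar error signature and a recent vendor SDK update",
    human_decision="modified",
    human_rationale="Restarted one pod first to confirm before a full rollout",
    decided_by="on-call incident manager",
))
```

A trail like this supports both the regulatory expectation (a human made the final decision) and the feedback loop the panel described: reviewing where suggestions were overridden is itself training data for improving the assistance.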