On April 17, 2026, the Fed, OCC, and FDIC issued revised guidance on model risk management for the first time in 15 years. SR 26-2 replaces SR 11-7, the guidance that has governed how every major bank thinks about models since 2011.
Footnote 3 is the line that proves interesting: "Generative AI and agentic AI models are novel and rapidly evolving. As such, they are not within the scope of this guidance. Nonetheless, a banking organization’s risk management and governance practices should guide the determination of appropriate governance and controls for any tools, processes, or systems not covered in this document. However, the principles described in this guidance apply to traditional statistical and quantitative models and non-generative, non-agentic AI models."
The gap the guidance left
The Fed didn't say agents are unregulated. It said the opposite.
The footnote instructs banks to apply their own risk management and governance practices to any system the framework doesn't cover. The principles of SR 26-2 still apply. The prescription doesn't.
So every bank deploying agentic AI now has to extrapolate a doctrine built for quantitative models into a category the regulator admits is moving too fast to codify.
Human-in-the-loop isn't a feature. It's a control.
The answer to the regulatory gap isn't to pull back on agentic AI. It's to build the scaffolding SR 26-2 would have required if it had covered agents.
That scaffolding has a shape the Fed has been describing since 2011:
- Humans in the loop at every material decision point, with clear accountability
- Ongoing monitoring of agent behaviour against expected outcomes
- Indelible documentation of actions, recommendations, responses, and exceptions
- A model inventory that actually knows what your agents are doing and where
- Independence between the system executing the work and the system validating it
Every item describes an orchestration layer: not an agent, not a chat channel, not a ticketing system. An independent execution environment where humans, automations, and agents operate in the same space, under the same controls, with the same audit trail.
For me that's what SR 26-2 is pointing at, even if it doesn't say so by name.
Cutover’s platform and Respond were built for this
Every task in a Cutover runbook has an owner, a sequence, a timestamp, and a status. Humans are in the loop by design — Major Incident Managers direct, Resolvers execute, executives self-serve updates without pulling the team off resolution.
When an AI agent runs a task, it runs it inside the runbook, alongside humans. Not as something sitting next to the process.
The agent does the work. The next human in the sequence sees exactly what the agent did, when, and with what result.
No agent, automation, or human gets to mark its own homework.
Cutover is agnostic to who — or what — executes a task. It enforces governance on all of them.
Four things follow from that design, each of which maps directly to what SR 26-2 will force banks to evidence:
1. Audit-ready by default. The execution trail is a byproduct of the work, not a reconstruction from chat apps after the fact. When your regulator asks what your agents did during the last P1, you have an answer — not an export.
2. Constrained data access. Cutover consumes enterprise data at the point of execution only. No scraping of proprietary information into a foundation model. Agentic capability without the data exposure risk — a real concern at every Tier-1 we work with.
3. Outcomes analysis built in. Every runbook execution produces structured performance data. The kind of "what the world actually did" record that no log or ticket system captures.
4. Effective challenge at the orchestration layer. Verification doesn't live inside the agent. It lives in the layer above and around it. That's where independence comes from — and that's exactly what the Fed means by effective challenge.
The proof is already in production
This isn't theory.
A leading global bank ran over 100 live incidents through Cutover Respond in their first year and reported a 28% improvement in MTTR. A leading global financial services firm has set a three-year target of 95% of major incidents resolved by lights-out automation and agents — with Cutover as the orchestration foundation.
Cutover spent a decade orchestrating IT Disaster Recovery for the world's largest banks — demonstrating ~53% faster recovery — before applying the same logic to major incident management. The regulatory pedigree is already there.
The takeaway
SR 26-2 is the regulator saying the quiet part out loud. Agentic AI is outside the current framework but still inside the risk perimeter. Banks are going to be judged on the governance they built themselves — not the template they were handed.
Build the governance now. Build it in the orchestration layer.
Don't trust agents to mark their own homework.
See how Cutover Respond puts humans, automations, and agents in the same governed execution environment: cutover.com/book-a-demo
