DORA has quietly done something important: it turned “vendor management” into a front-office availability problem.
The European Supervisory Authorities (ESAs) have now designated the first set of critical ICT third-party service providers (CTPPs) under DORA, and ESMA has flagged the designation as a key step for market governance.
This is the regulator putting names on the dependency reality we all already live with.
So what?
If you run a trading desk, you don’t experience outages as “ICT incidents”.
You experience them as:
no prices
no orders
no hedges
no client comms (or no recorded comms)
no access (SSO/MFA issues)
no visibility (monitoring and incident tooling also impacted)
And recent events keep proving the point:
Bloomberg Terminal outage (May 2025) disrupted access to live pricing and delayed sovereign debt auctions.
Cloudflare outage (5 Dec 2025) briefly hit major services including LinkedIn and Zoom.
Different providers, same lesson: shared infrastructure creates shared failure modes.
What does “CTPP” actually change?
Two big shifts.
1. Oversight stops being theoretical
CTPP designation is the mechanism that brings certain ICT providers into direct EU-level oversight under DORA’s Oversight Framework. That’s the whole point of the list.
2. Expectations harden for the firms that rely on them
Even if the regulator is “overseeing the provider”, you still own your operational resilience.
In practice, this accelerates pressure on:
contracts (audit/access rights, subcontractors, resilience obligations)
incident readiness (notification timelines and coordination)
concentration risk (where you have no meaningful alternative)
exit (tested, evidenced, time-bound)
The awkward bit: your execution stack is intertwined
Most firms still think in applications and vendors.
Regulators think in services and systemic dependency.
Trading reality is worse. Your dependencies cross-cut everything:
OMS/EMS
market data and pricing
venue connectivity
identity and access management
desktops/VDI
cloud regions and core services
comms, recording, surveillance tooling
So the right unit of analysis isn’t “System A” or “Vendor B”.
It’s the trading workflow.
The only approach that scales: journey-first mapping
If you start with org charts, you’ll produce a diagram that looks professional and tells you nothing when something fails.
Start with execution journeys instead:
RFQ to fill (credit)
streaming / axes workflow
algo order lifecycle
hedge workflow
price discovery and validation under stress
allocations and post-trade exception handling
Then, for each journey (see the sketch after this list):
map the dependencies end-to-end
define failure modes (down vs degraded)
set RTO/RPO by workflow step
document fallbacks (manual, alternate provider, or stop-trading conditions)
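Here's that checklist as structured data instead of a slide. A minimal sketch in Python, assuming nothing about your tooling; every provider and parameter name is illustrative:

```python
# Minimal journey model: each workflow step carries its own dependencies,
# recovery objectives, and a fallback. All names here are illustrative.
from dataclasses import dataclass, field


@dataclass
class Step:
    name: str
    dependencies: list[str]   # providers/services this step needs to run
    rto_minutes: int          # how long the desk can tolerate this step being down
    rpo_minutes: int          # how much data loss is acceptable at this step
    fallback: str             # manual workaround, alternate provider, or "stop trading"
    survives_degraded: bool = False  # can the step limp along on stale/partial data?


@dataclass
class Journey:
    name: str
    steps: list[Step] = field(default_factory=list)


# Example: an RFQ-to-fill journey for credit, expressed step by step.
rfq_to_fill = Journey("RFQ to fill (credit)", steps=[
    Step("price discovery", ["market_data_vendor", "pricing_engine"],
         rto_minutes=5, rpo_minutes=0, fallback="indicative levels via voice"),
    Step("quote to client", ["rfq_venue", "sso_provider"],
         rto_minutes=10, rpo_minutes=0, fallback="bilateral chat on a recorded line",
         survives_degraded=True),  # can quote wider on stale data; desk's call
    Step("execution and booking", ["oms", "cloud_region_eu"],
         rto_minutes=15, rpo_minutes=5, fallback="manual ticket, re-key within RPO"),
])
```

The point isn't the syntax. It's that down vs degraded, RTO/RPO, and the fallback live on the workflow step, not in a vendor spreadsheet.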
If you can’t simulate a provider outage, you don’t have a plan. You’ve got a hope.
And if your “exit plan” is a PDF, it’s not a plan, it’s stationery.
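Simulation doesn't have to mean chaos engineering on day one. A tabletop check that walks the journey model is a start. A sketch, reusing the Journey and Step model from the previous snippet:

```python
# Tabletop drill: given a provider outage, list every impacted workflow step
# along with its recovery objective and documented fallback.
def simulate_outage(journeys: list[Journey], provider: str,
                    degraded_only: bool = False) -> None:
    for journey in journeys:
        for step in journey.steps:
            if provider not in step.dependencies:
                continue
            if degraded_only and step.survives_degraded:
                continue  # step limps along on stale/partial data
            print(f"{journey.name} :: {step.name} impacted "
                  f"(RTO {step.rto_minutes}m, fallback: {step.fallback})")


# Drill: "market_data_vendor is hard-down for 30 minutes". Who hurts, and what do we do?
simulate_outage([rfq_to_fill], "market_data_vendor")
```

Run that for each CTPP-grade dependency and you have the skeleton of a drill programme. Steps whose only fallback is "stop trading" are your concentration-risk shortlist.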
What “good” looks like (and what regulators can actually believe)
A credible resilience posture in 2026 looks like:
10 journey maps that reflect how the desk trades
a dependency graph tied to those journeys, not a vendor list (sketch after this list)
RTO/RPO by workflow, signed off by the business
at least 3 scenario drills run and documented (with actions closed)
an evidence index that makes it easy to prove control fast
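On the dependency graph: inverting the journey model gives you the view supervisors actually probe, which providers sit under which workflows. Another sketch on the same illustrative model:

```python
# Invert the journey model into a provider -> journeys index, so "if provider X
# fails, which workflows are hit?" is a lookup, not a research project.
from collections import defaultdict


def build_dependency_index(journeys: list[Journey]) -> dict[str, set[str]]:
    index: defaultdict[str, set[str]] = defaultdict(set)
    for journey in journeys:
        for step in journey.steps:
            for dep in step.dependencies:
                index[dep].add(journey.name)
    return dict(index)


# Providers that appear under many journeys, with no real alternative in any
# step's fallback, are your concentration risk on one screen.
for provider, hit in sorted(build_dependency_index([rfq_to_fill]).items(),
                            key=lambda kv: -len(kv[1])):
    print(f"{provider}: {len(hit)} journey(s) -> {sorted(hit)}")
```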
Not perfect. Just real.
Where FINOV8 fits
FINOV8’s Execution Resilience sprints focus on the bit most firms don’t have: the trading-native journey model.
We then map it into your existing system of record (ServiceNow / Archer / your current tooling) so it stays alive, auditable, and usable during an incident.
If you’re tackling DORA and the CTPP implications this quarter, message me. We’ll pressure-test the workflows, not the slide deck.

