The Hidden Costs of Downtime in Financial Services
We surveyed executives across the Global 2000 to explore the depths of downtime: what it costs, what causes it, and what leading organizations are doing right. Here, we highlight findings from respondents in the Financial Services industry.

Downtime can come from anywhere
Downtime isn’t just an ITOps or engineering issue. It’s a security one as well. Understanding the most common culprits can help companies manage incident response and possibly prevent lightning from striking twice.
The Global 2000 confirmed downtime’s dual origins: 56% of downtime stems from security incidents, while 44% comes from application or infrastructure issues.
Cybersecurity-related human error is “often” or “very often” the offender, according to 55% of Financial Services organizations. Other top cybersecurity-related downtime causes include:
- Malware attacks (43%)
- SaaS/third-party application issues (28%)
Cybersecurity-related human error results in the most downtime — and takes the longest to detect and remediate.
Mean time to detect (MTTD): 18 hours
Mean time to recover (MTTR): 61 hours
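These metrics are typically derived from incident timestamps: MTTD measures how long an issue goes unnoticed, and MTTR how long until service is fully restored. The sketch below is a minimal illustration (not the survey's methodology) using hypothetical incident records and field names, chosen so the averages echo the figures above.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: when the issue began, when it was detected,
# and when service was fully restored (ISO-8601 timestamps).
incidents = [
    {"start": "2024-03-01T02:00", "detected": "2024-03-01T20:00", "resolved": "2024-03-03T15:00"},
    {"start": "2024-04-12T09:30", "detected": "2024-04-13T03:30", "resolved": "2024-04-14T22:30"},
]

def hours_between(a: str, b: str) -> float:
    """Elapsed hours between two ISO-8601 timestamps."""
    return (datetime.fromisoformat(b) - datetime.fromisoformat(a)).total_seconds() / 3600

# MTTD: average time from the start of an incident to its detection.
mttd = mean(hours_between(i["start"], i["detected"]) for i in incidents)

# MTTR: average time from the start of an incident to full recovery.
mttr = mean(hours_between(i["start"], i["resolved"]) for i in incidents)

print(f"MTTD: {mttd:.1f} hours, MTTR: {mttr:.1f} hours")  # MTTD: 18.0 hours, MTTR: 61.0 hours
```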
Tackling downtime with smart technology investments
Financial Services organizations spend a combined $30.7M on cybersecurity tools ($16.3M) and observability tools ($14.4M) annually — less than the combined average across all industries ($43.3M).
No other technology has made as much of a splash recently as generative AI: 67% use discrete generative AI tools (e.g., ChatGPT) to address downtime.
Meanwhile, 56% use generative AI features embedded into existing tools, such as AI assistants that help write queries and troubleshoot.
Financial Services institutions’ toughest obstacles to managing downtime
- Data sprawl (79%)
- Too many false positives/alert fatigue (51%)