Blog
November 25, 2025

AI and Cyber Resilience: Rethinking Risk in a Systemically Disrupted World

We’re living in an environment defined by constant change and technological acceleration. Traditional risk frameworks built to manage isolated incidents no longer capture the full picture. Disruption today is systemic, spanning cyber incidents, geopolitical shifts, and supply chain failures that cascade across organizations and industries. 

In my experience, this interconnectedness has changed what a successful operational resilience framework looks like. Today, effective resilience requires seeing how risks connect and change, not just addressing them in isolation. With artificial intelligence (AI) transforming how we see and respond to risk, resilience means combining human judgment with intelligent, adaptive insight.

AI’s Accelerating Impact on Businesses 

AI isn’t new, but the conversation around it has changed dramatically in just a few years. When platforms like ChatGPT launched in 2022, they brought AI out of research labs and into daily business conversations.  

What started as narrow, task-based automation has quickly evolved into discussions of artificial general intelligence (AGI) and, more recently, agentic AI: systems that can act independently, without direct human input.

As someone deeply involved in this space, I see incredible potential but also growing complexity. Innovation opens doors, but it also introduces new risks. To stay resilient, organizations must look ahead, prepare for change, and build the capabilities to adapt quickly.

Data: From Asset to Obligation 

I often tell executives that data is both an organization’s greatest strength and its greatest vulnerability. It’s what drives insight, efficiency, and innovation, but without strong governance, it can quickly become a liability. Poor data controls don’t just compromise trust; they expose companies to privacy breaches, regulatory penalties, and long-term reputational harm. 

AI has made this even more important. It’s not enough to see data as a tool; it must also be treated as a responsibility. Many organizations are already building governance frameworks and training programs to support responsible AI, but data protection and ethics can’t just be compliance checkboxes. They need to be foundational principles and part of how the organization operates every day. 

Evolving Regulations and Growing Expectations 

Regulators around the world are catching up to the pace of AI innovation. In the past two years, we’ve seen the introduction of the EU’s AI Act, the U.S. NIST AI Risk Management Framework, and growing global momentum around transparency, accountability, and system safety. 

These frameworks share a common theme: they expect organizations to manage AI responsibly, with clear oversight and traceability. AI can deliver enormous value, but without strong controls, it also brings risks like bias, hallucination, and unpredictability. Balance is key. 

Five Principles for Responsible AI (FAAST) 

From what I’ve seen in real-world adoption, organizations need practical guidance, not just policies. That’s why I recommend using the FAAST framework: 

  • (F)ree of Bias: Validate data inputs rigorously to avoid misinformation and skewed outcomes. 
  • (A)ccountability: Keep a human in the loop and define ownership for every decision. 
  • (A)lignment: Ensure AI supports your organization’s mission, values, and goals. 
  • (S)ecurity: Apply zero-trust architecture and strong authentication controls. 
  • (T)ransparency: Make systems traceable and explainable so decisions can be understood (a minimal logging sketch follows this list).
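
To make the Accountability and Transparency principles concrete, here is a minimal sketch of decision-level traceability: every AI-assisted output is logged with a hash of its inputs, the model version, and a named human owner. The schema, field names, and usage values below are my own illustrative assumptions, not a standard or any specific product's API.

    # A minimal sketch of an AI decision audit record (illustrative, not a standard schema).
    # Goal: every AI-assisted decision is traceable to its inputs, its model version,
    # and an accountable human reviewer.
    import hashlib
    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class AIDecisionRecord:
        model_name: str
        model_version: str
        input_digest: str    # hash of inputs: traceable without storing raw, possibly sensitive, data
        output_summary: str
        human_owner: str     # the named "human in the loop" accountable for this decision
        timestamp: str

    def log_decision(model_name: str, model_version: str, inputs: dict,
                     output_summary: str, human_owner: str) -> AIDecisionRecord:
        digest = hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest()
        record = AIDecisionRecord(model_name, model_version, digest,
                                  output_summary, human_owner,
                                  datetime.now(timezone.utc).isoformat())
        print(json.dumps(asdict(record)))  # in practice: write to an append-only audit store
        return record

    # Hypothetical usage: an AI-assisted risk triage decision, owned by a named reviewer
    log_decision("risk-triage-model", "2.3.1",
                 {"ticket_id": 4821, "severity": "high"},
                 "escalate to incident response", "j.smith")

Hashing the inputs rather than storing them raw keeps the audit trail useful for traceability without creating a second copy of sensitive data.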

Strategic Steps Forward 

To harness AI safely and reduce cyber risk, organizations need to take deliberate action, including:

  • Adopt Zero-Trust Security: Verify every user and device, every time. 
  • Shift Toward Proactive Defense: Anticipate threats instead of reacting to them. 
  • Use AI in Defense: Let machine learning detect anomalies faster than legacy tools can (see the sketch after this list). 
  • Collaborate Broadly: Share intelligence across sectors and regulatory bodies. 
  • Train Continuously: Build a workforce that understands both the power and the risk of AI. 
  • Keep Innovating: Continue investing in research and development (R&D) to evolve defenses as fast as attackers do. 
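
As a sketch of the "Use AI in Defense" step, the example below trains an unsupervised anomaly detector on simulated login telemetry using scikit-learn's IsolationForest. The feature set and every number are invented for illustration; a real deployment would be built on your own telemetry, tuning, and model choice.

    # A minimal anomaly-detection sketch using scikit-learn's IsolationForest.
    # The "login telemetry" features and values are simulated for illustration only.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Baseline behavior: [logins_per_hour, failed_login_ratio, data_egress_mb]
    baseline = np.column_stack([
        rng.normal(12, 3, 500),       # typical login volume
        rng.normal(0.02, 0.01, 500),  # low failure rate
        rng.normal(50, 10, 500),      # routine data egress
    ])

    detector = IsolationForest(contamination=0.01, random_state=42)
    detector.fit(baseline)

    # Two new observations: one routine, one resembling credential
    # stuffing followed by large-scale exfiltration
    observations = np.array([
        [13.0, 0.03, 55.0],
        [90.0, 0.60, 900.0],
    ])
    print(detector.predict(observations))  # 1 = normal, -1 = flagged as anomalous

The point is not this specific model; it is that a learned baseline can flag deviations in minutes that signature-based tooling might never match.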

These aren’t one-time projects but ongoing practices, supported by resilience software, that must become part of how resilience practitioners operate day to day.  

What Questions Should Boards Be Asking? 

Boards now have a critical role in shaping AI governance. Their oversight must extend beyond compliance to include alignment, risk management, and trust. The questions they should ask include: 

  • Does our AI strategy align with business objectives and risk appetite? 
  • Are our security and governance frameworks strong enough for AI-scale threats? 
  • Can we confidently explain and defend how our AI systems make decisions? 

In my view, these questions aren’t just about cybersecurity, but about long-term operational resilience and organizational integrity. 

Balancing Innovation and Responsibility 

Integrating AI is no longer optional, but doing so without clear guardrails and accountability introduces real risk. The organizations that thrive will be those that innovate boldly and responsibly, combining progress with principle.

As I see it, the future of operational resilience isn’t just about deploying technology faster; it’s about deploying it smarter. So, I’ll leave you with one question that continues to shape how I think about this space: will AI help us do things better, or help us do better things?