What Security Leaders Learnt at Gartner SRM 2026 in Sydney: 3 Key Risk Insights

It’s been around a week since the Gartner Security & Risk Management Summit 2026 in Sydney, APAC’s premier gathering for security leaders and risk professionals. I’m only now finding a quiet moment to put thoughts to paper.

Like clockwork, Gartner delivered: sharp keynotes, rigorous analyst perspectives, and that uniquely “Gartner” blend of methodical, research-backed clarity on how risk is evolving in an AI-charged security landscape. With the current environment shaped by instability, regulation, expanding threats, and AI acceleration, none of us can afford to learn slowly or alone.

 And yes, the hospitality and food were outstanding. (I won’t pretend that doesn’t matter. It does.) 

But my strongest takeaways didn’t come from the stage. They came from the exhibition floor, the hallway conversations, and the candid exchanges that happen when you’re speaking with everyone from practitioners “in the trenches” to C-level leaders carrying board expectations on their shoulders. Different sectors. Different organisation sizes. Different regulatory pressures and risk appetites. Yet the concerns kept converging. 

What surprised me most was the honesty: more than a few leaders admitted they’re aiming for the bare minimum to pass audit. I get it – budgets are tight, talent is scarce, and the work never stops. But it raises a question we should be brave enough to ask:  

Is “bare minimum” actually minimum viable?

Because “passing audit” is not the same thing as “being resilient.” 

From dozens of conversations, three themes kept surfacing – and they’re worth unpacking. 

1) Supply Chain Risk: “Why Does Onboarding Take 7–8 Months?” 

This wasn’t a niche pain point. It was widespread, and the frustration was palpable. 

A consistent story emerged: vendor onboarding is far too slow, with some leaders calling out 7–8 month onboarding cycles as a real barrier to delivery and innovation.

And when onboarding drags, teams do what teams always do under pressure: they route around the process. Shadow IT creeps in, contracts get signed before risk is understood, and security becomes a late-stage negotiator instead of an early-stage enabler – a familiar challenge when supply chain cybersecurity risk isn’t continuously managed.

The pattern I heard repeatedly:  

  • Onboarding is manual, document-heavy, and repetitive. 
  • Evidence collection is point-in-time, and quickly stale. 
  • Risk decisions aren’t consistently traceable, especially across business units. 
  • The result is either a bottleneck… or a bypass. 

The “factory system” idea that came up 

One phrase kept recurring in different forms: we need a “factory system.” 

Not a factory in the cold, industrial sense – but a repeatable, automated, measurable pipeline for third-party onboarding and ongoing assurance, with: 

  • Automation (intake, triage, evidence requests, reminders, scoring) 
  • Standardisation (patterns by vendor type, data classification, criticality) 
  • Continuous monitoring (because a yearly PDF does not equal assurance) 
  • Risk signal integration (security posture signals, incident intel, changes in scope) 

In plain terms: treat third-party risk like modern engineering treats deployments – pipeline-driven, instrumented, and observable.
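To make that concrete, here’s a minimal sketch of what a pipeline-driven onboarding flow could look like. It’s Python purely for illustration – the stage names, evidence types, and scoring thresholds below are my own assumptions, not a reference to any particular product or standard:

```python
from dataclasses import dataclass, field
from enum import Enum


class Criticality(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class VendorIntake:
    # Hypothetical intake record; field names are illustrative only.
    name: str
    data_classification: str      # e.g. "public", "internal", "confidential"
    criticality: Criticality
    evidence: dict = field(default_factory=dict)
    risk_score: float = 0.0


def triage(vendor: VendorIntake) -> list[str]:
    # Standardisation: pick an assessment pattern by criticality.
    if vendor.criticality is Criticality.HIGH:
        return ["soc2_report", "pen_test_summary", "subprocessor_list"]
    return ["security_questionnaire"]


def collect_evidence(vendor: VendorIntake, required: list[str]) -> None:
    # Automation: a real system would issue requests and chase reminders.
    for item in required:
        vendor.evidence.setdefault(item, "requested")


def score(vendor: VendorIntake) -> float:
    # Toy scoring: outstanding evidence raises risk, weighted by criticality.
    outstanding = sum(1 for status in vendor.evidence.values() if status == "requested")
    return outstanding * 10.0 * vendor.criticality.value


def onboard(vendor: VendorIntake) -> str:
    # The "factory" pipeline: intake -> triage -> evidence -> score -> decision.
    collect_evidence(vendor, triage(vendor))
    vendor.risk_score = score(vendor)
    return "approve" if vendor.risk_score < 25 else "escalate"


if __name__ == "__main__":
    v = VendorIntake("ExampleCorp", "confidential", Criticality.HIGH)
    print(onboard(v), v.risk_score)  # "escalate 90.0" until evidence arrives
```

The specifics don’t matter; the point is that every step becomes instrumented and repeatable, and a continuous-monitoring job can simply re-run the scoring whenever a risk signal changes.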

If your vendor lifecycle is still built like a paperwork queue, it will always be slow. And in 2026, slow isn’t just inconvenient – it’s risky.

2) AI Is a Double-Edged Sword (and Everyone Knows It) 

AI adoption is clearly accelerating, not because it’s trendy, but because leaders are under intense pressure to realise value through AI-powered security operations. That push is real, and it’s not going away.

But the tone of the conversations was pragmatic, not starry-eyed. Leaders kept returning to the same tension:

“We want the upside, but we don’t want to create a new category of unmanaged risk.” 

What I heard most often 

  • Ethical implementation concerns are persistent (and increasingly board-visible). 
  • AI introduces new security threats and new attack surfaces. 
  • The pace of adoption is outstripping the pace of governance in many organisations. 
  • Third-party AI (embedded in SaaS, platforms, providers) is creating “hidden AI” risk. 

And of course, there’s the reality that attackers don’t need permission to innovate.

Governance and guardrails (ISO/IEC 42001 keeps coming up) 

A recurring anchor point in conversations was the need for formal governance and guardrails, with leaders specifically pointing to standards such as ISO/IEC 42001 as a way to structure a risk-reduction approach to AI usage. 

Not because standards magically solve problems – but because they create a shared language across security, risk, compliance, and the business: 

  • What AI systems exist? 
  • What are they allowed to do? 
  • What data do they touch? 
  • How do we monitor drift, misuse, or unintended outcomes? 
  • Who is accountable? 
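
Those questions translate naturally into a machine-readable AI system register. Here’s a purely illustrative sketch – the field names below are my own shorthand, not terminology drawn from ISO/IEC 42001 itself:

```python
from dataclasses import dataclass


@dataclass
class AISystemRecord:
    # One register entry, answering the five questions above.
    name: str                    # What AI systems exist?
    permitted_uses: list[str]    # What are they allowed to do?
    data_touched: list[str]      # What data do they touch?
    monitoring: list[str]        # How do we watch for drift or misuse?
    accountable_owner: str       # Who is accountable?
    third_party: bool = False    # Flags "hidden AI" embedded in SaaS


register = [
    AISystemRecord(
        name="ticket-summariser",
        permitted_uses=["summarise closed incident tickets"],
        data_touched=["incident metadata"],
        monitoring=["weekly output sampling"],
        accountable_owner="SecOps Manager",
    ),
]


def unowned(entries: list[AISystemRecord]) -> list[str]:
    # Governance check: every system needs a named accountable owner.
    return [e.name for e in entries if not e.accountable_owner.strip()]
```

A register like this is trivial to build; the hard part – keeping it complete and current – is exactly where governance standards earn their keep.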

My take: AI risk isn’t “a security problem” or “a compliance problem.”  

It’s an operating model problem.  

And operating model problems don’t improve through optimism. They improve through guardrails, feedback loops, and clear accountability. The fact that Lumen is one of the few companies globally to have achieved ISO/IEC 42001 certification prompted plenty of questions about our learnings and how organisations can replicate them in practice.

3) Tech Debt + Budget Cuts: “Do More With Less” Meets Reality 

If there was one theme that showed up in nearly every sector, it was this: 

Leaders are expected to reduce budget while maintaining (or improving) risk posture. 

That’s not a strategy. It’s a constraint. And constraints force choices. 

What leaders are doing about it 

Many organisations have kicked off initiatives around: 

  • Tech debt reduction 
  • Controls consolidation 
  • Vendor rationalisation / platform consolidation 
All in the name of cost reduction without collapsing risk coverage.

In theory, this is sensible. In practice, consolidation can create concentration risk if it’s done without architectural discipline and resilience planning. 

AI agents are being trialled, especially in SecOps

A particularly candid thread: organisations are experimenting with AI agents in SecOps to reduce headcount pressure, including:

  • Assigning SOC Tier 1 / Tier 2 SecOps tasks to AI agents (triage, enrichment, correlation, first response)
  • Using AI agents for effort-intensive but non-critical work: data crunching, report formatting, and stakeholder coordination

This is where I’ll offer caution, not a criticism: automation is not the same thing as accountability. 

If an AI agent closes an incident incorrectly, suppresses a signal, or escalates too late – who owns the outcome?  

If it touches production systems, what are the guardrails? Is there a “kill switch”? Is there audit-grade logging? Is the agent’s access tightly constrained? 

The organisations that succeed won’t be the ones that replace people with agents. They’ll be the ones that industrialise decisioning safely, with:

  • Bounded authority 
  • Strong change control 
  • Clear escalation paths 
  • Measurable performance 
  • Continuous tuning 
  • Human oversight where it matters most 
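
To show what “bounded authority” and a “kill switch” can mean in practice, here’s a minimal, hypothetical gatekeeper between an agent and production systems – the action names and policy are invented for illustration:

```python
import logging

# Audit-grade logging: every agent decision leaves a timestamped trail.
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("soc-agent")

# Bounded authority: the agent may only perform allowlisted actions.
ALLOWED_ACTIONS = {"enrich_alert", "correlate_events", "draft_ticket"}

KILL_SWITCH = False  # flip to True to halt all agent activity at once


def agent_act(action: str, target: str) -> str:
    # Gatekeeper between the agent and production systems.
    if KILL_SWITCH:
        log.info("BLOCKED (kill switch) action=%s target=%s", action, target)
        return "halted"
    if action not in ALLOWED_ACTIONS:
        # Clear escalation path: anything off the allowlist goes to a human.
        log.info("ESCALATED action=%s target=%s", action, target)
        return "escalated_to_human"
    log.info("EXECUTED action=%s target=%s", action, target)
    return "done"


if __name__ == "__main__":
    agent_act("enrich_alert", "ALERT-1234")    # done
    agent_act("close_incident", "INC-5678")    # escalated_to_human
```

None of this is sophisticated – which is the point. The guardrails that make agents safe are mostly discipline, not invention.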

Final thought: The “bare minimum” era is ending, whether we like it or not 

Across all three themes – supply chain risk, AI risk, and tech debt under budget pressure – one truth emerged:

The cost of superficial assurance is rising – because resilience must be practised, not promised.

You can pass an audit and still be fragile.  

You can deploy AI and still be unsafe.  

You can consolidate tools and still lose visibility. 

If you’re a leader who recognised these concerns in your own organisation, let’s stay in touch and unpack them further. The point isn’t to admire the problem – it’s to build approaches that actually work:

  • A factory model for third-party onboarding and continuous assurance 
  • AI governance that is practical, measurable, and adopted (not just documented) 
  • Tech debt reduction that improves resilience rather than quietly weakening it 

Ultimately, let’s build an approach to address these issues together. You are not alone.  
 

Build resilience beyond audit checklists 

Security and risk leaders across APAC are rethinking how they manage third-party risk, AI governance, and resilience under constant pressure.

At Lumen, we work with organisations to bring visibility, intelligence, and operational discipline to complex risk environments through our security services – helping teams move from point-in-time assurance to continuous confidence.

Find out more at Lumen Security Services or contact us at apac.mail@lumen.com to kickstart a conversation.

Connect with the author on LinkedIn.

This content is provided for informational purposes only and may require additional research and substantiation by the end user. In addition, the information is provided “as is” without any warranty or condition of any kind, either express or implied. Use of this information is at the end user’s own risk. Lumen does not warrant that the information will meet the end user’s requirements or that the implementation or usage of this information will result in the desired outcome of the end user. All third-party company and product or service names referenced in this article are for identification purposes only and do not imply endorsement or affiliation with Lumen. This document represents Lumen products and offerings as of the date of issue. Services not available everywhere. Lumen may change or cancel products and services or substitute similar products and services at its sole discretion without notice. ©2026 Lumen Technologies. All Rights Reserved.

