Insider Threat, Human Risk, Human Error or Humans Being Humans

The Lockdown Approach: A Story of Fear Over Trust

During an interview, someone told me about their company’s approach to phishing simulations: Three failed phishing tests, and your email gets blocked—completely locked down. No access. No exceptions. This is the kind of company that sees humans as threats, risks or errors. They believe the biggest cybersecurity risk is their own employees, and their solution is punishment and restriction. Their approach assumes that people make mistakes because they don’t care or aren’t paying attention, so the fix is to control, limit, and reprimand.

But is that really the right approach?

Humans Aren’t Threats, Risks or Errors—They’re Humans Being Humans

Mistakes happen. People fall for phishing emails. They reuse passwords. They get distracted. But treating humans like threats, risks or errors rather than partners in security is a failure of leadership, culture, and change management.

That doesn’t mean we should excuse every mistake or ignore risky behaviour. Compassion doesn’t mean complacency. But the way an organisation responds to humans being humans determines whether employees become security allies or adversaries.

Security culture is critical here. Organisations that build a culture of fear—where mistakes lead to punishment—create employees who hide mistakes, disengage from security, and find ways to work around controls.

Organisations that treat security as a shared responsibility—where employees are supported, educated, and engaged—create an environment where security improves because people want to be part of the solution.

The Security Landscape is Changing—Are You Keeping Up?

Let’s face it: the digital and threat landscapes have evolved at an unprecedented pace. Attackers are using AI to craft sophisticated phishing emails, deepfake voices are being used for fraud, and new privacy-invasive apps emerge constantly. If organisations aren’t adapting, they’re falling behind.

To keep up, organisations need to ask:

1. Do You Have a Digital Literacy or AI Hygiene Programme?

Security training isn’t just about phishing emails anymore. Employees need education on digital literacy, AI-generated threats, and the evolving ways attackers manipulate human behaviour.

2. Are You Keeping Up With Emerging Cyber Risks and Trends?

Security teams should be tracking trends—not just in threats but in the tools people use daily. Are employees using AI assistants with poor privacy protections? Are new scam tactics being used in your industry? Keeping employees informed helps them make better decisions.

Documenting Human Actions: The Missing Security Asset

We classify and manage security risks related to infrastructure, applications, and vendors—but what about human actions? If a misconfiguration in the cloud gets documented as a risk, why wouldn’t a repeated risky behaviour be tracked in the same way?

Organisations should map human actions, whether intentional, accidental, or negligent, just like they map technical risks. If we align with frameworks like NIST CSF, we should consider the questions below (a rough sketch of such a mapping follows the list):

• Is it role-specific? Does this risk apply to specific job functions (e.g., finance employees being targeted by BEC scams)?

• Is it preventative or detective? Should we prevent the action (e.g., blocking risky sites) or detect and respond to it (e.g., monitoring for data exfiltration)?

• What controls are in place? Training, policies, tools, automation: what exists to reduce this risk?

• How does risk likelihood vary across the company? A mistake in one department might be low-risk, but in another it could be catastrophic.

• Are we covering this risk adequately, or is our approach just security theatre?
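To make the idea concrete, here is a minimal sketch of what a single human-action entry in a risk register might look like, written in Python purely for illustration. The field names, the ControlType enum, and the BEC example values are my own assumptions about how such an entry could be structured; they are not drawn from NIST CSF or any particular risk-register tool.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative sketch only: the structure and example values are assumptions,
# not part of NIST CSF or any specific GRC product.

class ControlType(Enum):
    PREVENTATIVE = "preventative"  # stop the action (e.g., block risky sites)
    DETECTIVE = "detective"        # spot and respond (e.g., monitor for exfiltration)

@dataclass
class HumanRiskEntry:
    """One entry in a human-action risk register, documented the same way
    an infrastructure, application, or vendor risk would be."""
    behaviour: str                             # the human action being tracked
    intent: str                                # intentional, accidental, or negligent
    roles_affected: list[str]                  # job functions this risk applies to
    control_type: ControlType                  # prevent, or detect and respond
    existing_controls: list[str]               # training, policies, tools, automation
    likelihood_by_department: dict[str, str]   # same mistake, different risk per team
    adequately_covered: bool                   # honest answer, not security theatre

# Hypothetical example: finance staff targeted by BEC-style invoice fraud.
bec_risk = HumanRiskEntry(
    behaviour="Approving payment requests received by email",
    intent="accidental",
    roles_affected=["finance", "accounts payable"],
    control_type=ControlType.DETECTIVE,
    existing_controls=["payment verification callback", "awareness training"],
    likelihood_by_department={"finance": "high", "engineering": "low"},
    adequately_covered=False,
)

print(bec_risk)
```

Whether an entry like this lives in a spreadsheet, a GRC platform, or code matters far less than asking the same questions of human actions that we already ask of cloud misconfigurations.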

Balancing Security and Culture

Managing humans being humans isn’t just about preventing incidents—it’s about how organisations engage with their employees in the process. The difference between a security-aware workforce and a disengaged one often comes down to culture.

Do you want a company where employees see security as an enforcer that punishes mistakes? Or do you want one where they feel empowered to take action and improve security every day?

The choice is yours. So, what are you going to call your employees: risks, threats, errors, or something nicer, like partners?