Oops… We Did It Again (And Again): Why DO We Keep Forgetting the Human Side of Cybersecurity

In 2000, Bruce Schneier started saying what many were thinking: People are the weakest link in information security.

Since then, people have been called:

  • Weak links

  • Risks

  • Errors

  • Threats

  • Human firewalls

We heard it all. And then… we realized something.

We’d forgotten to teach people how to use technology and the internet safely.
Actually, we never really taught people how to use technology at all.

We handed over devices, platforms, and systems — and expected everyone to just figure it out.
No guidelines. No foundations. No mental models.

That’s why, when things went wrong, it was easy to blame the humans.

So we scrambled.
We made click-through e-learning modules (yawn).
We forgot about memory, motivation, and how humans actually learn.
We skipped behavioral science and what it tells us about how people behave.
We ignored cognitive load theory.
We bypassed habit formation.

Then came the cyberpsychologists and behavioral scientists, bringing:

  • Tabletop exercises

  • Problem-solving games

  • Escape rooms

  • Scavenger hunts

And it worked. Sort of. We’re still figuring that part out.
But progress was being made.

Enter AI. Exit common sense.

Now, AI is off to the races.
Productivity! Innovation! Speed!

And what are companies and the media saying?

“If you don’t use AI, you’ll be replaced by someone who does.”

Suddenly, everyone’s scrambling again — not to teach people, but to push adoption at all costs.

But hold on… haven’t we forgotten something?

Oh right — the security part.
And the human part.
Again.

This time, it might be worse.

We can’t do a data access request on ChatGPT.
We don’t know what it remembers.
And a surprising number of people don’t even know what an LLM (large language model) is, or what it means to paste customer data into a public chatbot.

We forgot to teach the fundamentals. Again.

No context.
No mental models.
No boundaries.

I’m not a privacy expert. But maybe I don’t need to be one to see the long-term risk.

We’re repeating the same cycle.
Just with shinier tools, faster deadlines, and bigger stakes.

So now what?

We need to stop treating awareness like a checkbox — and start building understanding.

Security and privacy need to be:

  • Embedded into learning and culture

  • Rooted in real-world context

  • Powered by psychology and behavioral science

  • Measured meaningfully — not just tracked by clicks

And as we move into an AI-driven world, we need:

  • AI awareness — so people understand the capabilities and limitations of these tools.

  • AI digital literacy — so they can use them safely, ethically, and effectively.

  • AI governance — so organizations can balance innovation with accountability and transparency.

Let’s teach people how to think, not just comply.
Let’s give them the tools to adapt, question, and protect — not just follow instructions.

Because if we don’t?

Well…
Oops, we’ll do it again.