Are Humans Really the Weakest Link?
My attempt at dispelling this notorious rhetoric, and an introduction to human-centered security.
For as long as cybersecurity has existed, we’ve repeated one line like gospel:
“Humans are the weakest link.”
We’ve even built cybersecurity marketing and product playbooks around this rhetoric.
It’s said with a mix of conviction, frustration, and resignation, as if the problem begins and ends with the user who clicked the phishing email or reused a password.
But lately, I’ve been rethinking that phrase, because maybe the problem isn’t the people.
Maybe it’s the way we design for them.
A Flawed Premise
I’ve been reading (and have since paused) Don Norman’s The Design of Everyday Things, one of the most influential books on human-centered design.
Norman shares a story about investigating the Three Mile Island nuclear accident. The operators were blamed for “human error,” but as his committee later found, the real failure was design: the control panels were laid out so poorly that the wrong actions were all but inevitable.
That story struck me hard because the same logic applies to cybersecurity. It’s why I’m writing this.
When we blame users for being “the weakest link,” what we’re really saying is:
We designed a system that expected humans to behave perfectly…and they didn’t.
What Human-Centered Design Teaches Us
Norman’s philosophy is simple yet radical:
“It is the duty of machines and those who design them to understand people. It is not our duty to understand the arbitrary, meaningless dictates of machines.”
Human-centered design starts from understanding humans.
It’s the belief that systems should be built around people, their capabilities, limitations, and behavior, not the other way around.
Now imagine applying that same principle to cybersecurity.
What if security were an enabler of trust, confidence, and resilience?
The Essence of Security
I’ve been obsessed with word etymology recently and decided to look up the origin of the word “security”.
Per etymonline (as recommended by Gemini), the word “security” originates from the Latin sēcūritās, meaning freedom from care, apprehension, or danger, derived from sēcūrus (“safe” or “without care”).
It combines se- (without) and cura (care/concern), emerging into Middle English as securite in the early 15th century to describe a state of safety.
So security, at its core, is supposed to mean freedom from care. Freedom from anxiety. A state where you’re not constantly thinking about what could go wrong.
But that’s not what most people experience.
What they experience is friction. Getting locked out of their own accounts. Clicking through prompts they don’t fully understand. Being told to “be more careful” by systems that never really help them be.
Somewhere along the way, security stopped feeling like safety and started feeling like responsibility. And we handed that responsibility to the user.
That’s the part I keep coming back to, but that’s a letter for another day.
What would it look like if we actually took that definition of security seriously?
Human-Centered Security
If security is supposed to feel like freedom from care, then what we build shouldn’t feel like something users have to constantly fight through. It should feel like something that quietly supports them.
That’s where I think human-centered security comes in.
Human-centered security, at least to me, isn’t about adding more shiny controls or features. It’s about rethinking security as an experience.
In reality, a well-designed system doesn’t rely on people memorizing policies or sitting through another awareness training session. It makes the right action obvious through its design. The experience aligns with what security is actually supposed to feel like: a sense of safety, not constant friction.
That distinction matters because if we over-index on the feeling of security without grounding it in reality, we end up with something worse than insecurity: a false sense of security.
And we know how dangerous that can be.
The better approach is simpler, but harder to execute. It’s the same principle you see everywhere else in good design: you don’t need a manual to use a well-designed door. You don’t need a checklist to move through a clean interface. The design communicates intent. It guides behavior and meets you where you are.
Security should be doing the same thing.
Which means the questions start to change.
Not “How do we make users comply?” but: How do users naturally understand what’s secure? How do we communicate risk in a way that actually lands, without overwhelming them? How do we make the secure choice the path of least resistance instead of the most difficult one? And how do we make security feel like an enabler instead of something that’s constantly in the way?
These questions are important because the reality is, people are going to make mistakes. That’s not a flaw in the system. That is the system.
Good design anticipates that. It doesn’t pretend it won’t happen. It builds around it.
In a security context, that means assuming an error will occur and designing for it anyway. It means putting guardrails in place that prevent small mistakes from turning into major incidents. It means creating feedback loops that actually teach and guide rather than punish people after the fact.
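To make “designing for error” concrete, here’s a minimal sketch of a guardrail plus feedback loop, borrowing Norman’s favorite recovery mechanism: undo. It imagines a hypothetical command-line tool where a risky delete is soft-deleted into a trash directory instead of executed outright, and the tool tells the user exactly how to recover. Every name here (remove, restore, TRASH_DIR) is made up for illustration, not taken from any real product.

```python
import shutil
import time
from pathlib import Path

# Hypothetical trash location for this sketch; a real tool would make
# this configurable and per-user.
TRASH_DIR = Path("/tmp/demo-trash")


def remove(path: str) -> None:
    """Guardrail: move the target into a trash directory instead of
    deleting it, so a slip is recoverable rather than catastrophic."""
    TRASH_DIR.mkdir(parents=True, exist_ok=True)
    target = Path(path)
    trashed = TRASH_DIR / f"{target.name}.{int(time.time())}"
    shutil.move(str(target), str(trashed))
    # Feedback loop: say what happened and how to undo it, instead of
    # deleting silently and punishing the mistake after the fact.
    print(f"Moved {target} to {trashed}. Use restore('{target.name}') to undo.")


def restore(name: str) -> None:
    """Bring back the most recently trashed copy of a file
    (restored into the current working directory)."""
    candidates = sorted(TRASH_DIR.glob(f"{name}.*"))
    if not candidates:
        print(f"Nothing in the trash matches {name!r}.")
        return
    latest = candidates[-1]
    original = latest.name.rsplit(".", 1)[0]
    shutil.move(str(latest), original)
    print(f"Restored {original}.")
```

The specifics don’t matter; the pattern does. Like a desktop trash can, the system assumes the slip will happen, keeps it reversible, and turns the moment of error into feedback instead of punishment.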
Once you start thinking about it this way, the “weakest link” framing starts to fall apart.
We start to undo the quiet assumption that’s shaped security for years: that the relationship between humans and security has to be adversarial, that the user is a problem to be controlled.
It doesn’t have to be that way.
When you design with people in mind, the relationship shifts and becomes collaborative. The system supports the human, and in turn, the human strengthens the system.
The Real Weak Link
Every incident I’ve ever investigated has reinforced a humbling truth:
The system always works exactly as designed; it was just used for a different purpose or via a different mechanism than intended.
This is the part we tend to move past too quickly because, when you really sit with it, much of what we label “human failure” doesn’t actually start with the human. It starts with the environment we placed them in.
If a developer can’t realistically follow an IAM policy because it’s too complex to reason about in the flow of their work, that’s not a training gap. If a phishing simulation leaves employees feeling embarrassed instead of better equipped the next time around, that’s not awareness. If a password rotation policy leads to credentials being written down and hidden under keyboards, that’s not defiance.
Those are signals.
Signals that the system, as designed, is asking people to operate in ways that don’t align with how they actually think, work, and make decisions.
And this is where I think security always loses the plot.
We’ve spent years optimizing for technical correctness, tighter controls, more coverage, and logical completeness. But in doing so, we’ve overlooked something fundamental: humans don’t operate on logic alone.
They operate on trust, intuition, emotion, and habit.
And when those realities collide with systems that weren’t designed with them in mind, something has to give.
Most of the time, it’s the human who gets blamed.
But if we’re being honest, the system did exactly what it was built to do.
And until security starts accounting for that, we’ll keep building mechanisms that look strong on paper but end up working against the very people they’re supposed to protect.
AI, for humanity?
I believe this is where AI starts to become interesting and useful in a very different way. Not just as a tool for detection or automation, but as a bridge between systems and people.
For the first time, we have systems that can adapt more closely to how humans think, rather than forcing humans to adapt to rigid machine logic.
And if that’s the direction things are moving, then the role of the security professional has to evolve with it.
It’s no longer enough to just understand systems. You have to understand people, their thought processes, how they make decisions, and how trust is formed and broken. The Social Engineering Community already has a head start on this.
As AI continues to accelerate the technical side of security, more of the differentiation will come from understanding the human side. Psychology, behavior, and communication, all areas that security has historically treated as secondary, will become core to how effective systems are designed.
Rethinking Security Through Human-Centered Design
Human-centered design doesn’t excuse mistakes or assign blame to humans; it approaches design with them in mind.
It relies on iteration, observation, and empathy to design systems that adapt to human behavior, rather than requiring humans to adapt to systems.
So maybe the next evolution of cybersecurity isn’t just about AI-driven detections or zero-trust architectures.
Perhaps it’s also about human-centered security: systems designed to work with people, not against them.
The real weakest link isn’t the human. It’s our failure to design for humans.