Cybersecurity Engineering Career Lessons You Won’t Get in Certification Courses
Lessons in Security Engineering from Jason Chan’s High-Velocity Framework (and My Own Battles in the Field).
A few months ago, I read Jason Chan’s feature in the TL;DR sec newsletter titled "Security for High-Velocity Engineering." If you’ve ever built security tooling or led engineering efforts under pressure, the piece might feel less like a framework and more like a mirror.
It’s a strategic blueprint from someone who helped Netflix scale security without slowing the company’s engineering engine—a balance that’s notoriously hard to get right.
It got me thinking about my own path as a Security Engineer and the various domains I’ve worked in.
What stuck with me wasn’t just Jason’s structured layers of context, strategy, and execution, but how those principles have shown up in my work again and again.
Sometimes explicitly. Sometimes without realizing it until later.
So in this issue, I want to do two things:
Walk you through a few pivotal moments in my security engineering career
Extract real, applicable lessons from each so that you can bring them into your own context
This is not just “what I did.”
It’s “why it mattered, and what it could mean for you.”
Join a vibrant cybersecurity community of over 7,000 people who engage in conversations and support one another, covering topics from cybersecurity and college to certifications and resume assistance, plus non-professional interests like fitness, finance, and anime.
Watch the full video breakdown below:
When Speed Is the Priority
Jason discusses how security teams must make room for velocity, not hinder it.
That means knowing when to optimize for “good enough right now” versus “perfect later.”
That lesson hit me hard during an active cloud campaign while I was a cloud threat detection engineer.
We were watching an attacker group compromise cloud environments at scale, deploying crypto miners using consistent IAM roles and resource naming patterns.
We had clear indicators, but no time for elegant detection modeling or layered behavioral TTP-based detections.
So I made the call to build and deploy an IOC-heavy detection.
The logic was sharp but narrow: highly atomic, matching specific indicators rather than behaviors.
Honestly, it wasn’t what I’d want to ship under normal conditions. But it bought us time.
Visibility first. Accuracy later.
What I learned (and you can apply):
There is a time and place for precision, but don’t let the perfect detection delay critical visibility.
Your first iteration (for an atomic detection) doesn’t have to scale. It has to inform.
Build something quickly, but tag it for review. Revisit it once the fire’s out.
Sometimes, shipping quickly is the safest thing to do.
And this applies far beyond cloud detection.
Whether it’s building IAM policies, hardening containers, or writing security automation, sometimes velocity is the first layer of resilience.
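To make the trade-off concrete, here is a minimal sketch of what an atomic, IOC-heavy detection might look like. The role names, naming prefixes, and event fields are hypothetical, invented for illustration; the point is the shape: narrow indicator matching you can ship in minutes, explicitly tagged for post-incident review.

```python
# Hypothetical atomic IOC detection: flags cloud audit events that match
# indicators from an active crypto-mining campaign. Narrow by design;
# ships fast, gets revisited once the fire is out.

# Indicators observed during the (hypothetical) campaign
KNOWN_BAD_ROLE_NAMES = {"miner-exec-role", "temp-admin-x"}
KNOWN_BAD_NAME_PREFIXES = ("xmr-", "cryptopool-")

def is_campaign_match(event: dict) -> bool:
    """Return True if a cloud audit event matches campaign IOCs."""
    role = event.get("assumed_role", "")
    resource = event.get("resource_name", "")
    if role in KNOWN_BAD_ROLE_NAMES:
        return True
    return resource.startswith(KNOWN_BAD_NAME_PREFIXES)

# TODO(post-incident): replace with behavioral / TTP-based logic
events = [
    {"assumed_role": "miner-exec-role", "resource_name": "vm-1"},
    {"assumed_role": "app-role", "resource_name": "xmr-worker-7"},
    {"assumed_role": "app-role", "resource_name": "web-frontend"},
]
alerts = [e for e in events if is_campaign_match(e)]
print(len(alerts))  # 2 of the 3 sample events match
```

The `TODO` tag is doing real work here: it is the "revisit once the fire's out" step encoded where the next engineer will actually see it.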
Build Once, Scale Forever
Jason describes “strategy” as choosing high-leverage work that scales your impact.
When I was tasked with building detections for Google Cloud (GCP) as a cloud threat detection engineer, I could have started cranking out detections service by service.
But I knew that wouldn’t scale, and frankly, I didn’t fully understand GCP yet. Instead, I stepped back and developed an understanding of the cloud provider and a cloud threat modeling framework.
I grouped GCP services into high-level domains (compute, storage, IAM, databases) and then used those abstractions to understand attack surfaces.
From there, I could drill down into the specifics:
What does identity misuse look like across IAM and storage?
How does lateral movement manifest in GCP’s managed services?
How is abuse of GCP services similar to, or different from, abuse of other cloud providers' services?
What I learned (and you can apply):
Before building tooling or detections, have a mental model of the domain. It doesn’t have to be perfect—it just has to make complexity navigable.
Focus on identifying common patterns across services rather than isolated exceptions. This is where reliable detection logic starts.
If you’re onboarding into a new cloud provider, threat modeling isn’t a “nice to have”—it’s your best shot at building a reusable, scalable detection strategy.
Documentation is a weapon. When you build mental models, write them down. It’ll save the next person weeks of confusion—and save you from repeating work six months later.
If you’re early in your career, this is gold: don’t just solve the problem, try to solve the class of problems.
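One way to make that mental model tangible is to write it down as a structure you can query. The sketch below is illustrative, not an official taxonomy: the domain groupings and attack-surface labels are my assumptions, but the idea is that shared attack surfaces across domains are exactly where reusable detection logic starts.

```python
# Hypothetical threat-model skeleton: GCP services grouped into high-level
# domains, each annotated with the attack surfaces to reason about.
# Groupings and labels are illustrative assumptions, not an official taxonomy.

GCP_THREAT_MODEL = {
    "compute": {
        "services": ["Compute Engine", "Cloud Run", "GKE"],
        "attack_surfaces": ["cryptomining", "lateral movement", "persistence"],
    },
    "storage": {
        "services": ["Cloud Storage", "Filestore"],
        "attack_surfaces": ["data exfiltration", "public exposure"],
    },
    "iam": {
        "services": ["IAM", "Service Accounts"],
        "attack_surfaces": ["privilege escalation", "identity misuse"],
    },
    "databases": {
        "services": ["Cloud SQL", "BigQuery", "Firestore"],
        "attack_surfaces": ["data exfiltration", "credential abuse"],
    },
}

def domains_exposed_to(technique: str) -> list[str]:
    """Which domains share a given attack surface? Shared surfaces are
    where cross-service, reusable detection logic begins."""
    return [domain for domain, model in GCP_THREAT_MODEL.items()
            if technique in model["attack_surfaces"]]

print(domains_exposed_to("data exfiltration"))  # ['storage', 'databases']
```

A structure like this doubles as documentation: written down, it becomes the onboarding artifact the next engineer reads instead of rebuilding the model from scratch.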
Why Saying “No” is Strategic
One of the sharpest lines from Jason’s piece comes from Netflix’s former CEO, Reed Hastings:
“Strategy is about what you don’t do.”
In my current role, I work on differentiated threat intelligence for specific business units.
One thing I’ve learned the hard way is that not all intel is valuable, even if it’s technically correct or urgent elsewhere.
The internet is full of noise: threat feeds, CVEs, indicators, and intel reports. But my responsibility is to ask, “Is this relevant to us?” If not, I let it go—even if it seems scary or hyped.
What I learned (and you can apply):
Intelligence without context is just a distraction.
Learn to filter based on business impact and sector alignment, not just raw severity.
It’s okay to ignore the noise—especially if you’re building a detection or IR strategy. Not every problem is your problem.
Write down your priorities or “filter criteria.” That makes your decisions repeatable and teachable.
This applies equally across the SecOps team: don’t get caught writing detections or chasing rabbit holes just because something trended on Twitter.
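Writing down your filter criteria can be as literal as a small function. The sector and tech-stack values below are hypothetical placeholders; the point is that once the criteria are explicit, the "is this relevant to us?" decision becomes repeatable and teachable rather than a gut call.

```python
# Hypothetical intel filter criteria encoded as code. Field names, sectors,
# and tech-stack entries are illustrative assumptions, not a real feed schema.

OUR_SECTORS = {"finance", "fintech"}
OUR_TECH_STACK = {"gcp", "kubernetes", "okta"}

def is_actionable(intel: dict) -> bool:
    """Keep intel only when it aligns with our business and our stack,
    regardless of how scary or hyped it is elsewhere."""
    sector_match = bool(OUR_SECTORS & set(intel.get("sectors", [])))
    tech_match = bool(OUR_TECH_STACK & set(intel.get("technologies", [])))
    return sector_match and tech_match

feed = [
    {"id": "INT-1", "sectors": ["finance"], "technologies": ["gcp"]},
    {"id": "INT-2", "sectors": ["healthcare"], "technologies": ["aws"]},
]
actionable = [item["id"] for item in feed if is_actionable(item)]
print(actionable)  # ['INT-1']
```

Requiring both a sector match and a tech match is a deliberately strict choice; your own criteria might weight them differently, but the value is in having them written down at all.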
Institutional Memory is Security Engineering’s Secret Weapon
Chan stresses that true scalability requires building systems people can use long after you’re gone.
That lesson came alive for me recently.
There were two major internal systems my team was responsible for—critical to detection, response, and threat intelligence—but they were barely understood.
No central knowledge. No reusable playbooks.
So I took it on: I studied the systems, documented their behaviors and the failure points in our interactions with them, hosted a team lunch-and-learn, and built an internal wiki that others could use.
What I learned (and you can apply):
Documentation isn’t a chore—it’s a force multiplier.
If your knowledge lives only in your head, your impact dies when you take PTO.
Teach others how to operate what you’ve built. That’s when you know you’ve succeeded.
Institutional knowledge is a competitive advantage—don’t let it go to waste.
If you’re early in your career, this is one of the fastest ways to stand out:
Be the person who makes the unclear clear.
Final Thoughts
Jason Chan’s framework gave me words for things I was already doing—but it also challenged me to be more intentional about them.
His call to build guardrails, not gates; to prioritize reuse over heroics; to measure and iterate: all of it resonates more deeply the longer I've been in this space.
So here’s my version of his model in practice:
Context: Understand what your org actually needs, not what LinkedIn or Twitter’s yelling about.
Strategy: Invest in models, processes, and documentation that outlive you.
Execution: Move fast when needed—but reflect hard and revise often.
Measurement: Let the data from real incidents teach you how to improve.
If you’re a security engineer, a detection engineer, a threat hunter, or even just starting, this is your reminder that speed and security don’t have to be enemies.
They can be collaborators if you give them structure.
Keep building paved roads.
And when you do, leave signs behind for the next engineer.