December 9, 2025
Greg Reber

Let me tell you about some moments in our collective history when extremely smart people realized they had been confidently wrong about fundamental aspects of reality.
The first one involves a strong belief by the best minds in the world (at the time) in a substance that doesn't exist. For roughly two centuries, scientists believed in something called the "luminiferous aether," which sounds like either a progressive rock band or an artisanal cocktail ingredient, but was actually supposed to be an invisible medium that filled all of space and allowed light to propagate through it. Think of it as the universe's styrofoam packing material. This was mainstream physics, accepted by virtually everyone who mattered in the field. The logic was that waves need something to wave through, and since light was clearly a wave (Newton notwithstanding), there must be this ethereal stuff permeating everything.
Scientists spent decades trying to detect the aether, building increasingly elaborate experiments; the most famous, Michelson and Morley's 1887 interferometer, found no trace of it. Einstein finally killed the aether theory entirely with special relativity in 1905, showing that light didn't need any medium at all. Space was just space, empty and indifferent.
Now, here’s another example that may resonate more with my cybersecurity brethren. Some of us are old enough to remember that, for decades, mandatory password rotation was gospel in cybersecurity. Organizations required users to change passwords every 60 or 90 days, believing this limited exposure if credentials were compromised. The logic seemed airtight, and the practice was embedded in compliance frameworks worldwide.
Then the data arrived. Research revealed that forced password changes actually weakened security. Users responded to frequent rotation with minor, predictable modifications, changing "Password1" to "Password2" or tacking on an exclamation point, so the new passwords were easier to guess, not harder. Meanwhile, the constant reset requests consumed Help Desk resources.
In 2017, NIST completely reversed course in Special Publication 800-63B, explicitly recommending against periodic password changes unless there's evidence of compromise. What had been considered essential security became recognized as counterproductive theater. The policy designed to protect systems was actively lowering their security, a perfect example of well-intentioned risk management producing the opposite of its intended effect.
What's striking about both examples is that they were wrong while being completely rational given the available evidence. These weren't failures of the scientific method. They were demonstrations of how the method works: you hold beliefs provisionally, no matter how certain they feel, because reality has a habit of surprising you.
Some theories seem common sensical (yes, I make up words sometimes but you get it), but actual evidence can change the playing field. When someone brings real insight in the form of new data, the whole picture takes on a different meaning. It's rather like being absolutely certain you're watching a horse race from a distance, analyzing the strategies and stamina of each animal with great precision, documenting their performance over years, building predictive models, only to have someone finally hand you a pair of binoculars and discover you've been watching a huge carousel the entire time. The horses were never racing. They were bolted to poles. And they've been going in circles. Your observations were meticulous. Your reasoning was sound. You were just looking at the wrong layer of reality, and there was genuinely no way to know this until the resolution improved.
Which brings me to my point. ‘Traditional’ methods of quantifying the risk associated with Common Vulnerabilities and Exposures are now outdated. The market leaders in vulnerability discovery measure their self-worth (and their price tags) by finding every theoretically possible CVE. The market leader in this space (starts with Q and rhymes with Wallace) tells us they have scanning signatures for over 150,000 externally accessible CVEs (AV:N). So it’s no wonder that most vulnerability management teams are flooded with huge lists of things to fix and struggle mightily to prioritize them, leaning heavily on severity and predictive scoring systems like CVSS and EPSS, since it’s impossible to address them all with limited resources.
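To make the scale of that problem concrete, here is a minimal sketch, in Python, of the conventional approach: rank whatever the scanner reports by CVSS severity, breaking ties with EPSS. This is not any vendor's actual logic, and the CVE identifiers and scores below are illustrative placeholders.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str   # CVE identifier reported by the scanner
    cvss: float   # CVSS base severity, 0.0-10.0
    epss: float   # EPSS predicted probability of exploitation, 0.0-1.0

def prioritize(findings: list[Finding]) -> list[Finding]:
    """Highest CVSS first, EPSS breaks ties: the 'flood sorting' most teams live with."""
    return sorted(findings, key=lambda f: (f.cvss, f.epss), reverse=True)

# Illustrative placeholders, not real scan results.
backlog = [
    Finding("CVE-2025-0001", cvss=9.8, epss=0.02),
    Finding("CVE-2025-0002", cvss=7.5, epss=0.89),
    Finding("CVE-2025-0003", cvss=9.8, epss=0.01),
]
for f in prioritize(backlog):
    print(f.cve_id, f.cvss, f.epss)
```

The trouble is that this ordering only reshuffles the flood; when the input is tens of thousands of findings, "critical" stops meaning much.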
The solution to this problem is evidence. I’m talking about data that shines a very, very bright light on the vulnerabilities that have been used in successful breaches. Not theory, but reality. Security teams can make far more reasoned decisions about what to worry about by starting from the surprisingly small subset of CVEs that The Bad Guys have actually used to compromise networks. It’s about 1% of the total theoretical possibilities. I’d say something like “Imagine that”, but you don’t have to.
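For readers who like to see the idea in code, here is a minimal sketch of that evidence-first filter, again in Python. It assumes you have downloaded a catalog of known-exploited CVEs as JSON, for example the CISA Known Exploited Vulnerabilities (KEV) catalog, whose feed carries a top-level "vulnerabilities" array of entries with a "cveID" field; the file name and the scanner backlog below are illustrative, not a reference to any particular product.

```python
import json

def load_exploited_cves(kev_json_path: str) -> set[str]:
    """Return the set of CVE IDs from a KEV-style catalog file."""
    with open(kev_json_path) as fh:
        catalog = json.load(fh)
    return {entry["cveID"] for entry in catalog.get("vulnerabilities", [])}

def evidence_filter(scanner_cves: list[str], exploited: set[str]) -> list[str]:
    """Keep only the findings with real-world exploitation evidence behind them."""
    return [cve for cve in scanner_cves if cve in exploited]

# Illustrative inputs: a downloaded KEV-style catalog and a (tiny) scanner backlog.
exploited = load_exploited_cves("known_exploited_vulnerabilities.json")
backlog = ["CVE-2025-0001", "CVE-2025-0002", "CVE-2025-0003"]
worry_list = evidence_filter(backlog, exploited)
print(f"{len(worry_list)} of {len(backlog)} findings have breach evidence behind them")
```

The point of the sketch is the shape of the decision, not the plumbing: instead of sorting 150,000 theoretical problems, you start from the roughly 1% with evidence and work outward.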
It’s reality, based on evidence.