The True Cause of Cybersecurity Failure and How to Fix It: Part One

By David Kruger (featured on Expensivity.com)

The classic line “I have a bad feeling about this” is repeated in every Star Wars movie. It’s become a meme for that uneasy feeling that as bad as things are now, they are about to get much worse. That’s an accurate portrayal of how many of us feel about cybersecurity. Our bad feeling has a sound empirical basis. Yearly cybersecurity losses and loss rates continually increase and never decrease despite annual US cybersecurity expenditures in the tens of billions of dollars and tens of millions of skilled cybersecurity man-hours. Cybersecurity’s record of continuously increasing failure should prompt thoughtful observers to ask questions like “Why are cybersecurity losses going up? Why isn’t cybersecurity technology reducing them? Are there things we don’t understand or are overlooking?” 

That’s easy to answer: of course there are! After spending this much time, money, and brainpower on cybersecurity without managing to decrease losses, much less eliminate them, it’s painfully obvious something isn’t right.

This article explains what we get wrong about cybersecurity, how and why we get it wrong, and what it’s going to take to fix it. Fair warning: it’s going to be a long and bumpy ride. Those bumps include a healthy dose of counterintuitive assertions, cybersecurity heresy, and no mincing of words.

The Heart of the Matter

When confronted with a chronic problem, we human beings are prone to err by trying solutions without first asking the right questions. We tend to ask, “How do we stop this now?” and fail to ask, “What’s causing this?” Then, we are shocked when our fixes don’t last. This tendency is so common that safety engineers developed a formal analytical method called a root cause analysis to prevent this error. Root cause analysis is designed to find unidentified causes of recurring failure.  A root cause analysis starts with an effect, in this context, a failure, and works upstream all the way through the chain of causation until the root cause is found. In complex systems like computers, finding the root cause of failure is critically important because an unidentified root cause makes multiple downstream elements of the system much more prone to fail. You can tell when you’ve found the root cause, because if you fix it, the downstream recurring failures cease. 

Identifying the root cause in complex systems can be hard because: 

  • A single root cause can spawn multiple instances and types of failure because it can spawn multiple chains of cause and effect. The chains can be long, having many intermediate cause-and-effect links between the root cause and the failure. The more links in the chain, the longer the “distance” between the root cause and the failure. Long chains branch and intersect with other chains, which makes it even more difficult to identify the root cause.
  • Usually, the longer the distance between an unidentified root cause and the failures it’s causing, the harder the root cause is to identify; the shorter the distance between an intermediate cause and the failures, the easier the intermediate cause is to identify. Intermediate causes are obvious, unidentified root causes are not—and that’s why root causes are so often overlooked.

Because of these difficulties, problem solvers can easily fall prey to the symptomatic solution fallacy, a mistaken belief that solving intermediate problems can permanently stop long-distance failures. It’s called the “symptomatic” solution fallacy because it’s the engineering equivalent of a doctor believing that a treatment is curative when it only temporarily alleviates symptoms of an undiagnosed chronic disease. For example, a dose of pain medication can temporarily alleviate suffering, but it can’t cure the cancer that’s causing the pain.

To see how root cause analysis aids in finding and fixing unidentified root causes, we’ll review a common real-world root cause analysis, then apply the lessons learned to cybersecurity technology, and then, in Part Two, to cybersecurity policy.

Root Cause Analysis 101

The purpose of automaker safety recalls is to prevent recurrent failures attributable to a previously unidentified root cause. Recently, 700,000 Nissan Rogue SUVs were recalled because: 

“In affected vehicles, if water and salt collect in the driver’s side foot well, it may wick up the dash side harness tape and enter the connector. If this occurs, the dash side harness connector may corrode and possibly cause issues such as driver’s power window or power seat inoperative, AWD warning light ON, battery discharge, and/or thermal damage to the connector. In rare cases, a fire could potentially occur, increasing the risk of injury.”

Lesson Learned 1. A root cause analysis, and ultimately the recall, was initiated by the automaker because it observed a pattern of multiple types of recurring failure that appeared to be related, in this case multiple types of electrical failures.

Lesson Learned 2. From the perspective of the driver, if the power windows or seats in your 2014-2016 Nissan Rogue stop working, your car won’t start because the battery is dead, or the wiring in the dashboard catches fire, it’s apparent that the problem is electrical. The root cause analysis revealed that the cause closest to these electrical failures was obvious: a corroded wiring harness connector.

Now, imagine the automaker had identified the wiring connector as the root cause and declared that replacing it was a permanent fix. It would soon be evident that the automaker had fallen prey to the symptomatic solution fallacy because replacing the connector would not be a permanent solution. The still unidentified and unfixed root cause would cause the replacement connector to corrode again, which, in turn, would cause one or more of the related failures to recur.  

After a fix has been applied, if related failures continue recurring, it’s evident that an intermediate cause was erroneously identified as the root cause.

Lesson Learned 3. Working the chain of causation backwards, the automaker deduced that the cause of corrosion was exposure to moisture and a corrosive. What was the source? The wiring harness tape was wicking moisture and salt up to the connector, but where did the water and salt come from? The harness was being wetted as it traversed the footwell.

The potential presence of water and salt in the footwell of an SUV is a known operating condition. A given vehicle may or may not encounter salt and water during its lifetime, but it is a known potential operating condition for all SUVs. The automaker neglected to take this known operating condition into account when selecting the routing and the physical characteristics of the tape used to wrap the wiring harness. Therefore, the root cause of failure is that the automaker neglected to compensate for a known operating condition in its design. Note that this finding is axiomatic; truly unforeseeable root causes are rare.

In complex systems, it is axiomatic that recurring failures attributable to a previously unidentified root cause nearly always result from neglecting to compensate for known operating conditions in the design.

Lesson Learned 4. Now that the root cause had been identified, the automaker conducted a requirements analysis to clarify the operating conditions, needs, and goals of the fix, and then redesigned to compensate for the overlooked operating condition, minimizing its own and its customers’ risk and expense.

Lesson Learned 5. Since the automaker neglected to compensate for a known operating condition—potential exposure of an SUV to water and salt—in its design, the automaker is legally, financially, and morally responsible for fixing the affected vehicles and making certain that the overlooked operating condition is compensated for in the design of all future models.

Summary of Root Cause Analysis Lessons Learned:

  • A pattern of multiple types of recurring related failures indicates the presence of an unidentified root cause.
  • If repeated fixes fail to stop recurring failures, it indicates fixes are being applied to intermediate causes (symptoms), rather than to the root cause.
  • It is axiomatic that neglecting to compensate for a known operating condition in the design is nearly always the root cause.
  • To fix the root cause, a redesign compensating for the overlooked operating condition is required.
  • The designers neglected to compensate for a known operating condition; therefore, they are responsible for fixing existing and new designs.

What’s Wrong with Cybersecurity Technology?

Now we’ll apply the lessons learned above to cybersecurity:

Lesson Learned 1: A pattern of multiple types of recurring related failures indicates the presence of an unidentified root cause.

In cybersecurity, is there a pattern of multiple types of recurring failures that appear to be related? Yes! A cybersecurity failure occurs whenever a cyberattacker gains control of data and then: 1) steals copies of it, 2) ransoms it, 3) impedes its flow, 4) corrupts it, or 5) destroys it. The lesson learned is that networks, computers, and users aren’t the target of cyberattacks; they are vectors (pathways) to the target—gaining control of data.

Lesson Learned 2: If repeated fixes fail to stop recurring failures, it indicates fixes are being applied to intermediate causes (symptoms), rather than to the root cause.

In cybersecurity, is there evidence of the symptomatic solution fallacy? In other words, is there a history of applying fixes to recurring related failures only to have the failures continue to occur? The answer is an emphatic yes. Successful cyberattacks keep on happening.

Groundhog Day

Why aren’t symptomatic solutions able to permanently solve cybersecurity failures?  Because it’s mathematically impossible for them to do so.  Don’t take my word for it; you can prove it to yourself with a simple thought experiment. 

Compute “total cyberattack potential:”

  • Identify vulnerabilities: Identify every type of user, hardware, software, and network vulnerability that can be exploited to gain control of data. To give a sense of scale, there are currently nearly 170,000 publicly disclosed cybersecurity vulnerabilities, with new ones being discovered all the time.
  • Count vulnerability instances: Add up the total number of users, networks and instances of software and hardware that have the vulnerabilities identified in step 1.
  • For every vulnerability instance, identify and count every vector or combination of vectors a cyberattacker can take to exploit the vulnerability.
  • Multiply vulnerabilities by their vectors to get “total cyberattack potential.”

Now compute “total cyberdefense potential:”

  • Identify every currently available type of defense, including technological defenses and human defenses such as cybersecurity training and education.
  • Subtract unerected defenses due to apathy, ignorance, or a lack of trained personnel, money, or time. 
  • Subtract unerected defenses that don’t yet exist due to the lag time between discovering a vulnerability and developing a defense for it.
  • Subtract unerected defenses arising from vulnerabilities known to cyberattackers but unknown to cyberdefenders.
  • Subtract properly erected defenses that cyberattackers have learned to defeat.
  • Subtract defenses that fail because they were improperly implemented.

It is easy to see that there is far more total attack potential than defense potential, but we’re not nearly finished. 

  • Factor in that cyberwarfare is immensely asymmetrical. If a cyberdefender scores 1,000,000 and a cyberattacker scores 1, the cyberattacker wins.
  • Factor in that the asymmetry grows as the number of connected devices grows. Defense potential grows linearly, since symptomatic point solutions are implemented individually, whereas attack potential grows exponentially due to the network effect (the toy model after this list illustrates the shape of that growth). Think of an ever-expanding game of Whac-A-Mole where new holes and moles appear faster and faster but kids with mallets only show up at a constant rate, and you’ve got the picture. That tends to make cybersecurity successes temporary: a defense that succeeds today can’t guarantee success against tomorrow’s attack. For example, say someone at your company buys a smart refrigerator. Later, via a new smart refrigerator exploit, the refrigerator, which your company has no control over, becomes the initial vector that ultimately results in the theft of company intellectual property. The refrigerator, a single node added to an employee’s home network, negates the efficacy of all the company’s point solutions even if they all worked perfectly, not to mention diminishing the value of prior cybersecurity expenditures.
  • Factor in that cybersecurity is truly democratic; the enemy gets a vote. Cyberattacker strategies, tactics, target valuations, and target selections are based on their cost-benefit analysis, not yours.
  • Finally, factor in that defense is far more expensive than attack with respect to time, money, and trained personnel because it’s much easier to automate and distribute attacks than defenses. A relatively small number of cyberattackers can create work for a much larger number of cyberdefenders.
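
To make the imbalance concrete, here is a minimal toy model of the thought experiment above, written in Python. Every number and growth assumption in it is illustrative, not measured; the point is only the shape of the curves: attack potential compounds with connectivity, while defense potential is added one point solution at a time.

```python
# Toy model of the thought experiment above. All figures are illustrative
# assumptions chosen only to show the shape of the asymmetry.

def attack_potential(devices: int, vulns_per_device: int = 10, vectors_per_vuln: int = 5) -> int:
    """Every vulnerability instance multiplied by every vector to it.
    Vectors multiply with connectivity, so attack potential grows roughly
    with the square of the number of connected devices (network effect)."""
    vulnerability_instances = devices * vulns_per_device
    reachable_vectors = vectors_per_vuln * devices  # each new device adds new paths
    return vulnerability_instances * reachable_vectors

def defense_potential(devices: int, defenses_per_device: int = 20, coverage: float = 0.6) -> float:
    """Point solutions are deployed device by device (linear growth), discounted
    by defenses that are missing, lagging, unknown to defenders, or misapplied."""
    return devices * defenses_per_device * coverage

for devices in (1_000, 10_000, 100_000):
    print(f"{devices:>7} devices: "
          f"attack potential {attack_potential(devices):>16,}, "
          f"defense potential {defense_potential(devices):>12,.0f}")
```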

Accordingly, it’s not possible to calculate risk or a credible return on investment for implementing symptomatic point solutions. In its simplest formulation, risk = likelihood x consequences. It’s not possible to calculate the likelihood of being successfully cyberattacked because it’s not possible to know what exploitable vectors and vulnerabilities remain unprotected after implementing symptomatic point solutions.  

In a successful cyberattack, the attacker has control of your data, so it’s impossible to predict the consequences. You can’t know with certainty what they are going to do with your data, nor can you know with certainty how much third parties like customers, courts, and regulators might penalize you for failing to keep cyberattackers from gaining control of your data. So, when a symptomatic point solution provider claims that buying their stuff will reduce your risk or provide a quantifiable return on investment, it’s meaningless marketing hype. That being said, at the present, symptomatic point solutions do provide a benefit by preventing some unknowable number of cyberattacks from succeeding. However, they are by their nature mitigative, not curative. 

In summary:  

  • Today’s multibillion-dollar cybersecurity industry is based on a symptomatic point solution fallacy.
  • Organizations and individuals can’t implement a sufficient number and variety of symptomatic point solutions quickly enough to achieve anything approaching a permanent solution.
  • The aggregate efficacy of symptomatic point solutions cannot be quantified or predicted, so return on investment cannot be calculated.
  • Symptomatic point solutions are of inherently limited efficacy, and while they are currently necessary, they can only be stopgap measures. As a result, cybersecurity success based on symptomatic point solutions is a crapshoot.  

Lesson Learned 3: It is axiomatic that neglecting to compensate for a known operating condition in the design is nearly always the root cause.

We know that cybersecurity failure is the result of a cyberattacker gaining control of data and doing things with it that its rightful owner didn’t intend. That makes it clear that there is something about data that permits cyberattackers to gain control of it, so deduction starts by asking “What are the relevant properties of data, and how is it controlled?”  

Necessary Ingredients 

Data in this context is digitized information. Digital information is physical, as in, it’s governed by the laws of physics. Data is the result of software converting (digitizing) human usable information into patterns of ones and zeros that are applied to “quantum small” physical substrates: microscopic transistors, electrical pulses, light, radio waves, magnetized particles, or pits on a CD/DVD. 

The nomenclature can be a bit confusing. Files, streams, centralized databases, decentralized databases (blockchains), and software are all forms of digitized information. Software (or “applications”) is the generic name we give to digitized information that performs work on other kinds of digitized information. The digitized information that software performs work on, that is, the information it creates, processes, stores, and transports, is generically referred to simply as data. Software is accurately understood as a manufacturing process because it is a physical mechanism that creates data, uses data as a feedstock to produce new data, and manages data in storage and shipment.

It is important to note, especially when we get to cybersecurity policy, that human beings, contracts, laws, regulations, treaties, righteous indignation, and wishful thinking can’t directly control data—software, and only software, can do that. 

It’s impossible for human beings to directly control the creation, use, storage, and transport of data; only software can do that. Therefore, to be effective, policy must be enforced by software.

Once Upon a Time

When information was first digitized in the early 1950s, the community of people with computers was tiny, known to each other, and most had security clearances. Security was not an operating condition that software makers had to compensate for in their design. Consequently, data was designed with only two components: digitized information (the “payload”) and metadata (information about the payload)—a name and physical address, so software could retrieve existing data and work on it. This two-component data format is intentionally open, that is, it is inherently accessible. That’s a mouthful, so we’ll give the two-component data format a simple name: “open data.”

Fast forward to the Internet. Suddenly, any number of copies of open data can be made and transported anywhere by anyone at any time, processed by any compatible instance of software installed on any compatible device, and every one of those copies is also inherently accessible because the data is open. Open data has no attributes that support constraining who, on what devices, when, for how long, where, or for what purposes it can be used, and no attributes that support tracking, managing, or revoking access once it has been shared. There are also no attributes in open data that support knowing who the data belongs to, what its purpose is, where it’s going, or where it’s been. The original instance and every single copy of open data in storage and in transport is inherently accessible and therefore available for cyberattackers to control.
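
As an illustration only (the structure and field names below are hypothetical, not any particular system’s format), the two-component open data format amounts to little more than this:

```python
from dataclasses import dataclass

# A hypothetical, minimal rendering of two-component "open data".
# Whoever holds a copy can read and use the payload; nothing in the
# structure identifies an owner, constrains use, or tracks access.

@dataclass
class OpenData:
    metadata: dict   # information about the payload, e.g. a name and a physical address
    payload: bytes   # the digitized information itself, stored in the clear

record = OpenData(
    metadata={"name": "q3_forecast.xlsx", "address": "/finance/2024/"},
    payload=b"plaintext bytes that anyone, anywhere, can copy and reuse",
)
```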

Not only can a cyberattacker in control of open data do whatever they want to with it, there is no way to see what they are doing with it or to stop them from doing it.

Data takes whatever form software gives it. With exceedingly rare exception, software still produces open data by default—and therein lies the problem.

It’s no coincidence that the first recorded use of the word cybersecurity was in 1989, the year the commercial Internet was born.

Clear and Present Danger

Open data is inherently hazardous. A hazard is any physical thing or condition that has the potential to do harm. Harm can be physical, emotional, or financial. Data isn’t generally understood to be a physical hazard akin to a toxic chemical or a faulty bridge over a deep gorge because humans aren’t able to directly perceive data, manipulate it, or assess its condition. However, when quantifying how hazardous a thing is, the form and size of the thing and how it operates are irrelevant.

A thing’s hazardousness is determined solely by the harm it causes when it’s not adequately controlled.

By the normal definitions of hazardous and harmful, can there be any doubt that open data is hazardous, and that when cyberattackers gain control of it, it’s harmful?

  • Is open data under the control of cyberattackers doing hundreds of billions of dollars of financial harm every year? Yes.
  • Is it causing human beings endless grief and misery? Yes.
  • In an increasingly digitally controlled physical world, can open data inflict grievous bodily harm or death? Yes. In his book “Click Here to Kill Everybody,” world-renowned cybersecurity expert Bruce Schneier summarizes potential physical harms this way:

“The risks of an Internet that affects the world in a direct physical manner are increasingly catastrophic. Today’s threats include the possibility of hackers remotely crashing airplanes, disabling cars, and tinkering with medical devices to murder people. We’re worried about being GPS-hacked to misdirect global shipping and about counts from electronic voting booths being manipulated to throw elections. With smart homes, attacks can mean property damage.  With banks, they can mean economic chaos. With power plants, they can mean blackouts. With waste treatment plants, they can mean toxic spills. With cars, planes, and medical devices, they can mean death. With terrorists and nation-states, the security of entire economies and nations could be at stake.”

Given its vast destructive potential, open data may be the most hazardous thing mankind has ever created.

Lesson Learned 3 states, “It is axiomatic that neglecting to compensate for a known operating condition in the design is nearly always the root cause.” What known operating condition has been neglected? Continuous unrelenting cyberattack. Yet software makers continue to produce open data as if we were still living in the 1950s and the Internet had never been invented.

So, what is the root cause of cybersecurity failure? 

The root cause is software makers neglecting to incorporate a known operating condition, continuous unrelenting cyberattack, into the design of data and the software that makes and manages it. 

The root cause is not cyberattackers; they are merely opportunists taking advantage of the open data condition. 

Lesson Learned 4. To fix the root cause, a redesign compensating for the overlooked operating condition is required.

Now that we have identified the root cause, we can formulate the top-level engineering requirement needed to fix the problem: 

Conditions:

  • Data is hazardous
  • Cyberattack is continuous and unrelenting
  • Harm is done when cyberattackers take control of data

Needs:

  • Data owners shall be able to control their data
  • From the moment it’s created until the moment it’s destroyed
  • Whether it’s shared or unshared
  • Whether it’s the original or a copy
  • When it’s in storage, in transit, or in use

Goals:

  • The solution shall be least cost/least time to implement

Safety Notice

Notice that even though the topic is cybersecurity, the conversation has shifted towards safety. Safety is the more appropriate way to frame the engineering tasks at hand. Safety and security overlap, but security is reactive; it is oriented towards repelling attacks by erecting defenses. Safety is proactive; it is oriented towards preventing harm by containing and controlling hazards. Safety is the ounce of prevention; security is the pound of cure.

Put a Lid on It

Fortunately, we have at our disposal untold millions of man-hours of safety engineering focused on safely extracting benefits from the use of hazardous things. For example, our homes and the highways we travel on are chock full of beneficial things that can easily kill us, such as high voltage electricity, flammable/explosive natural gas, and tanker trucks filled with flammable or toxic chemicals driving right next to us. These very rarely do us harm because the hazards are contained in storage and in transit, and their usage is controlled. Containment keeps hazardous things in until they are released for use. Controls enable hazardous things to be used safely.

Containers and controls enable the safe use of hazardous things. If you are familiar with propane grills, think of the tank, tank valve, pressure regulator, and burner knobs. They are each engineered to safely extract a specific benefit—delicious grilled food—from highly hazardous propane. The tank is the container which safely contains propane in storage and in transport. The tank valve and pressure regulator are system controls. Even if the tank valve is opened, gas won’t flow, because a safety mechanism in the valve constrains the flow of gas unless a pressure regulator is properly attached. The pressure regulator constrains the flow of gas to a specified maximum volume and pressure. The burner knobs are user controls. They enable the user to instruct the grill to operate within a user-specified temperature range. So, a universal design principle for systems intended to extract a benefit from the use of a hazardous material can be formulated as follows: the hazardous material shall be safely contained until it’s put into use, the user shall be provided controls for extracting the specified benefit from use of the hazardous material, and system controls shall enable the user’s instructions to be carried out safely. How does this apply to the problem of open data?

Data is physical and hazardous, therefore, the only way to use it safely is to contain it when it’s in storage and in transit and control it when it’s in use. 

Data can be contained with strong encryption. If a cyberattacker gains control of strongly encrypted data but has no access to its keys, the attacker can’t get it out of containment and do harmful things with it. When continuous unrelenting cyberattack is a known operating condition, there is no good reason to not encrypt all data by default the moment it is created, and from then on, only decrypt it temporarily for use. Only a tiny fraction of data is intended to be public. If you are its rightful owner, you can decrypt it and make it public whenever and wherever you choose. Can software encrypt data by default? Of course, it can. It’s known art.
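
As a minimal sketch of containment by default, the snippet below uses the Fernet recipe from the widely used Python cryptography package purely as an example; any strong, well-reviewed cipher would serve, and key management (the hard part) is deliberately out of scope here.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Containment sketch: data is "born encrypted" and only ever stored or
# transported as ciphertext; it is decrypted transiently for authorized use.

key = Fernet.generate_key()   # in practice, held by the data owner or a key service
vault = Fernet(key)

def create_data(plaintext: bytes) -> bytes:
    """Encrypt at the moment of creation; only ciphertext leaves this function."""
    return vault.encrypt(plaintext)

def use_data(ciphertext: bytes) -> bytes:
    """Temporarily take data out of containment for a permitted use."""
    return vault.decrypt(ciphertext)

stored = create_data(b"only ciphertext ever touches disk or the network")
print(use_data(stored))
```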

Shot Caller

The first principle of controlling data is that control must be continuous. Data is distributed by making copies, and the copies can be processed by every compatible instance of software in existence. Therefore, the original and every copy must be accompanied by its user’s instructions. If those instructions don’t accompany the data, the recipient of the data, licit or illicit, can do whatever they want with it, and we are back to square one—open data. 

The second principle of control is that each instance of data must have a unique, verifiable identity to support updateability and auditability. User instructions may need to be updated, such as changing who may access the data. The unique, verifiable identity supports traceability, usage logging, and proof of ownership, which means that the creation, distribution, and use of data can be fully auditable.

To accomplish this, software must make and manage a third data component. Open data has two components, the payload and metadata. The third component is instructions. When software takes the data out of containment, it consumes the data owner’s instructions and carries them out. When software shares two-component data, data owners are at the mercy of whomever is in control of the copy. When software shares three-component data, each copy acts as a dynamic proxy for the owner; it carries with it the data owner’s will and choices and can be updated and audited as needed. For brevity, we’ll call three-component data that is encrypted by default “controllable data.”
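
A hypothetical shape for controllable data is sketched below: the payload travels encrypted, and every copy carries a unique identity plus the owner’s instructions, so compliant software can enforce, update, and audit those instructions wherever the copy goes. The field names are illustrative assumptions, not a published format.

```python
import uuid
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical three-component "controllable data" record: payload, metadata,
# and the owner's instructions, which act as the owner's proxy for every copy.

@dataclass
class Instructions:
    owner: str                 # provable identity of the data owner
    allowed_users: list        # who may request temporary decryption
    expires: Optional[str]     # temporal control, e.g. "2026-01-01T00:00:00Z"
    allowed_locales: list      # physical or virtual locales where use is permitted
    permitted_uses: list       # intended-use controls, e.g. ["view"] but not ["forward"]

@dataclass
class ControllableData:
    identity: str = field(default_factory=lambda: str(uuid.uuid4()))  # unique, auditable ID
    metadata: dict = field(default_factory=dict)      # information about the payload
    instructions: Optional[Instructions] = None       # the third component: the owner's will
    payload_ciphertext: bytes = b""                   # encrypted by default, never in the clear
```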

Controls provided by software enable data owners to instruct the system how their data can and cannot be used. To use data safely, the minimum controls are listed below; a sketch combining them follows the list:

  • Authentication Controls. Authentication determines who and what may temporarily decrypt data for use. A user must authenticate themselves to use their own devices safely, but when connecting their device to another device with which data will be shared, it is unsafe to authenticate the user only. Here’s why:

To do work, computers require three physical actors working in unison: 1) a user issuing instructions to, 2) an instance of software installed on, 3) an instance of hardware. 

Cyberattackers only need to compromise one of these three actors to take control of data. Without consistently authenticating the user, instance of software, and instance of hardware requesting to connect, it is not possible to be certain who or what is on the other end of the line. Because each actor has unique physical characteristics, each combination of user, instance of software, and instance of hardware can be cryptographically authenticated. This process can be automated and made invisible to the user. It’s known art. We’ll refer to authenticating the user, instance of software, and instance of hardware as “full-scope authentication.”

  • Temporal Controls. Most data has a usable life (isn’t intended to last forever), so data owners need to be able to control when and for how long their data can be used, and revoke access to shared data when recipients no longer need it.
  • Geographical Controls. There are many use cases where data can only be used safely within specified physical or virtual locales. For example, physical location controls enable use only within a specified country. Virtual location controls enable use only within a specified organization’s network.
  • Intended Use Controls. Usage controls constrain data to specified uses. For example, software can use data for purpose A, B, and C but not for purpose X, Y, or Z. Intended use controls can be customized for specific use cases, such as turning off a recipient’s ability to forward data to others or to export it from the controlling application. Intended use controls can be set directly by the user or they can be imported. When data is shared with a trusted third party, pre-agreed upon intended use controls can be imported from the third party and applied to the user’s data, and the software will objectively manage the use of the data for both parties.
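
The sketch promised above combines these minimum controls into a single hypothetical access check that compliant software might run before taking controllable data out of containment; every function and field name here is an illustrative assumption, not a specification.

```python
from datetime import datetime, timezone

# Hypothetical access decision for controllable data: every check must pass
# before the payload is decrypted for this specific request.

def full_scope_authenticated(request: dict, trusted: dict) -> bool:
    """Authenticate the user AND the instance of software AND the instance of
    hardware making the request, e.g. by verifying cryptographic identities."""
    return (request["user_id"] in trusted["users"]
            and request["software_fingerprint"] in trusted["software"]
            and request["hardware_fingerprint"] in trusted["hardware"])

def may_use(request: dict, instructions: dict, trusted: dict) -> bool:
    now = datetime.now(timezone.utc).isoformat()
    return (full_scope_authenticated(request, trusted)                              # authentication
            and (instructions["expires"] is None or now < instructions["expires"])  # temporal
            and request["locale"] in instructions["allowed_locales"]                # geographic
            and request["purpose"] in instructions["permitted_uses"])               # intended use

# Only when may_use(...) returns True does the software decrypt the payload,
# and only for this request; the decision and the use can both be logged for audit.
```
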
It Wasn’t Me

Cyberattackers make a handy scapegoat. They provide endless revenue opportunities for symptomatic point solution providers and shift responsibility away from software makers, but the fundamental mistake was ours; we allowed open data to metastasize throughout the connected world. For the reasons explained above, it is not possible to cure our open data cancer by treating its symptoms with a couple of aspirin, a few dabs of antibiotic cream, and some bandages. 

A hard truth about our current cybersecurity crisis is that we did this to ourselves.

We got into this mess one piece of software and data at a time, so we’ll have to get out of it one piece of software and data at a time. 

Agile Software Development, Known Art, and Updates to the Rescue

The “get out of it one piece of software and data at a time” requirement sounds daunting, if not impossible, but it isn’t as bad as it sounds due to agile software development, the availability of “known art,” and the speed at which large-scale software changes propagate via the Internet. 

A key attribute of agile software development is releasing incremental improvements at short intervals, which is why we all experience a constant stream of software updates and patches. It is utterly routine for software makers to implement small to very large-scale changes to tens of millions of instances of their software overnight. To speed new capabilities to market, agile development relies heavily on prepackaged code developed by third parties, especially for functions that are common to all software and that span differing software architectures and programming languages. Creating, storing, transporting, and processing data are common to all software. The phrase “known art” above and below means there are multiple sources of prepackaged code that can enable the shift to controllable data to be quickly implemented in existing and new software. The key point is this:

No new technology must be invented to shift software from creating open data to creating controllable data.

As a person whose first professional software development job in 1986 was to design and build accident analysis software for transportation safety experts, and who has been working with software developers ever since, I do not want to trivialize the amount of work required to shift the digital world from open data to controllable data and from partial authentication to full-scope authentication. It will cost tens of billions of dollars and millions of man-hours of software development labor, and it will take years to fully accomplish. However, the cost of fully implementing controllable data and full-scope authentication is a fraction of the cost of continuing to produce open data and partially authenticate. 

Left untreated, the total cost of cybersecurity failure (symptomatic point solution costs + cybersecurity losses) will continue to increase, but shifting to controllable data and full-scope authentication will sharply reduce both costs over time. To be sure, there will be initial and ongoing costs, but once the initial implementation labor is paid for, operating costs decrease and level out. Nonetheless, getting software makers to change their priorities to making their products safe rather than rolling out the next cool new feature will by no means be easy. However, when the diagnosis is fatal-if-left-untreated cancer, one should expect priorities to change while treatment is underway.

Results of Implementation

Since this is a “big picture” article, the items in the list below are necessarily assertions without supporting technical detail. However, these results are not speculative, having been achieved in well-tested commercial software:  

  • Controllable data can only be decrypted by authenticated users
  • Controllable data can only be used for the purposes its owner permits
  • Stolen controllable data is unusable
  • Remote cyberattackers can’t authenticate at their destination
  • Malware can’t attach to software
  • Stolen user credentials don’t grant access
  • Stolen or cloned devices don’t yield usable data.

Ruining the Economics of Cyberattack

Would fully implementing controllable data and full-scope authentication prevent every cybersecurity failure? Of course not. There are scenarios, particularly those aided by human gullibility, ineptitude, and negligence, where cybersecurity can and will continue to fail. However, cyberattacks are carried out by human beings for the purpose of acquiring money and/or exercising power, and there is a cost/benefit analysis behind every attack. Controllable data and full-scope authentication, even though imperfect, increase the cost of illicitly gaining control of data by several orders of magnitude, thereby significantly diminishing the motivation to attack—and that’s the point.

Programming Ethics

The staff and management of many software makers are completely unaware of the inherent hazardousness of open data and partial authentication and their causal link to preventable cybersecurity harms. Many are genuinely committed to programming ethics, but their concept of cybersecurity is based on the symptomatic point solution fallacy. The fallacy is continually reinforced by their professors, peers, textbooks, trade publications, and endless articles about cybersecurity, most of which lead with images of a scary faceless hooded figure hunched over a keyboard—the dreaded cyberattacker. It would be unreasonable to hold them responsible for believing what they’ve been taught, especially given that symptomatic point solutions actually do thwart some cyberattacks; they’re just inherently insufficient. That being said, once staff and management understand that cybersecurity failure is caused by software design, not cyberattackers, many professing adherence to programming ethics will have some hard decisions to make.

Continue to Part Two – Cybersecurity Policy