
The True Cause of Cybersecurity Failure and How to Fix It: Part Two

By David Kruger (featured on Expensivity.com)
Cybersecurity Policy

When it comes to fixing a root cause, there are two questions. The first is “Who is able to apply the fix?” and the second is “Who is responsible for applying the fix?” The “who is able” question is about engineering because it’s about redesigning an engineered process. That was the subject of Part One—Cybersecurity Technology.

“Who is responsible” is about policy because responsibility for preventing harm, and liability for failing to prevent it, are decided by policymakers, that is, by legislators and regulators. The role of policymakers is crucial because the strategy of those causing preventable harm is to evade that responsibility. That’s the subject of Part Two—Cybersecurity Policy.

The first question was answered earlier: only software makers can apply the fix, because data is the hazard and data takes whatever form software gives it. Logically, you would expect the answer to “Who is responsible for applying the fix?” to be “Obviously, software makers are responsible because 1) their product is causing preventable harm, and 2) they are the only ones able to fix it.” That entirely reasonable expectation would be buttressed by the fact that essentially every other kind of manufacturer of potentially harmful things, such as planes, trains, automobiles, chemical plants, pharmaceuticals, mining and pipeline equipment, children’s toys, and electrical appliances, is held responsible and liable for design shortcomings that cause preventable harm.

Unfortunately, perhaps tragically, policymakers aren’t holding software makers responsible for the preventable harms they are causing because policymakers too are caught up in the symptomatic point solution fallacy. In Part Two, we are going to focus on examining software maker motives, evasion tactics, and preventable harms resulting from impeding the flow of data, and finish with policy recommendations and a look towards the future. Hold on tight – this long and bumpy ride is about to get rougher.

Close Encounters of the Third Kind

We have been taught to think of cyberattackers as being one of two kinds: criminal cyberattackers, who gain control of others’ data to make money, or military/terroristic cyberattackers, who gain control of others’ data to project military or political power. There is a third kind: software makers who systematically destroy privacy so they can gain control of as much “human data” as they possibly can.

Human data in this context is defined as the totality of all data about a specific person that can be gleaned from digital sources. This third kind of cyberattacker collects as much human data as possible because it is the “raw material” on which their business is based. We’ll call this third kind of cyberattacker “human data collectors” or HDCs for short. 

HDCs include the world’s largest software makers—Google, Facebook, Microsoft, Amazon, and Apple—so-called “big tech”—followed by an enormous number of smaller players and a vast supporting ecosystem. HDCs are categorized as “cyberattackers of the third kind” because they are technologically, methodologically, motivationally, and morally identical to criminal and military/terroristic cyberattackers.

  • Technologically, all three kinds of cyberattacker succeed by gaining control of others’ data. 
  • Methodologically, all three kinds of cyberattacker lie, inveigle, and deceive to gain control of others’ data. 
  • Motivationally, all three kinds of cyberattacker gain control of others’ data to make money, project power, or both.
  • Morally, all three kinds of cyberattacker are indifferent to the harms they know they are causing.

 The technological goals, methods, motivations, and morals of all three kinds of cyberattacker are known operating conditions that policymakers must compensate for in the design of their policies. 

Lie, Inveigle, Deceive

At any given moment, HDCs around the globe, especially “big tech” HDCs, are embroiled in hundreds of lawsuits brought by individuals and governments. They are accused of bad conduct that includes an astounding array of privacy violations, deceptive and unfair trade practices, price-fixing, anticompetitive behavior, violation of antitrust statutes, censorship, breach of contract, human resources violations, defamation of character, collusion, conspiracy, copyright infringement, patent infringement, and intellectual property theft. Collectively, HDCs have paid out billions of dollars in fines, penalties, settlements, judgments, and punitive damages. You would be hard pressed to find anyone knowledgeable of HDCs’ practices, other than their attorneys and publicists, who would assert they are of high integrity and are trustworthy.

The primary difference is that criminal and military/terroristic cyberattackers are outlaws, whereas HDCs operate as if they are above the law. HDCs will strenuously object to being characterized as cyberattackers, but if it looks like a duck, walks like a duck, and quacks like a duck . . .

It’s All About the Benjamins

Why are HDCs so willing to abuse their own users? For the money, and the power that comes from having lots of it. In 2002, Google discovered that the raw human data it was collecting from its users to improve the user experience could be repurposed to deliver targeted ads, that is, ads delivered to an individual’s screen in real time based on what that individual was currently searching for. Those ads could also be repeated, a technique called ad retargeting. That capability turned out to be astoundingly lucrative. As of February 2021, Google’s market capitalization was approximately 1.4 trillion US dollars, and about 85% of its revenue comes from advertising. About 95% of Facebook’s revenue comes from selling ads.

That’s No Moon

Knowledge really is power, and HDCs act as gatekeepers to the sum of all digitized surface web content plus the sum of all the digitized human data they have collected to date. That’s a concentration of power never before seen in human history. Let’s take a closer look at current preventable harms enabled by that concentration.

Spilt Milk

HDCs are creatures of open data; they could not have come into existence, or continue to exist in their current form, without it. Their internal use of open data and dependence on symptomatic point solutions have resulted in multiple preventable harmful breaches of user personal information, and it is unreasonable to project that such breaches have come to an end.  Future preventable breach harms are expected. 

Free Spirit

In the list of cybersecurity failure types described previously, impeding the flow of data is not well understood. Usually, it’s defined only as disrupting the flow of data, as happens in a denial-of-service attack. Another, more insidious and arguably more harmful, form of impedance is distorting the flow of information.

The ideal of the early Internet was to be the world’s public library, one that would provide near instantaneous and unrestrained access to the sum of all information available on the surface web (with one notable universal exception—child pornography). 

Nobody expected that the information on the new-fangled world wide web would be completely accurate, truthful, and non-contradictory. Why? Because truth, lies, mistakes, misinformation, disinformation, bias, libel, slander, gossip, and the means to broadcast them to enormous audiences existed (gasp) before the Internet. A vital characteristic of a free society, pre-Internet and now, is that people 1) have the right to unimpeded access to public information, 2) are responsible for their own due diligence, and 3) are free to arrive at their own conclusions. Distorting the flow of public information diminishes all three, and harms individuals and society as a whole.

Nudge, Nudge, Wink, Wink

Ads are a mix of useful to useless and entertaining to irritating, but nonetheless, producers have a legitimate need to market to their prospects. Advertising and persuasive marketing copy are neither illegal nor immoral. Targeting and retargeting ads based on real-time human behavior provided advertisers with a genuinely new capability, explained below by Shoshana Zuboff in “The Age of Surveillance Capitalism” (reviewed by Expensivity here):

“Advertising had always been a guessing game: art, relationships, conventional wisdom, standard practice, but never “science.” The idea of being able to deliver a particular message to a particular person at just the moment when it might have a high probability of actually influencing his or her behavior was, and had always been, the holy grail of advertising.”

However, Google and other HDCs didn’t stop there—and therein lies the problem. 

Google, followed shortly by Facebook and others, discovered that, for a given individual, the greater the volume and diversity of raw human data they can collect and the longer they can collect it, the more effectively that data can be used for slow, surreptitious algorithmic nudging that changes the user’s beliefs and behaviors. In other words, HDCs treat human beings as perpetual guinea pigs in an endless and thoroughly unethical experiment, using software designed to learn how to manipulate each user most effectively. This is unethical because the intent of the HDCs is to use their software to diminish personal autonomy, and they hide that intent from their users for the most obvious of reasons: if users became aware of how they are being manipulated and for what purposes, they’d likely be angered and demand that the manipulation stop.

In addition to nudging, since users see more ads the longer they stay logged on, HDCs began using their newfound user manipulation capability to addict users to their software. Details about the mechanisms of addiction are not within the scope of this article, but most rely on presenting information and controlling its flow in a manner designed to generate and reinforce a dopamine hit or to amplify negative emotions such as fear, anger, envy, guilt, revenge, and lust. HDCs’ algorithmic nudging and intentional addiction are increasingly understood to be harmful to individuals and society at large, as attested by numerous studies and whistleblower testimony. HDCs are keenly aware of the harm, but it hasn’t stopped them.

 Advertising isn’t the problem; user manipulation via surreptitious algorithmic nudging and intentionally addicting users is.

 The ability to manipulate users for one purpose is the ability to manipulate users for any purpose.

Off Target

The promise and purpose of search technology is that with it a user can find what they are looking for, not what the search engine provider deems worthy of being found. That creates an inherent conflict of interest when search providers such as Google are able to increase their ad revenues by distorting the search results delivered to users. Distortion, in this context, is defined as arbitrarily differentiating search results between users, changing their order, and/or withholding results for the purpose of changing users’ beliefs or behavior. The distortion of search results, whether under the guise of “helping users to make good decisions” or of selling advertising, is still distortion. The quid pro quo of distorted search is: “You give us all your human data, and we’ll use it to decide what we think is best for you to know.” Such distortion is enabled by enormously complex search algorithms that are claimed as trade secrets. The use of complex algorithms is not the problem; holding them secret is.

 When search results are distorted and search algorithms are held secret, the user cannot know how search results are being used to manipulate them.

A Day at the Races

Another manifestation of coupling advertising rates to user manipulation is Search Engine Optimization (SEO). In horse racing, a “tout” is a person who charges bettors for inside information about upcoming races. Touts are good for racetrack owners because people who pay for their knowledge are likely to bet more often and in larger amounts, especially if the tout really does facilitate the occasional win. 

That’s a pretty good description of the SEO business—they are touts for Google’s racetrack. In 2020, SEO cost businesses about $39 billion USD and millions of man-hours spent producing SEO content. The problem with SEO is not that it is ineffective, but that it smacks of restraint of trade. The SEO tout/racetrack game is exclusionary. Businesses, especially small businesses, including the approximately five million US businesses with fewer than twenty employees, may not have the skill or money to engage in SEO; it’s not cheap. But without paying Google’s touts, they cannot be assured of being found.

Stage IV Cancer

Thanks largely to Google’s and Facebook’s success, the collection of raw human data for purposes of monetization has metastasized throughout a significant portion of the software-making world. Some HDCs collect raw human data for their own use, but most collect it for resale. There are millions of HDC apps in the various app stores that are surveillance platforms first and apps second. These smaller HDC software makers sell human data to data brokers, who altogether do about $200 billion a year in human data trafficking. In the last few years, HDC software makers have been joined by some of the world’s largest hard goods manufacturers whose products happen to contain software that connects to the Internet. Examples include automakers, television and other home entertainment device makers, home appliance makers, computer, mobile phone, and tablet makers, mobile device providers, toymakers, and internet service providers, all anxious to cash in on raw human data.

Despite all this, in a fine example of Orwellian doublespeak, HDCs publicly proclaim themselves to be the champions and protectors of privacy while simultaneously hoovering up as much raw human data as they possibly can. They have redefined privacy from “I, as an individual, decide what, when, and with whom I’ll share information” to “We, as a company, will collect every scrap of your raw human data we can, declare it to be company property, do with it what we will, share it with whom we want, guard it from our competitors—and call the whole thing privacy.” When HDCs say, “We take extraordinary measures to protect your privacy!”, what they mean is “We take extraordinary measures to protect our property!” 

Unnecessary Roughness

Many believe that mass raw human data collection is inevitable because advertising-supported HDCs must have it to provide their services for free. The HDC value equation has been: “For users to benefit from our service for free, we must collect identifiable human information to fund our operation by selling targeted ads.”

That’s no longer true.

Privacy-enhancing technologies (PETs) that didn’t exist a few years ago are able to extract the user attribute data needed to target ads from raw human data without extracting identity information. Software can make such attribute-only data controllable, so we’ll refer to it as controllable attribute-only data. Modern PETs use advances in math to assure that attribute-only data cannot be analyzed to identify specific individuals, and additionally, such analysis can be reliably prevented because the data is controllable. Modern PETs should not be confused with older data anonymization technologies that suffered from recurrent data re-identification problems.
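
For readers who want a concrete picture, here is a minimal, purely illustrative Python sketch of the separation modern PETs aim for: identity fields are dropped and quasi-identifiers are coarsened so that only the attributes needed to target an ad survive. The record and field names are hypothetical, and real PETs rest on mathematical guarantees (and on data controllability) that this toy example does not implement.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical raw event, as an HDC might collect it: identity fields mixed
# in with the behavioral attributes an advertiser actually needs.
@dataclass
class RawEvent:
    user_id: str           # direct identifier
    email: str             # direct identifier
    precise_location: str  # quasi-identifier
    age: int
    interests: List[str]   # e.g., ["hiking", "budget airlines"]

# Attribute-only record: enough to target an ad, nothing to identify a person.
@dataclass
class AttributeOnlyRecord:
    age_band: str          # coarse bucket instead of exact age
    interests: List[str]

def to_attribute_only(event: RawEvent) -> AttributeOnlyRecord:
    """Drop identifiers and coarsen quasi-identifiers, keeping only the
    attributes needed to target an ad."""
    band = "under 25" if event.age < 25 else "25-54" if event.age < 55 else "55+"
    return AttributeOnlyRecord(age_band=band, interests=list(event.interests))

record = to_attribute_only(
    RawEvent("u-1138", "jane@example.com", "47.6062,-122.3321", 34, ["hiking"])
)
print(record)  # AttributeOnlyRecord(age_band='25-54', interests=['hiking'])
```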

The advent of controllable attribute-only data has a profound implication that policymakers should factor into their thinking. As before, since this is a big-picture article, technical detail isn’t provided for the following assertion, but, like the other technologies described above, it’s achievable with existing technology:

  HDCs can be monetized by targeted advertising without collecting raw human information. 

Additionally, there are search engines that:

  • Record zero information about the searcher
  • Do not distort search results
  • Enable users to make their own customizable persistent search filters. In other words, the user controls the search algorithm, not the search engine provider.
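
The last capability, user-made persistent search filters, is easier to picture with a small sketch. The following Python is hypothetical and client-side only; it shows a filter the user writes and keeps, which blocks and boosts domains according to the user’s own rules with no hidden re-ranking by the provider. It is not the API of any real search engine.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class SearchResult:
    title: str
    url: str
    domain: str

# A persistent filter the user writes and owns; the same rules apply to
# every future search, and nothing about the user is sent to the provider.
@dataclass
class UserSearchFilter:
    blocked_domains: Set[str] = field(default_factory=set)
    preferred_domains: Set[str] = field(default_factory=set)

    def apply(self, results: List[SearchResult]) -> List[SearchResult]:
        kept = [r for r in results if r.domain not in self.blocked_domains]
        # Stable sort: preferred domains float to the top; otherwise the
        # engine's original ordering is preserved (no hidden re-ranking).
        return sorted(kept, key=lambda r: r.domain not in self.preferred_domains)

my_filter = UserSearchFilter(blocked_domains={"contentfarm.example"},
                             preferred_domains={"docs.example.org"})
results = [
    SearchResult("Spam", "https://contentfarm.example/a", "contentfarm.example"),
    SearchResult("Blog post", "https://blog.example.com/b", "blog.example.com"),
    SearchResult("Reference docs", "https://docs.example.org/c", "docs.example.org"),
]
for r in my_filter.apply(results):
    print(r.title)  # Reference docs, then Blog post; the content farm is gone
```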

The technology to offer privacy-preserving, undistorted, user-controllable search supported by privacy-preserving targeted advertising exists. There is nothing to prevent existing advertising-supported search engines such as Google from “reforming,” and ditto for advertising-supported social media. The point is that advertising-supported HDCs can reform, but whether they will reform remains to be seen.

These Are Not the Droids You’re Looking For

Before suggesting specific policy fixes, it’s important to understand exactly what policy needs to fix. HDCs have been able to evade responsibility for the preventable harms they cause by 1) blame shifting and 2) arbitrarily transferring risk to their users. 

HDCs blame cyberattackers for problems they themselves cause and only they can cure. They transfer what should be their own risk to their users by presenting them a Hobson’s choice embodied in license agreements. These agreements are filled with legalese so dense that the attorneys who don’t write them have a hard time figuring out what they mean; the general public doesn’t have a chance. So, as a public service, I’ve translated and summarized them here: “You must click Accept, otherwise you can’t use our software. If you click Accept, you acknowledge that you can never ever hold us responsible for anything, and that the raw human data we take from you is our property, not yours, so we can do whatever we want to with it.” 

When a user (or their attorney, or state attorney general, or federal official) complains, HDCs point to the user’s acceptance of the license and declare they aren’t responsible, no matter how egregious the harm. 

Brave Old World

HDCs’ licensing strategy is designed to free them from any vestige of fiduciary duty. Fiduciary law traces its roots back to the Code of Hammurabi in 1790 BC, through the Roman Empire, early British law, and up to the present day. 

The purpose of fiduciary law is to compensate for two sad facts of human nature. In unequally powered business relationships, 1) businesses with more power will abuse customers with less power, and 2) the greater the disparity of power between the business and the customer, the more likely customer abuse will occur if left unchecked. Fiduciary law inhibits such abuse by assigning the business statutory duties to act in the best interests of its customers. There are many unequal power relationships between many kinds of businesses and customers, so there is an enormous amount of common and black letter fiduciary law for policymakers to draw on. Common fiduciary duties include:

  • Duty of Care. Businesses have a duty to not harm their customers.
  • Duty of Loyalty. Businesses have a duty to not place their interests above the interests of their customers.
  • Duty of Good Faith. Businesses have a duty to act in good faith, meaning they must deal fairly with customers. Examples of acting in bad faith towards customers include lying to them, using deceptive practices, and shirking their obligations.
  • Duty of Confidentiality. Businesses have a duty to protect their customers’ sensitive or confidential information.
  • Duty of Disclosure. Businesses have a duty to act with candor, that is, to answer customers’ and regulators’ questions honestly.

A Slap on the Wrist

The high number of government-brought lawsuits against HDCs all around the world, the thousands of pages of laws and regulations designed to rein in HDCs’ bad behavior, the employment of thousands of regulators, and fines, penalties, judgments, and settlements in the billions of dollars make it abundantly clear that policymakers are aware of the harms HDCs are causing.

However, it is also abundantly clear that policymakers have fallen prey to the symptomatic point solution fallacy in two ways. First, to date, legislation, regulation, and litigation designed to reduce cybersecurity failure have been deterrence-based, that is, if you don’t adhere to behavior A, you’ll get punishment B. Just like technological symptomatic point solutions, deterrence policy is an attempt to stop bad behavior (symptoms) instead of eliminating the policy design deficiencies that enable HDC bad behavior (fixing the root cause).

Deterrence-based policy, like its technological symptomatic point solution cousin, is afflicted with a math problem. Deterrence implemented as criminal prosecution or political or military reprisal for successful cyberattacks cannot achieve a high enough ratio of successful prosecutions or reprisals to successful attacks to generate any real fear on the part of cyberattackers. What minuscule success deterrence policy has achieved is perceived by criminal and military/terroristic cyberattackers as acceptable risk.

The same applies to deterrence measures contained within privacy laws and regulations. The ratio of punishments to revenues generated while violating laws and regulations is so low that big tech HDCs absorb them as merely the cost of doing business. Millions to billions of dollars in annual monetary penalties might sting a bit, but when the aggregate cost of non-compliance is a small percentage of annual revenue, easily offset by charging captive advertisers slightly higher ad rates, the penalties don’t change much of anything. The tens of thousands of small HDCs clogging up app stores tend to be domiciled overseas and too small to be worth prosecuting.

That’s why deterrence has hardly been more than a speed bump to cyberattackers, including big tech HDCs in their drive to acquire all human data and keep using it in harmful ways.

 If the metric for deterrence policy success is the degree to which it has decreased successful cyberattacks, including breaches, human data collection, lying, inveiglement, deception, and user manipulation, it’s had little success.

 The root cause of ineffective policy isn’t insufficient deterrence, it’s allowing software makers to arbitrarily exempt themselves from fiduciary duty and transfer their risk to their users.

A Poke in the Eye

Furthermore, in the domain of unintended consequences, deterrence policies are based on the technological symptomatic point solution fallacy. Businesses are assumed to be negligent if they have a data breach. That’s correct in some cases, but in others, businesses, particularly small and medium-sized businesses, suffer increased compliance costs or have been bankrupted by data breaches that they had no ability to prevent. Basing deterrence policy on the mistaken belief that symptomatic point solutions can reliably prevent data breaches, and penalizing businesses accordingly, makes about as much sense as fining the pedestrian injured in a hit and run because they failed to jump out of the way.

It is wrong to punish businesses for harms caused by software makers that the business has no way to prevent. Punishing a business for data breaches provably caused by their own negligence is appropriate; punishing them for software makers’ negligence is not. If policymakers can’t tell the difference, they need different policies, policies that are effective and that don’t punish the victim. 

Possession is Nine-Tenths of the Law

The term “raw material” as applied to human data in this article is meant literally. Human data is “raw” at the point of collection. Raw human data has intrinsic economic value, but after it’s further processed by HDCs, its refined value is much higher. Think of an individual’s raw human data as you would crude oil, gold ore, or pine trees. Each has intrinsic economic value, and none can be taken from a landowner by oil, mining, or lumber companies without the landowner’s agreement. Raw human data is material because, as explained earlier, data is as physical as a brick—it’s just quantum small.

Wikipedia says that the saying “possession is nine-tenths of the law” means “ownership is easier to maintain if one has possession of something, or difficult to enforce if one does not.” The legal concept of ownership is predicated on an individual’s practical ability to control a physical thing’s use. Control enables possession, and possession codified in law confers legal ownership. 

In law, possession can be actual or constructive. Actual possession means the thing is under your sole control. Constructive possession means a third party is permitted to use the thing, but your legal ownership is maintained, and usage is controlled by means of a contract. A simple example is a home that you own as your primary residence (actual possession) and a house that you own but lease to others (constructive possession). Constructive possession is especially relevant to data because data is usually shared by making a copy and transmitting it, not sending the original. Since a data owner would likely retain the original, it’s more appropriate to see shared data as having been leased, not sold.

 Controllable data enables constructive possession of data when it’s leased to others, and it enables software to objectively enforce both sides of the lease.  

 If users legally own the raw human data that their own digital activities create, it’s reasonable for policymakers to assert that fiduciary duties apply to those who collect it. 

The Easy Button

The most common objection to data ownership is that self-management of owned data is overly complex. That view is based on the complexity of so-called “privacy controls” offered by big tech HDCs, controls which have every appearance of being deliberately obtuse. As a software developer and designer, an industrial safety controls designer, and an IT system administrator, I am acutely aware that privacy controls could be greatly simplified, but they aren’t. Instead, they are hard to find, frequently change locations, get renamed, are vaguely defined, and provide no feedback to verify they are working. That’s either evidence of astonishingly poor design or an intent to convince users that managing their privacy just isn’t worth it. I’m going with the latter.

In fiduciary relationships, the burden of control complexity falls on the fiduciary, not the customer. It is the fiduciary’s duty to reduce complexity because doing so decreases the chances that customers harm themselves when using the fiduciary’s product or service. If you open a bank or investment account, there is no expectation that you, the customer, are responsible for logging in to the fiduciary’s software as a system administrator and doing all the complex configuration required for your account, is there? Of course not.

As stated earlier with respect to controllable data, “When data is shared with a trusted third party, pre-agreed intended use controls can be imported from the third party and applied to the user’s data.” That technological capability, in conjunction with fiduciary duty, puts the onus of managing complex controls on the fiduciary, not the customer. It’s part of the fiduciary’s duty to disclose in plain language how shared data will be used. That’s readily accomplished with an online portal with a simple user interface enabling such usage to be modified or revoked in accordance with contract terms—the same as we have now with banks and investment accounts. From a design standpoint, there is no reason for owned data shared with a fiduciary to be difficult to control.
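
As a rough illustration of that design point, here is a minimal Python sketch, with hypothetical names, of what pre-agreed intended use controls attached to shared data could look like: every access is checked against the owner’s terms, and the owner can revoke the lease at any time through the portal. A real controllable-data implementation would enforce these terms cryptographically rather than by trusting a well-behaved class, so treat this as a sketch of the contract logic only.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Set

# Hypothetical machine-readable form of the "pre-agreed intended use" terms
# a fiduciary would also disclose in plain language.
@dataclass
class UsagePolicy:
    permitted_uses: Set[str]   # e.g., {"account_servicing"}
    expires: datetime
    revoked: bool = False      # flipped by the owner via the portal

@dataclass
class SharedData:
    payload: dict
    policy: UsagePolicy

    def use(self, purpose: str) -> dict:
        """Check every access against the owner's terms before releasing
        the payload; refusal is how the 'lease' is enforced."""
        if self.policy.revoked or datetime.now(timezone.utc) > self.policy.expires:
            raise PermissionError("lease expired or revoked by the data owner")
        if purpose not in self.policy.permitted_uses:
            raise PermissionError(f"'{purpose}' is not a permitted use")
        return self.payload

policy = UsagePolicy({"account_servicing"},
                     expires=datetime(2030, 1, 1, tzinfo=timezone.utc))
shared = SharedData({"name": "Jane"}, policy)
shared.use("account_servicing")    # allowed under the agreed terms
policy.revoked = True              # the owner revokes via their portal
# shared.use("account_servicing")  # would now raise PermissionError
```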

Clear As Glass

In fiduciary relationships, the ability to inspect what the fiduciary is doing with the assets they are entrusted with is the norm. It has been asserted by some HDCs that such an inspection of a user’s data isn’t possible “because the data isn’t organized that way.” That’s not credible. 

When an HDC collects raw human data for its own purposes, say for targeting ads, it has knowledge of the user so granular that it can place ads selected for that specific user, count their ad clicks and views, monitor their movement about the page to gauge attention, store that information and recall it for future trend analysis, and invoice the advertiser for each ad seen or clicked. Given that level of capability and the amount of stored detail held for each individual user, HDC assertions that they don’t have the technical wherewithal to disclose the sources, holdings, and uses of information related to a specific user are ludicrous. Likewise, those HDCs who collect human data for resale must have detailed information about the nature of the data they have collected and who they collected it from in order to value it and invoice the buyer. It’s not credible to assert they can’t disclose the sources, the content of the information collected, and who they sold it to.

The problem isn’t that HDCs can’t produce and disclose the data source, content, and usage information for each user; it’s that they desperately don’t want to. Why? Because if their users saw the volume and detail of the information HDCs hold on them and how they are using it, they would likely be stunned, horrified, and angry—and demand that it stop.

There’s A New Sheriff in Town

Given what we’ve covered, to reach the goals that deterrence-based policy has not achieved, policymakers should consider the following:

  • Apply fiduciary law to software makers, otherwise they will continue to have no compelling reason to think about, much less do anything about the harms their software is causing. 
  • Declare that raw human data is the property of the individual whose digital activities generate it, not the property of the HDCs that collect it. Controllable data makes this more than a legal fiction because it makes actual and constructive possession of personal data possible, provable, auditable, and when shared under contract, objectively enforceable. 
  • With respect to human data collection:
    • If the purpose of software can be fulfilled by consuming controllable attribute-only data, the collection of identifiable human data should not be permitted.
    • If the purpose of software can only be fulfilled by the collection of identifiable human data, that data must be jointly controllable by the person who produced it and the receiving entity, in a manner satisfactory to both.
  • With respect to disclosure:
    • Require that organizations holding identifiable human data disclose to each user the sources of their data, the content currently held, and how their data is used. A well-organized online portal would suffice.
    • To prevent user manipulation, require that organizations holding identifiable human data
      • Provide users a plain language explanation of any algorithmic processing of their data.
      • Allow regulators to inspect the algorithms that consume that data and the data derived from it.
  • With respect to data deletion:
    • HDCs who no longer have a legitimate purpose for holding identifiable human data should make a copy in an organized format available to users upon request.
    • If an HDC no longer has a legitimate purpose for holding a user’s identifiable human data, users should be granted the right to order its permanent deletion.

Preview of Coming Attractions

If policymakers start to move towards implementing the policies suggested above, there will be a pushback from software makers that are not HDCs. They will be unhappy about additional software development costs, and they will play the “It’s the cyberattackers, not us!” card, saying it’s unfair to hold them responsible for “unforeseeable” cybersecurity failures. Part One of this article was written to refute that argument. 

Non-HDC software makers who license to organizations will have to negotiate defraying software development costs with their customers (the organizations potentially harmed by their product), and most likely, both parties will involve their insurers—and that’s a very good thing. Insurers make their money by turning unquantifiable and unpredictable risk into quantifiable and predictable risk, and when it comes to hazardous manufacturing processes and products and compliance with laws and regulations, they do so by requiring insureds to implement technologies and techniques that are demonstrably effective. Software makers are likely to quickly change their software development priorities if they must do so to retain or obtain insurance coverage. When it comes to cybersecurity risk, working out rational risk sharing and engineering best practices between software makers, their customers, and their respective insurers is long, long overdue.

Wall Street, Hoover, Langley, and Fort Meade

The pushback from HDCs, especially big tech HDCs, will be swift, brutal, loud, and extremely well-funded, and it will include hordes of lawyers and lobbyists and some players you may not expect—Wall Street, and certain elements of the intelligence community and law enforcement.

Why Wall Street? The market capitalization of HDCs that depend primarily on the unfettered collection of raw human data to generate advertising revenue (i.e., Google, Facebook, and others with similar business models) isn’t predicated on their technology, intellectual property, or physical plant; it’s predicated on the value of the human data they hold and their ability to continue collecting it. The value of those human data holdings will plummet unless they are continuously “topped up” with raw human information. Why?

Users are dynamic. They are exposed to new information that impacts their current beliefs and behaviors, and it is precisely that malleable nature of human beings that algorithmic nudging exploits. If nudging algorithms are starved of new information, they cease to function in real time. The long experiment ends and the guinea pig reverts to being an unmanipulated human being. The efficacy of manipulative advertising declines and it takes ad rates with it. Remember that prior to discovering that users could be algorithmically nudged and addicted, ad targeting was based on a relatively simple analysis of what users were currently searching for. Without continuous topping up, HDCs will have to revert to that model, and the historical data they hold would quickly lose value.  Furthermore, if policy changes make HDCs liable for breaches of all that obsolete personal data they hold, the data would become a liability, not an asset. Why continue holding it?

The extreme sensitivity of HDCs to the loss of continued real-time, raw human data collection was recently demonstrated by Facebook’s valuation losses after Apple gave users a choice to not be tracked by Facebook, among others. Facebook lost 250 billion dollars of market cap. Wall Street is not favorably disposed towards those kinds of shocks, so some are likely to push back using familiar arguments along the lines of “They must be protected! They are too big to fail!”

Going Dark

When it comes to cybersecurity, law enforcement and the intelligence community are divided into two camps: those responsible for keeping data safe, and those who want unfettered access to data to protect individuals and the country. The latter group will lobby hard against the root cause fix described in Part One because it requires ubiquitous strong encryption to protect data in storage and in transit. This conflict, which has been going on for decades, is referred to as the “crypto wars.”

Based on past experience, the intelligence and law enforcement officials who disfavor ubiquitous strong cryptography will inevitably accuse pro-encryption supporters, including policymakers pushing for policies listed above, of aiding child pornographers, drug cartels, human traffickers, and terrorists. Pro-encryption policymakers should expect to be smeared. The anti-encryption narrative will focus on a special class of victims, not all victims.  

The rhetorical trick anti-encryptors will deploy is to ascribe good and evil to a thing: encryption. They’ll say, “Encryption is bad because bad people use it, and good people have nothing to hide, so they shouldn’t be able to use it.” To see how the trick works, let’s extend that logic to things we are more familiar with, then to encryption, and then show it “naked”:

  • Child pornographers, drug cartels, human traffickers, and terrorists use planes, trains, and automobiles for evil purposes, so good people shouldn’t be able to fly, ride, or drive.
  • Seat belts and airbags save many lives in traffic accidents, but in a few cases, they cause death, so cars shouldn’t have seat belts and airbags.
  • Bad people use encryption to do bad things, so good people shouldn’t be able to use encryption to do good things.
  • We can keep some people safe sometimes by keeping everyone unsafe all the time.   

When anti-encryptors are called on the illogic of their rhetoric, they switch to “Encryption can be good if cryptographers can give us ‘exceptional access,’” a magical method to void encryption on whatever they want whenever they want. Even if that were possible (it’s not, which is why it’s a magical method), you have to ask: is it a smart strategy to have a world-scale encryption voiding mechanism that could perhaps be stolen, or figured out and replicated by your enemies? Probably not.

Finally, there are all sorts of encrypted products and services sold all over the world, available to everyone, good or bad, and encryption itself is applied mathematics that anyone with the right math skills and desire can learn and use. Cyberattackers are already good at encryption—see ransomware. It’s impossible to prohibit bad guys from using encryption. So, how does prohibiting good guys from using it help?

End of the Road

There is a saying — “Once you know something is possible, the rest is just engineering.” That is applicable to the problem of cybersecurity. A certain resignation has set in, a belief that we just have to learn to live with ongoing and escalating cybersecurity failure and the loss of digital privacy. That’s not true. We know what the true causes of cybersecurity’s technological and policy failures are, and we know it is possible to start fixing them now. What remains to be seen is whether we, as a society, have the will to fix them and when we’ll get started.