[Previously published on the IAPP Privacy Perspectives]

Nascent is a term I often use to describe the field of privacy engineering. Only this fall did the first students begin Carnegie Mellon’s newly formed one-year Master of Science in Information Technology-Privacy Engineering program. And only in the past year or so have Google, Microsoft and other techno-centric firms been advertising openings with variations of “privacy engineer” in the title. Though the term privacy engineering has been around since at least 2001, only recently has the computer science community tried to use it in a concrete and systematic way.

So what is privacy engineering?

Simply put, it is the inclusion and implementation of privacy requirements as part of systems engineering. Those requirements may be functional or nonfunctional. In other words, privacy may be a necessary function of the system (think of Tor as an example), or it may be a beneficial addition that is not absolutely essential for the system to operate. Most privacy requirements fall into this latter category.

The goal of the privacy engineer is to create and follow a repeatable process, such that applying the process to a given system under the same conditions will lead to consistent results. The first step in this process is to identify the privacy requirements that should be applied. This is done by incorporating standard or baseline privacy requirements and by looking at the privacy risks, not to the organization but rather to the subjects of the information held by the system. Common frameworks, such as Ryan Calo’s Subjective/Objective harms or Daniel Solove’s Taxonomy of Privacy, can be used to identify potential privacy problems. For each problem, a determination must then be made of its probability of occurrence and the severity of its impact, risk being a function of probability and severity.
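
To make that last step concrete, here is a minimal sketch of a risk register in Python. It assumes a simple multiplicative model, risk = probability × severity, with both estimated on a 0-to-1 scale; the risk descriptions, the numbers and the model itself are illustrative assumptions, not a standard methodology.

```python
from dataclasses import dataclass

@dataclass
class PrivacyRisk:
    """One identified privacy problem, e.g. drawn from Solove's taxonomy."""
    description: str
    probability: float  # estimated likelihood of occurrence, 0.0 to 1.0
    severity: float     # estimated impact on the data subject, 0.0 to 1.0

    def score(self) -> float:
        # Risk as a function of probability and severity; a simple
        # multiplicative model is assumed here for illustration.
        return self.probability * self.severity

risks = [
    PrivacyRisk("Purchase history reveals sensitive reading habits", 0.6, 0.8),
    PrivacyRisk("Delivery records link orders to a household", 0.3, 0.4),
]

# Triage: address the highest-scoring risks first.
for risk in sorted(risks, key=lambda r: r.score(), reverse=True):
    print(f"{risk.score():.2f}  {risk.description}")
```

A real analysis would debate both the scale and the model; the point is only that the determination is explicit and repeatable.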

In identifying risks, it is important that the engineer look at the entirety of the system and how people will interact with it. Is there a mutuality of expectations? How do cognitive biases and human irrationality affect users’ privacy? Does the user experience enhance or detract from privacy? Finally, is the purported benefit of the system legitimate and proportional to the privacy risks? There may be cases where the privacy engineer needs to step back and say, “This isn’t worth the risks, and no control can sufficiently mitigate the problems I’ve found.” Assuming that is not the case, the privacy engineer’s next step is to identify controls to address the risks.

Controls come in several forms. The one most familiar to the reader will be policy controls, which dictate when and what information can be collected, how it is to be stored and other internal rules that should be followed when dealing with data flowing into, out of and within the organization. There are also a host of technical point controls, such as data minimization, encryption and randomization, which can be applied to increase privacy within the system. Finally, and less common, are architectural controls, primarily anonymization and decentralization, which serve to lessen the probability that a harm occurs.
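
As an illustration of point controls in practice, the sketch below applies two of them at the moment a record enters a system: data minimization (dropping fields the system does not need) and pseudonymization of a direct identifier via a keyed hash. The field names, the retained-field list and the hard-coded key are assumptions made for the example; a production system would draw the key from a key-management service.

```python
import hashlib
import hmac

# Illustrative secret; in practice this comes from a key-management system.
PSEUDONYM_KEY = b"replace-with-managed-secret"

# Data minimization: only these fields are ever retained.
RETAINED_FIELDS = {"order_id", "book_id", "delivery_zip"}

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    A keyed hash rather than a plain hash, so the mapping cannot be
    rebuilt by anyone who lacks the key.
    """
    return hmac.new(PSEUDONYM_KEY, value.encode(), hashlib.sha256).hexdigest()

def ingest(record: dict) -> dict:
    """Apply point controls before a record enters the system."""
    minimized = {k: v for k, v in record.items() if k in RETAINED_FIELDS}
    # The customer name is never stored; a pseudonym stands in for it
    # wherever the system needs a stable reference to the purchaser.
    minimized["purchaser_pseudonym"] = pseudonymize(record["customer_name"])
    return minimized

order = {
    "order_id": "A-1001",
    "customer_name": "Jane Doe",
    "book_id": "978-0-00-000000-0",
    "delivery_zip": "15213",
    "browser_fingerprint": "f3a9",  # collected but not needed: dropped
}
print(ingest(order))
```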

Of course, once all these controls are in place, a new risk analysis must be done. It is an iterative process. Consider an online bookstore specializing in sensitive topics that identifies a risk to its clients of being adversely associated with the books they order. As a control, the store decides not to collect names from purchasers (anonymization). However, this decision creates a new risk. When a package arrives, other residents at the delivery address may open it because no recipient is named, exposing the purchase to the household. This is a risk-risk trade-off, and the new risk must also be managed. Further controls, such as an optional identifier, should then be considered. Even when controls mitigate risks without any side effects, enough residual risk may remain to warrant additional mitigation.
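
Continuing the bookstore example, the sketch below shows one way that iteration might be mechanized, reusing the probability-times-severity model from earlier. The specific controls, the side-effect risks they introduce, the mitigation factors and the acceptance threshold are all illustrative assumptions.

```python
# Iterative risk analysis for the bookstore example. After every control
# is applied, the full register is re-scored, because a control can both
# reduce one risk and introduce another.

ACCEPTABLE_RESIDUAL = 0.10  # assumed organizational risk-acceptance level

def score(risk):
    return risk["probability"] * risk["severity"]

risks = [
    {"name": "purchaser linked to sensitive titles",
     "probability": 0.6, "severity": 0.8},
]

controls = {
    # control -> (risk it mitigates, mitigation factor, new risk it creates)
    "drop purchaser names (anonymization)": (
        "purchaser linked to sensitive titles", 0.1,
        {"name": "household opens unaddressed package",
         "probability": 0.4, "severity": 0.5},
    ),
    "optional identifier on the shipping label": (
        "household opens unaddressed package", 0.2,
        None,
    ),
}

for control, (target, factor, side_effect) in controls.items():
    for risk in risks:
        if risk["name"] == target:
            risk["probability"] *= factor  # the control lowers probability
    if side_effect:
        risks.append(side_effect)  # the control itself creates a new risk
    # Re-analyze after every change: the process is iterative.
    print(f"after '{control}':")
    for risk in risks:
        flag = "OK" if score(risk) <= ACCEPTABLE_RESIDUAL else "MITIGATE FURTHER"
        print(f"  {score(risk):.2f}  {risk['name']} [{flag}]")
```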

With the daily barrage of news accounts of privacy stumbles and a public growing weary of the constant assault on its information, the role of the privacy engineer is becoming necessary for more and more forward-thinking organizations. It is no longer sufficient for the privacy profession to mitigate only organizational and compliance risks. Personnel must be in place to identify user-centric risks and to help design solutions that mitigate those risks while still providing the organization the information it needs to operate. The privacy engineer is that person.