
2013: Big Privacy, Technology & Law

Sogeti Labs
December 21, 2012

Want 110 predictions for the next 110 years, or do you prefer a list of 20 for 2013, topped by Big Data? Being just an ordinary down-to-earth type of guy, I predict that the next twelve months will see a profound change in how we deal with the ePrivacy-Technology-Law triad. What’s at stake? At stake is what we call Fair Information Practice, the foundation of a trusted Digital Economy based on Big Data. At stake is your PII, your Personally Identifiable Information, or simply your contextual ID, which serves as the currency and lubricant of this truly New Economy. Borrowing from Alex Alvaro, I have sized down his PII scoreboard to a practical set of FIPPs (Fair Information Practice Principles) for everyone to memorize.

A Sentimental Journey

Without playing down the subject, it is safe to say that sentiment rules the 21st-century privacy discussion, which was deliberately kicked off in January 2000 by Simson Garfinkel in his book Database Nation: The Death of Privacy in the 21st Century. Privacy, Garfinkel states, is the right to control your PII, and in contrast to the Orwellian vision of one totalitarian Big Brother, we now have to deal with countless free-market Little Brothers capable of doing exactly the same, or even worse.

Privacy, Technology and the Law

For the first time in history, the current 112th U.S. Congress has established a Senate Judiciary Subcommittee on Privacy, Technology and the Law. That, of course, is no coincidence; it is a telling fact in itself. Just take a look at the subcommittee’s jurisdiction. In a trusted Digital Economy there are profits and penalties, you win some and you lose some, but the central privacy issues revolve around the evergreen Right to Be Left Alone and the brand-new Right to Be Forgotten.

Expert Talk on Big Data and Privacy

Three articles appeared this year that tolled the bell. The first, in the February issue of the Stanford Law Review: Privacy in the Age of Big Data: A Time for Big Decisions. The second, The challenge of ‘big data’ for data protection, was published in the May issue of the Oxford journal International Data Privacy Law. And the third, Privacy by Design in the Age of Big Data, co-authored by IBM’s Big Data guru Jeff Jonas, came from the Office of the Information and Privacy Commissioner of Ontario last June. Typical application areas for the Privacy by Design approach are:

1. CCTV/Surveillance Cameras in Mass Transit Systems
2. Biometrics Used in Casinos and Gaming Facilities
3. Smart Meters and the Smart Grid
4. Mobile Devices & Communications
5. Near Field Communications
6. RFIDs and Sensor Technologies
7. Redesigning IP Geolocation Data
8. Remote Home Health Care
9. Big Data and Data Analytics

PII and PETs

Organisations handle employees’, customers’ and third parties’ Personally Identifiable Information (PII) in a number of ways and for a variety of reasons. When doing so, it is important that privacy is taken into account. Privacy Enhancing Technologies (PETs) provide a mechanism that helps with this and can be used in conjunction with higher-level policy definition, human processes, training, and so on. A good description of PETs is provided by the UK Information Commissioner’s Office: “any technologies that protect or enhance an individual’s privacy, including facilitating access to their rights under the Data Protection Act” (ICO, 2007).
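To make this a bit more tangible, here is a minimal sketch of one of the simplest PETs: pseudonymisation of a PII field with a keyed hash, so that records can still be linked for analysis without exposing the raw identifier. The field names, example values and placeholder key are purely illustrative assumptions, not part of any specific product or standard.

    import hmac
    import hashlib

    # Illustrative secret key; in practice it would be managed by the data
    # controller (e.g. in a key vault), never stored next to the data.
    PSEUDONYMISATION_KEY = b"replace-with-a-real-secret"

    def pseudonymise(pii_value: str) -> str:
        """Replace a PII value with a stable, keyed pseudonym (HMAC-SHA256).

        The same input always maps to the same pseudonym, so datasets can
        still be joined and analysed, but without the key the original
        value cannot be recovered from the pseudonym.
        """
        return hmac.new(PSEUDONYMISATION_KEY,
                        pii_value.encode("utf-8"),
                        hashlib.sha256).hexdigest()

    # Hypothetical customer record before it leaves the operational system.
    record = {"email": "alice@example.com", "purchases": 7}
    record["email"] = pseudonymise(record["email"])
    print(record)  # e.g. {'email': '92a1...', 'purchases': 7}

Trivial as it is, the example shows the basic PET trade-off: utility is preserved, because records remain linkable, while direct identifiability is reduced.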
In the same vein, the EU notes that the use of PETs “can help design information and communication systems and services in a way that minimises the collection and use of personal data and facilitates compliance with data protection rules making breaches more difficult and/or helping to detect them” (EU, 2007).

Taxonomy of Privacy and PETs

Daniel Solove’s Taxonomy of Privacy may serve as a Privacy Impact Assessment (PIA) framework for considering PETs to reduce privacy-related harm to employees, customers and partners:

1. Information Collection
   Harms: Surveillance, Interrogation
2. Information Processing (use, storage and manipulation of collected data)
   Harms: Aggregation, Identification, Insecurity, Secondary Use, Exclusion
3. Information Dissemination
   Harms: Breach of Confidentiality, Disclosure, Exposure, Increased Accessibility, Blackmail, Appropriation, Distortion
4. Invasion
   Harms: Intrusion, Decisional Interference

This link with Solove’s taxonomy is made in an excellent “review,” as they call it, of PETs by HP Laboratories. The authors present the following accompanying taxonomy of Privacy Enhancing Technologies:

1. PETs for Anonymisation
2. PETs to Protect Against Network Invasion
3. PETs for Identity Management (authentication and authorisation without identification)
   - Credential Systems
   - Trust Management
4. PETs for Data Processing
   - Privacy Preserving Data Mining
   - Privacy Management in Data Repositories
5. Policy-Checking PETs

Improving the PET approach

Technologies are interesting and necessary, but their success always depends on human adoption and use. HP Labs therefore recommends a continuous focus on enhancements in the following areas:

1. Usability
2. Privacy by Design
3. Economics of Privacy

As for the economics: the cost to individuals of exercising choice in ePrivacy matters, although not high, is generally not worth the perceived benefit. The so-called Willingness to Accept clearly rules over the Willingness to Pay.

Differential Privacy for Everyone

I would like to add the relatively unknown notion of Differential Privacy in the context of database privacy. The issue nicely ties in with the title of Garfinkel’s 2000 book Database Nation, mentioned above. Ensuring the privacy of individuals in databases can be extremely difficult, even after Personally Identifiable Information has been removed. With enough effort it is often possible to correlate databases using information that is traditionally not considered identifiable. If any one of the correlated databases contains information that can be linked back to an individual, then information in the others may be linkable as well. Differential Privacy helps address such re-identification and other privacy risks. It does this by adding noise to the otherwise correct results of database queries. The noise helps prevent the results of those queries from being linked to other data that could later be used to identify individuals. Differential Privacy does need to be matched with policy protections. A minimal sketch of how such noise can be added follows at the end of this post.

I wish you all a merry Xmas and an amazing 2013, with undoubtedly many Big-Time Big Data developments in the combined field of ePrivacy, Technology and the Law!
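As promised, a small postscript to the Differential Privacy paragraph above: the classic way to add such noise is the Laplace mechanism, in which the noise scale is the query’s sensitivity divided by the privacy budget epsilon. The sketch below is a toy illustration under assumed inputs (an invented salary table and an arbitrarily chosen epsilon of 0.5), not a production-grade mechanism.

    import random

    def dp_count(rows, predicate, epsilon):
        """Differentially private count query via the Laplace mechanism.

        A counting query has sensitivity 1 (adding or removing one person
        changes the true answer by at most 1), so Laplace noise with
        scale 1/epsilon is added to the exact count.
        """
        true_count = sum(1 for row in rows if predicate(row))
        # The difference of two Exp(1) draws is Laplace(0, 1); dividing by
        # epsilon gives Laplace noise with scale 1/epsilon.
        noise = (random.expovariate(1.0) - random.expovariate(1.0)) / epsilon
        return true_count + noise

    # Invented example table: how many people earn more than 50,000?
    salaries = [{"name": "A", "salary": 62000},
                {"name": "B", "salary": 48000},
                {"name": "C", "salary": 75000}]
    print(dp_count(salaries, lambda row: row["salary"] > 50000, epsilon=0.5))

Smaller values of epsilon mean more noise and stronger privacy; larger values mean more accurate answers. Choosing that budget is exactly the kind of decision the paragraph above says must be matched with policy protections.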

About the author

SogetiLabs gathers distinguished technology leaders from around the Sogeti world. It is an initiative explaining not how IT works, but what IT means for business.
