The Rule of Law in Cyberspace by Mireille Hildebrandt
Jul 16, 2013
Cyberspace has long been understood as an unlimited, non-material space where the law of gravity does not apply and where, by default, anything and everything is possible. Understood in this way, cyberspace does not exist: this somewhat exalted notion is an erroneous abstraction from the harsh reality of the hardware, software and protocols that make cyberspace possible but also constrain it. I focus here on a subsequent transformation, which has been called the ‘computational turn’.
The transition to this most recent ICT infrastructure means that our perception and cognition are increasingly mediated by computational technologies. The manipulation of data (the zeros and ones of digital computation) appears to offer unprecedented possibilities to represent reality in a multiplicity of ways by means of calculation, simulation and prediction. As a result of this development, cyberspace is no longer the safe haven where no one can figure out that you are actually a dog, where you can begin a second life free from all manner of conventions without any of your nearest neighbours knowing about it, or where, through crowd-sourcing, all the knowledge of the world can finally be put together.
Even more so than the face-to-face environment, cyberspace is now primarily the space where what you have done is ‘known’ and what you are going to do is anticipated. Your behaviour is continuously recorded in bits and bytes and is compared with the behaviour of others similar to you, to determine your preferences, anticipate high-risk behaviour, modify prices or predict health problems. And the more cyberspace is capable of predicting the future, the more it appears to create the future.
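As a purely illustrative sketch of what ‘being compared with the behaviour of others similar to you’ can amount to computationally, the toy example below labels a new individual by majority vote among the most similar recorded profiles. All data, feature names and the choice of a nearest-neighbour method are hypothetical assumptions for the sake of illustration; actual profiling systems are far larger, proprietary and opaque.

```python
# Illustrative sketch only: a toy nearest-neighbour profiler showing how
# behavioural records can be compared with those of "similar" others to
# predict an unknown attribute. All data and labels are hypothetical.

from math import dist

# Each profile: (observed behavioural features, known outcome label).
# The features might stand for, say, normalised click, purchase and
# location-visit frequencies; the label for a predicted preference or risk.
profiles = [
    ((0.9, 0.1, 0.4), "high_risk"),
    ((0.8, 0.2, 0.5), "high_risk"),
    ((0.2, 0.9, 0.1), "low_risk"),
    ((0.1, 0.8, 0.2), "low_risk"),
]

def predict(target, k=3):
    """Label `target` by majority vote among the k most similar profiles."""
    nearest = sorted(profiles, key=lambda p: dist(p[0], target))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# A new individual is labelled purely on resemblance to others:
print(predict((0.85, 0.15, 0.45)))   # -> "high_risk"
```

The point of the sketch is not the technique itself but the logic it embodies: the prediction about you is derived entirely from patterns found in data about others.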
The questions raised by the new computational order have far-reaching implications for the way we act, decide, perceive and know. They require not only new legal rules or new interpretations of existing law, but above all a prudent reflection on the manner in which law exists, and hence on the relationship between law, state, technology and society. Indeed, technology often controls us while we attempt to control it. Or, to cite the American philosopher of technology Don Ihde: new technology invents us, while we invent it.
In the 1990s a new technological vision emerged at the interface of cyberspace and cybernetics, variously called ubiquitous computing, Ambient Intelligence or the Internet of Things. As a result, cyberspace overflowed its banks: by placing RFID chips and sensor technology in every nook and cranny of our physical environment, we can finally take ‘the offline world online’, as the International Telecommunication Union puts it. Through the proliferation of apps, the smartphone effectively connects the online world with the local physical world; consider, for example, augmented reality and location-based services. This blurs the distinction between our online and offline environments and shows that cyberspace has passed the stage of a free-floating, virtual non-space. Cyberspace is ‘everyware’: it is our information-driven environment, mediated by the artificial intelligence of a growing number of computational techniques.
The Rule of Law is at stake due to three developments: (1) the computational order of cyberspace increasingly determines our perception, our cognition and the decisions that confront us, while its algorithms remain invisible, incomprehensible and often secret; (2) refined knowledge at the aggregate level enables invisible infringements of the rights to privacy and data protection and of the right not to be subjected to prohibited discrimination, and because these infringements are invisible, the right to contest them is also at stake; and (3) the normative implications of the new ICT infrastructure can easily overrule the normative force of applicable legal norms, turning written law into a paper dragon.
The knowledge that companies, research centres and government agencies use to make their decisions has become ever more deeply entangled in the political economy of what is now called Big Data: medical diagnoses and the corresponding treatment plans, insurance premiums, energy usage management, public and private traffic management, border control, access to employment, criminal investigations, sentencing, the granting of social welfare benefits and the combating of social security fraud. All of the sciences, including the humanities, are becoming dependent on automated pattern recognition.
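To give a concrete, if deliberately toy-sized, picture of what ‘automated pattern recognition’ involves, the sketch below clusters a handful of hypothetical usage records into behavioural groups. The data, the features and the choice of a k-means procedure are illustrative assumptions, not a description of any actual system mentioned above.

```python
# Illustrative sketch only: a toy k-means clustering, standing in for the kind
# of automated pattern recognition discussed in the text. Data are hypothetical.

from math import dist
from statistics import mean

def kmeans(points, k, iterations=10):
    """Group `points` into k clusters by repeatedly assigning each point to
    its nearest centroid and recomputing the centroids."""
    centroids = list(points[:k])                      # naive initialisation
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [
            tuple(map(mean, zip(*cluster))) if cluster else centroids[i]
            for i, cluster in enumerate(clusters)
        ]
    return centroids, clusters

# e.g. (hours online per day, purchases per month) for a handful of users
usage = [(1.0, 2), (1.2, 3), (0.9, 2), (6.5, 14), (7.0, 15), (6.8, 13)]
centroids, clusters = kmeans(usage, k=2)
print(centroids)   # two behavioural 'profiles' emerge from the raw records
```

Patterns of this kind, found at the aggregate level, are what subsequently feed the decisions about diagnoses, premiums, border crossings and benefits that the paragraph above lists.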
The Rule of Law is linked to the protection of fundamental rights. These rights were initially invoked against the state, but they have increasingly become enforceable against other powerful players. The nearly permanent and ever more extensive processing of data by the public and private sectors easily results in violations of the fundamental right to privacy and of the right to fair and legitimate processing of personal data.
Privacy fundamentalists (pardon the expression) often appear to fall back on the idea of data minimisation as the default for the information society. Although this is an excellent point of departure in a number of cases, its ‘blind’ application can lead to the failure of the information-driven society. Unless we reject data-driven infrastructures altogether, we must develop smart minimisation rather than minimal disclosure per se, based on an adequate assessment of the risks of data sharing.
Although cyberspace was once seen as a place where the sun always shines because social control, government inspection and commercial interference were absent, it has now become clear that even an umbrella cannot protect us from a personalised downpour based on unobtrusive surveillance and refined pattern recognition. Simply pointing to the possibility of taking an umbrella no longer suffices. Law and the Rule of Law in cyberspace will, to a certain extent, depend on a new technological articulation of legal protection.
Legislators in democracies should consider the design of cyberspace, and the courts should consider its implications. Preserving our relative autonomy, our privacy and our capability to participate in public and private versions of the good life will require that we defend them, or better still reinvent them, at the level of the architecture that should at least afford them.
SOURCE: This “article” was taken from Mireille Hildebrandt’s Inaugural Lecture. Ms. Hildebrandt is Chair of Smart Environments, Data Protection and the Rule of Law at the Institute of Computing and Information Sciences (iCIS) of Radboud University Nijmegen.