Understanding data privacy in a fully connected society
The IoT community has to ensure that the changes that it’s making to the world are safe and affect people and organisations for the better. One critical issue here involves the ethics of data usage. Saverio Romeo, principal analyst at Beecham Research, recently spoke with Dr. Kat Hadjimatheou, senior research fellow at the Interdisciplinary Ethics Research Group at the University of Warwick, and a specialist in the ethics of security and surveillance technologies.
SR: The IoT community is creating spaces for us to live and work in which are connected and intelligent – from fitness trackers to temperature sensors. What are the ethical issues involved in such a fully connected society?
KH: There are essentially three key considerations, starting with the kinds of information that are being processed. Some kinds of information are considered inherently private, involving intimate aspects of our lives such as our friendships, families, sex lives, and our health. Information about our energy consumption or purchasing habits isn’t inherently private, but this doesn’t mean we want it shared with everybody.
This leads me to the second consideration, namely who’s viewing the information. Most of us don’t mind non-intimate information about us – such as our energy consumption or our purchasing habits – being collected so we can receive better deals from our energy providers, or advertising that reflects our preferences. At the same time, however, most people would feel uncomfortable about their purchasing histories being shared with their mother-in-law, even if she might buy them a more appropriate Christmas present. Conversely, even with some very intimate information, there are usually some people we happily share it with, such as doctors. In order to preserve privacy, we need to be able to choose who we share information with.
This leads us very conveniently to the third consideration – whether we have any choice about the kinds of information collected and what’s done with it. A ‘liberal’ society such as ours is built partly upon the belief that people should as far as possible be free to decide many things for themselves. For example, most people believe that living a healthy lifestyle should be a matter of personal choice, and not something imposed by others. Even if it is better for us to have connected devices that monitor our food consumption and exercise, we believe people should have a choice about whether to adopt such devices. This does not however mean that everything should be a matter of choice – some of the decisions we make about our own bodies affect the wellbeing of others.
For example, it is now commonly accepted that one’s personal choice to smoke should not be allowed to harm others. Nevertheless, in a liberal society personal choice is always very important and must be taken into account. Technologies and systems that enable fine-grained choices, at different points in the process of technology adoption, are better than those that offer only a one-off, single chance to ‘agree’ or ‘disagree’.
More opportunities to consent or opt out decrease the risk of what is known as ‘function’ or ‘mission’ creep. Mission creep occurs when data collected for one purpose is then used for another. Some smart metering systems collect temperature data – which can also be used to determine how many people are at home. The fact that people sign up to a system to know when to turn the radiators on or off doesn’t mean that they’ve also signed up to a system that monitors the number of people in the house.
SR: Many believe that privacy by design should be a fundamental criterion in designing IoT solutions. How do you define privacy by design?
KH: Privacy by design means building privacy concerns into the design of technologies, processes, and systems right from the start. One of the basic principles involves data minimisation. This means collecting only the amount of data necessary to fulfil the specific functions of the device/system involved and reduces the risk of function or mission creep. Most of us also feel that a system which involves automatic processing of data with no human involvement is less intrusive than one which involves people actually accessing that data.
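The data-minimisation principle described above can be sketched in a few lines of code. This is a purely illustrative example, not drawn from any real product: the field names, the smart-thermostat scenario, and the `minimise` helper are all hypothetical, chosen to echo the smart-metering example earlier in the interview.

```python
# Hypothetical sketch of data minimisation for a smart-thermostat reading.
# The device strips the payload down to only the fields the heating
# service needs for its declared purpose, before the data leaves the home.

RAW_READING = {
    "timestamp": "2024-01-15T08:00:00Z",
    "temperature_c": 19.5,
    "occupancy_estimate": 3,          # could reveal how many people are home
    "device_mac": "AA:BB:CC:DD:EE:FF",  # could identify the household
}

# Only the fields required for the stated purpose: radiator control.
REQUIRED_FIELDS = {"timestamp", "temperature_c"}

def minimise(reading: dict) -> dict:
    """Keep only the fields needed for the declared purpose."""
    return {k: v for k, v in reading.items() if k in REQUIRED_FIELDS}

payload = minimise(RAW_READING)
print(payload)  # occupancy and device identifiers never leave the device
```

Because the occupancy estimate is discarded at the point of collection, the system cannot later be repurposed to monitor how many people are in the house – the mission-creep risk is designed out rather than merely policed.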
SR: The IoT community believes that data is strategic – in business and service terms. In order to fully exploit that potential, there is a school of thought that believes in data openness. From an ethical perspective, how do you see data openness?
KH: It depends what kind of data is being discussed! Obviously, making the code for very intrusive surveillance software open source is not a sensible idea, given the risk of cyber criminality, misuse by authoritarian regimes, not to mention terrorism. For people unlucky enough to live under human-rights-abusing regimes, encryption is a vital tool for preserving their privacy. Less dramatically, the collation of data from different sources can identify individuals who believe that they’re interacting anonymously. Data openness is good when it involves governments operating in a democracy, because transparency and accountability are implicit. By contrast, customers do not owe transparency to the companies they do business with. Businesses that collect data about their customers have a duty of care to those people to process the data in ways that do not infringe on their privacy, even when customers have, perhaps unthinkingly, clicked that ‘agree’ button.