The current state is not encouraging. App makers trying to address these legal requirements usually do so by requiring users to tick an “I have read and accept …” box before giving access to the application. But no one really reads these statements any more, which is hardly surprising when the privacy policies are long, full of legal language, and often not written in the user’s native language. How well, then, is the app maker protected?
On the other hand, the amount of private data that we share through the use of apps is growing rapidly. So we, as the users, are facing a situation where we are giving up private data but effectively know less and less about what happens to it. Hoping that the company has a “do no evil” policy is not really going to be helpful, especially since this data is long-lived. Who knows whether your movie viewing habits will be used for profiling you during a job application 10 years from now? Do they really need to know you had a phase of watching zombie movies?
Time is also a risk for the app makers. When they store the GPS and heart-rate data of the run you did today, they may be compliant with the law of the country you are in now. What happens if you move to a new country, or if the country you live in changes its legislation? Would the app maker, with the best of intentions, be aware of that? Would they stand a chance of staying compliant in each and every case? I’d say hardly, and I bet there are many app makers that are inadvertently non-compliant, and probably not in a trivial way either.
The talk I attended today was by IBM researchers, and they showed some alternative approaches that are worth considering.
Take the case of a run-recording app, where the user uploads the track data of individual runs, plus general data about themselves (gender, age, location), to the app’s data store. As storing this data is the central purpose of the app, there is no real alternative. But the user could be allowed to give more fine-grained consent as to how far the information may be shared. Should details of each run be shareable, or only summary information (actual tracks vs. distance or number of runs)? And with which categories of agencies: friends, your health app, advertisers, your health insurer?
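Such fine-grained consent can be thought of as a small matrix of detail levels against agency categories. Here is a minimal sketch of that idea; all names (the detail levels, agency categories, and methods) are my own illustrative assumptions, not anything shown in the talk:

```python
from dataclasses import dataclass, field
from enum import Enum

class Detail(Enum):
    NONE = 0     # share nothing
    SUMMARY = 1  # e.g. distance or number of runs
    FULL = 2     # e.g. the actual GPS tracks

class Agency(Enum):
    FRIENDS = "friends"
    HEALTH_APP = "health_app"
    ADVERTISERS = "advertisers"
    HEALTH_INSURER = "health_insurer"

@dataclass
class ConsentProfile:
    # Default is deny: nothing is shared unless explicitly granted.
    grants: dict = field(default_factory=dict)  # Agency -> Detail

    def allow(self, agency: Agency, detail: Detail) -> None:
        self.grants[agency] = detail

    def permitted_detail(self, agency: Agency) -> Detail:
        return self.grants.get(agency, Detail.NONE)

# The user shares summaries with friends, but nothing with insurers.
profile = ConsentProfile()
profile.allow(Agency.FRIENDS, Detail.SUMMARY)
print(profile.permitted_detail(Agency.FRIENDS).name)         # SUMMARY
print(profile.permitted_detail(Agency.HEALTH_INSURER).name)  # NONE
```

The key design choice is that the default is always “share nothing”; consent is an explicit, per-category grant that can be inspected and revoked.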
To make this feasible, and to avoid the trap of each and every app maker creating their own (and ultimately confusing) interface, IBM researchers propose that a common standard be provided, a little like the existing dashboards that our mobile operating systems already have for similar situations (notification settings). For example, a “data sharing” control center where the user can see all the apps, and for each, be able to specify the consent to a specific degree of sharing of their data with categories of agencies.
I find this concept appealing because, as a user, I can change or even completely retract my consent to share data at any time. And for the app maker there is also a reasonable vision: the suggestion that this be supported by a standard set of tools that allow tagging the shared data with the consent given. Once tagged, the app maker can use standardized tools to enforce policies that are compliant with the applicable legislation of the country in question, and provide an audit trail for compliance checks.
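To make the tagging-plus-enforcement idea concrete, here is a hypothetical sketch of what such tooling could look like; the tagging scheme and function names are my own assumptions, not IBM’s actual tools:

```python
import datetime

def tag_record(record: dict, consent: dict) -> dict:
    """Attach the consent metadata under which the data was collected."""
    return {
        "payload": record,
        "consent": dict(consent),  # recipient category -> bool
        "tagged_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

audit_log = []

def share(tagged: dict, recipient: str):
    """Release the data only if its tag permits this recipient; log the decision."""
    allowed = tagged["consent"].get(recipient, False)
    audit_log.append({"recipient": recipient, "allowed": allowed,
                      "tagged_at": tagged["tagged_at"]})
    return tagged["payload"] if allowed else None

run = tag_record({"distance_km": 7.5}, {"friends": True, "advertisers": False})
print(share(run, "friends"))      # {'distance_km': 7.5}
print(share(run, "advertisers"))  # None
```

Because every decision, allowed or denied, is appended to the audit log, a compliance check can later reconstruct exactly who received what, under which consent.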
In essence, the tools the IBM researchers are talking about are implemented as an architectural layer, similar to authentication and authorization layers, using standardized protocols that are verifiable. The software development effort simplifies because it avoids continually reinventing the wheel, and it becomes easier to certify that an application is privacy compliant, removing potential headaches for the app makers’ legal departments.
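The analogy with authorization layers suggests one possible shape for such a layer: a consent check that sits in front of data access, just as an auth check would. This is purely my own sketch of the idea, with hypothetical names:

```python
from functools import wraps

class ConsentError(PermissionError):
    pass

def requires_consent(purpose: str):
    """Decorator: block the call unless the caller holds consent for `purpose`."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user_consents: set, *args, **kwargs):
            if purpose not in user_consents:
                raise ConsentError(f"no consent for purpose '{purpose}'")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@requires_consent("share_summary")
def run_summary(runs: list) -> dict:
    # Only summary statistics, never the raw tracks.
    return {"count": len(runs), "total_km": sum(runs)}

print(run_summary({"share_summary"}, [5.0, 10.0]))  # {'count': 2, 'total_km': 15.0}
try:
    run_summary(set(), [5.0])
except ConsentError as e:
    print("blocked:", e)
```

As with authentication middleware, application code never decides sharing policy itself; it declares the purpose and lets the layer enforce it, which is what makes the behavior auditable and certifiable.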
Obviously this approach won’t be a cure-all for our privacy woes. We have yet to really understand the data convergence that is starting to happen, where the private data we shared with various app makers over time is aggregated by third parties: perhaps advertising agencies, perhaps market research agencies, or others we don’t (want to) know about. What happens to the privacy consent we gave for the fragments that now make up the bigger picture about us? One possible answer was hinted at at the beginning of the talk: most privacy legislation relies on a concept of proportionality, whereby data is shared for a specific purpose only and cannot be used beyond that. However, this raises the question of what constitutes a specific ‘purpose’.
Clearly this discussion is far from over. Yet it is heartening to see the discussion happening.