The designer Aza Raskin tackled the design of privacy notifications and policies several years ago. His blog details the process he, his co-designers, and outside experts followed to create more readable, targeted disclosures that would be meaningful to people.
1.) Making Privacy Policies Not Suck
Privacy policies are long legalese documents that obfuscate meaning. Nobody reads them because they are indecipherable and obtuse. Yet these are the documents that tell you what’s going on with your data — how, when, and by whom your information will be used. To put it another way, the privacy policy lets you know whether a company can make money from your information (like selling your email address to a spammer).
Creative Commons did an amazing thing for copyright law. It made it understandable.
Creative Commons reduced the complexity of letting others use your work to a set of combinable, modular icons.
In order for privacy policies to have meaning for actual people, we need to follow in Creative Commons’ footsteps. We need to reduce the complexity of privacy policies to an indicator scannable in seconds. At the same time, we need a visual language for delving deeper into how our data is used—a set of icons may not be enough to paint the rich picture of where your data is going.
Understanding Data Flows
With the rise of web services, your information can end up in unexpected places. To get a better understanding of some of the complexities of data flow, we sketched out how anti-phishing works in Firefox (with help from Oliver Reichenstein).
Here’s what that looks like as a wall of text, which is the typical privacy policy mode.
The difference in understandability is huge between the text and the schematic. In fact, while we were working on creating this infographic we found a hole in our legalese and updated it accordingly.
The idea here is that a visual schematic language gives a company a relatively painless way to convert their wall of text into something a bit more approachable. Moreover, the visualization shines a light into the dense tangle of words, possibly highlighting flaws or trouble spots that would otherwise have remained hidden.
The simple form
The visual schematic language is a descriptive way of explaining a privacy policy and helps us to understand what’s going on underneath the hood. It doesn’t solve the problem of being able to quickly figure out the guarantees a privacy policy is making on your data.
For that, we want to move from the descriptive to the prescriptive, to a set of legally binding icons like Creative Commons.
As an experiment, we tried a schematic form of icons. The feedback we’ve gotten so far is that the schematic is overkill and that a set of icons more similar to Creative Commons’ would be easier to scan and understand. The next step is to come up with a set of orthogonal decisions about what comprises the most important aspects of a privacy policy. In the end, we probably shouldn’t have more than 5 icons, in the interest of simplicity.
For now, here is a set of axes we’ve come up with that need to be whittled down:
Is your information…
Shared with a 3rd Party? Shared internally within the company?
Anonymized/Aggregated before being stored or used?
Personally Identifiable?
Stored for more than x number of days?
Encrypted on the server?
Monetized (sold) in some way?
Usable to contact you?
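To see how these axes might collapse into a quickly scannable indicator, here is a minimal sketch that encodes them as yes/no flags. The attribute names are hypothetical, invented for illustration; none of them come from the post or any standard.

```python
# Hypothetical encoding of the candidate axes as boolean flags.
# Names are illustrative only; nothing here is standardized.
CANDIDATE_AXES = [
    "shared_with_third_party",
    "shared_internally",
    "anonymized_before_use",
    "personally_identifiable",
    "retained_beyond_n_days",
    "encrypted_on_server",
    "monetized",
    "usable_for_contact",
]

def summarize(policy_flags):
    """Return only the axes a policy answers 'yes' to: the
    at-a-glance summary a row of icons would convey."""
    return [axis for axis in CANDIDATE_AXES if policy_flags.get(axis)]
```

Under this sketch, a policy that monetizes your data but shares nothing would summarize to just `["monetized"]`, a single icon's worth of information instead of a wall of text.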
2.) The Seven Things That Matter Most About Privacy
In late January we held a workshop that brought together some of the world’s leading thinkers in online privacy, with everyone from the FTC to the EFF represented. We spent the day working to answer the question: What attributes of privacy policies and terms of service should people care about? If you are new to the project, please read the original blog post, as it will answer a number of the probable nagging questions (like how to make icons enforceable).
The Should is Key:
The “should” is critical here. Privacy policies are often complex documents that deal with subtle and expansive issues. A set of easily understood and universal icons cannot possibly encode everything. Instead, Privacy Icons should call out only the attributes that are not “business as usual”: the warning flags that your privacy and data are at risk.
Let’s take an example that came up at the workshop. Should we have an icon that lets you know that your data will be shared with 3rd parties? Isn’t 3rd-party sharing intrinsically a bit suspect? The answer is a subtle no. Sharing with 3rd parties should raise a warning flag, but only if that sharing isn’t required. The classic example is buying a book on Amazon.com and getting it shipped to your home. Amazon needs to share your home address with UPS, and Privacy Icons shouldn’t penalize them for that necessary disclosure. In other words, Privacy Icons should only highlight 3rd-party data sharing when you do not have a reasonable expectation that your data is being shared.
An example of the multi-state icons found on the cloth tags.
The “should” is a major differentiator from many of the prior approaches, like the taxonomical P3P or Lorrie Cranor’s crowd-sourced Privacy Duck.
After synthesizing the input from the workshop as well as the numerous projects that have come before us, Lauren Gelman, Julie Martin, and I spearheaded the effort to boil down these “shoulds” into 7 attributes. The vision is that each attribute will correspond to an icon, and that each icon can have different states. A good example of a multi-state icon comes from the tag on your shirt that tells you how it should be cleaned.
The Proposal:
Here is the proposal for the information architecture of the attributes for the Privacy Icon. To be clear, there are no physical icons yet. Once we have general consensus on the attributes, we’ll begin work on designing the graphics both directly and via a Design Challenge.
- Is your data used for a secondary use? And is it shared with 3rd parties?
- Is your data bartered?
- Under what terms is your data shared with the government and with law enforcement?
- Does the company take reasonable measures to protect your data in all phases of collection and storage?
- Does the service give you control of your data?
- Does the service use your data to build and save a profile for non-primary use?
- Are ad networks being used and under what terms?
For companies that go above and beyond by retaining data for a minimal amount of time, with minimal exposure, etc., we can also provide a “best practices” icon.
Explanation:
Is your data used for secondary use? The European Union has spent time codifying and refining the idea of “secondary use”: the use of data for something other than the purpose for which the person it was collected from believes it was collected. Mint.com uses your login information to import your financial data from your banks — with your explicit permission. That’s primary use and shouldn’t be punished. The RealAge test poses as a cute questionnaire and then turns around and sells your data. That’s secondary use and is fishy. When you sign up to use a service, you should care whether your data will only be used for that service. If the service does use your data for secondary use, they should disclose those uses. If they share your data with 3rd parties, then they should disclose that list too.
Is your data bartered? You should know when someone is profiting off your back. You should also know roughly how and for what your data is being bartered.
Under what terms is your data shared with the government and with law enforcement? Do they just hand it over without a warrant or a subpoena?
Does the company take reasonable measures to protect your data in all phases of collection and storage? There are numerous ways that your data can be protected: from using SSL during transmission, to encryption on the server, to deleting your data after it is no longer needed. Does the company protect your data during transmission, during storage, and from employees? This icon should tell you what the weak link is.
Does the service give you control of your data? Can you delete your data if you choose? Can you edit it? What level of control do you have over the data stored on their servers?
Does the service use your data to build and save a profile for non-primary use? This is a subtle one, as we want to include the concept of PII (personally identifiable information). What we are worried about is companies secretly building a dossier on you — say, by taking your email address and then buying more information from a 3rd party about that email address to get, say, your credit rating, and then using that profile for purposes to which you haven’t agreed.
Are ad networks being used and under what terms? On the web most pages include ads of some form, and the prevalence of behavioral tracking is on the rise. Yahoo, for instance, can track you across 12% of the web (from personal correspondence). While letting users get a handle on ad networks is important, raising the alarm on every page would be counter-productive. We haven’t figured out yet how to handle ad networks and are looking for more thought here.
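To make the idea of multi-state attributes (like the laundry-tag icons mentioned earlier) concrete, here is a small sketch that pairs each of the seven proposed attributes with a possible set of states and validates a declared icon set. Every attribute name and state below is invented for illustration; the real states would come out of the design process.

```python
# Hypothetical states for the seven proposed attributes.
# Names and states are illustrative, not a finished design.
ATTRIBUTES = {
    "secondary_use": ("none", "disclosed", "undisclosed"),
    "bartered": ("no", "yes_disclosed", "yes_undisclosed"),
    "government_sharing": ("warrant_required", "subpoena_required", "on_request"),
    "data_protection": ("all_phases", "partial", "none"),
    "user_control": ("full_delete_and_edit", "partial", "none"),
    "profiling": ("none", "primary_use_only", "non_primary_use"),
    "ad_networks": ("none", "contextual_only", "behavioral"),
}

def is_valid(icon_set):
    """True if every declared icon names a known attribute and
    uses one of that attribute's allowed states."""
    return all(
        attr in ATTRIBUTES and state in ATTRIBUTES[attr]
        for attr, state in icon_set.items()
    )
```

The point of the fixed vocabulary is the same as the laundry tag's: a small, closed set of states per attribute keeps each icon scannable while still carrying more nuance than a plain yes/no.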
Next Steps:
The next steps are to socialize this list of privacy attributes and, once we arrive at agreement, to begin the design process. Feedback is crucial at this juncture. Jump in.
3.) Is a Creative Commons for Privacy Possible?
There was a lot of great feedback for my post Making Privacy Policies Not Suck. We are now in conversation with a whole slew of industry leaders and deep thinkers in the area of privacy (Lorrie Cranor, Jonathan Zittrain, Lauren Gelman, Ryan Calo to name a few).
With all of the work that’s been done before us, I wanted to touch on some of the ways our thinking and position breaks from the mold.
Bolt On Approach
Privacy policies and Terms of Service are complex documents that encapsulate a lot of situation-specific detail. The Creative Commons approach is to reduce the complexity of sharing to a small number of licenses from which you choose. That simply doesn’t work here: there are too many edge cases and specifics that each company has to put into its privacy policy. There can be no catch-all boilerplate. We seem to have lost before we’ve begun. But there’s another approach.
Here’s where we stand: Companies need to write their own privacy policies/terms of service, replete with company-specific detail. Why? Because a small number of licenses can’t capture the required complexity. The problem is that for everyday people, reading and understanding those necessarily custom privacy policies is time consuming and nigh impossible.
Here’s the solution: Create a set of easily understood Privacy Icons that “bolt on to” a privacy policy. When you add a Privacy Icon to your privacy policy, it says the equivalent of “No matter what the rest of this privacy policy says, the following is true and preempts anything else in this document…”. The Privacy Icon makes an iron-clad guarantee about some portion of how a company treats your data. For example, if a privacy policy includes the icon for “None of your data is sold or shared with 3rd parties”, then no matter what the privacy policy says in the small print, it gets preempted by the icon and the company is legally bound never to share or sell your data. Of course, the set of icons still needs to be decided (we’ll be having a workshop on the 27th of January to help figure it out).
This method means that without ever having to delve into the details, everyday people can glance at the simple icons atop a privacy policy to know if and how their data is being used. At the same time, it gives companies the flexibility required to create comprehensive and meaningful policies. We’ve found a way past the deadlock.
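The preemption rule (an icon overrides whatever the fine print says about the same attribute) can be sketched as a simple merge. The attribute names and values here are hypothetical stand-ins for whatever vocabulary the icons end up using.

```python
def binding_terms(fine_print, icons):
    """Merge icon guarantees over fine-print claims. Wherever the
    two conflict, the icon wins and the conflict is reported."""
    terms = dict(fine_print)
    overridden = []
    for attr, guarantee in icons.items():
        if attr in fine_print and fine_print[attr] != guarantee:
            overridden.append(attr)
        terms[attr] = guarantee
    return terms, overridden
```

If the small print claims data may be sold but the policy carries a "never sold or shared" icon, the icon's guarantee is what binds; reporting the conflict also surfaces exactly the kind of hole in the legalese that the schematic exercise uncovered earlier.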
Nobody Will Use the Bad Icons?
Some of the Privacy Icons will potentially carry a negative normative value, like the icon that indicates your data may be sold to third parties. The icon might even look scary. The question becomes: why would any company display such an icon in its privacy policy? Wouldn’t they instead opt not to use the Privacy Icons at all? This is the largest problem facing the Privacy Icons idea. Aren’t we creating an incentive system whereby good companies/services will display Privacy Icons and bad companies/services will not?
If Privacy Icons become widely adopted (and I think Mozilla is in a unique position to help make that happen), then the correlation between good companies using the icons and bad companies not using them becomes rather strong. If a privacy policy doesn’t include any icons, that’s synonymous with the policy making no guarantees that your data won’t be used for evil. The absence of Privacy Icons becomes stigmatic.
(Note that Mozilla has not yet decided to integrate this into the product.) Asking people to notice the absence of something is asking the implausible. People generally don’t notice an absence, just a presence. The solution hinges on Privacy Icons being machine readable and Firefox being used by 350 million people worldwide. If Firefox encounters a privacy policy that doesn’t have Privacy Icons, we’ll automatically display the icons with the poorest guarantees: your data may be sold to 3rd parties, your data may be stored indefinitely, your data may be turned over to law enforcement without a warrant, etc. This way, companies are incentivized to use Privacy Icons and thereby be bound to protecting your privacy appropriately. With Firefox growing past 25% market share, we are in a position to effect critical-mass adoption.
There are other options as well, like crowdsourcing tentative Privacy Icons for a website whose privacy policy doesn’t have icons yet (and deferring to the company’s own as soon as they put them up).
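The browser-side fallback described above (treat every missing icon as its worst guarantee) could look something like this sketch. The default attributes and their values are hypothetical, chosen to mirror the examples in the text.

```python
# Worst-case defaults assumed for a site that publishes no icons.
# Attribute names and states are illustrative only.
WORST_CASE = {
    "third_party_sharing": "may_be_sold",
    "retention": "indefinite",
    "law_enforcement": "no_warrant_required",
}

def effective_icons(declared):
    """Overlay whatever a site declares on top of the worst-case
    defaults, so every missing icon shows its poorest guarantee."""
    icons = dict(WORST_CASE)
    icons.update(declared or {})
    return icons
```

A site with no icons at all gets the full worst-case set; declaring even one icon improves only that attribute, which is exactly the incentive structure the post argues for.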
Lawyer Selected, Reader Approved
Since its release, Creative Commons has continually pared down the number of licenses it provides and is now down to just two icons, one with two states and one with three. It has to be that simple because everyday people choose their own license. Privacy Icons don’t have that constraint: a qualified lawyer chooses which icons to bind to a privacy policy, so there can be substantially more icons to choose from, allowing the creation of a rich privacy story. As long as the icons are understandable by an everyday person, we are golden.
Next Steps
This blog post lays out the groundwork for how we are thinking about crafting Privacy Icons. We still need to figure out what the icons and their states will actually be (as well as whether this approach makes sense). Ahead of the Federal Trade Commission Privacy Roundtable, we will be hosting a workshop to discuss and create solutions (or at least next steps) toward a more meaningful privacy framework for the web.