Crowdsourcing Privacy Risk Assessment
Privacy policies, privacy notices, privacy statements, data policies, data handling policies, conditions of use, terms of service, terms and conditions: the list goes on. Privacy systems around the globe have adopted all sorts of policies to govern their collection, use, dissemination, and maintenance of others' personally identifiable information (PII). But how do these different policies compare? Are some more transparent than others? Do some make a better effort at minimizing the amount of data they collect, or better limit how they use the data they do collect? Do some make no effort at all?
As privacy systems and their policies become increasingly complex, decision-makers of all varieties (including individuals, national governments, and multinational corporations to name a few) will require tools that help them make better sense of the systems and policies they are dealing with. This project seeks to help such decision-makers by creating an interactive model that allows for the comparison of privacy policies from different systems.
The model provides a privacy score based on a privacy system's implementation of the Fair Information Practice Principles (FIPPs). To evaluate the implementation of the FIPPs, the model subdivides them into 93 system practices that a privacy system might follow. (For instance, the Transparency FIPP is broken down into six system practices, one of which is how frequently a privacy system provides notifications to its users.) The user rates each system practice for intrusion into or protection of privacy on a scale of 1 to 5. The model averages the user-supplied ratings for each system practice within a single FIPP, generating a privacy score for that FIPP. To compute a privacy score for the entire privacy system, the model averages the privacy scores generated for all eight FIPPs.
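The two-level averaging described above can be sketched in a few lines of code. This is an illustrative sketch only, not the project's actual Excel model; the FIPP names and ratings shown are hypothetical placeholders, and the real model covers 93 practices across all eight FIPPs.

```python
from statistics import mean

# Hypothetical user ratings: each FIPP maps to a list of 1-5 ratings,
# one per system practice under that FIPP. Only two FIPPs are shown;
# the full model spans 93 practices across eight FIPPs.
ratings = {
    "Transparency": [4, 3, 5, 4, 2, 4],   # six practices, per the text
    "Data Minimization": [3, 3, 4],
}

def fipp_score(practice_ratings):
    """Average the 1-5 practice ratings within a single FIPP."""
    return mean(practice_ratings)

def system_score(all_ratings):
    """Average the per-FIPP scores to get the overall privacy score."""
    return mean(fipp_score(r) for r in all_ratings.values())

print(round(system_score(ratings), 2))  # overall score for this toy input
```

Because each FIPP is averaged first and the FIPP scores are then averaged, every FIPP carries equal weight in the overall score regardless of how many system practices it contains.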
Since the privacy score depends on the accuracy of the user's (subjective) evaluations, combining multiple users' scores to arrive at a generally agreed-upon result is recommended. Weights can be assigned to different users' scores to reflect the confidence (or lack thereof) in each user's judgment.
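One way the weighting could work is a simple weighted average, where each user's rating for a system practice is multiplied by a confidence weight and the results are normalized by the total weight. This is a sketch of one plausible scheme, not the project's specified method; the function name and weight values are illustrative assumptions.

```python
def weighted_consensus(user_scores, weights):
    """Combine several users' ratings for one system practice,
    weighting each rating by the confidence placed in that user.

    user_scores: ratings on the 1-5 scale, one per user
    weights: nonnegative confidence weights, one per user
    """
    total_weight = sum(weights)
    if total_weight == 0:
        raise ValueError("at least one weight must be positive")
    return sum(s * w for s, w in zip(user_scores, weights)) / total_weight

# Two users rate the same practice; the first is trusted 3x as much.
print(weighted_consensus([4, 2], [3, 1]))  # prints 3.5
```

Setting all weights equal reduces this to an ordinary average, so the unweighted model is a special case of the weighted one.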
- Powerpoint Presentation of Model
- Excel Workbook with Data and Model Simulation
- Model's Applicability to NISTIR 8062: Privacy Risk Management for Federal Information Systems (comments submitted to NIST)
Lance Hoffman (firstname.lastname@example.org)