FTC Do Not Track Response

This response is due on February 18, 2011.

Here is the FTC Do Not Track White Paper: http://ftc.gov/os/2010/12/101201privacyreport.pdf

Submissions go here: https://ftcpublic.commentworks.com/ftc/consumerprivacyreport/

Staff info is here: http://www.ftc.gov/opa/2010/12/privacyreport.shtm

Other submissions are here: http://www.ftc.gov/os/comments/privacyreportframework/index.shtm

Advice About Writing:
 * Be clear - a lackey is reading and summarizing points
 * Keep it short - don't try to answer all the questions they ask, just the ones you care about
 * When you're writing, make each point in a heading, so that if you took out the text it'd still be there
 * Careful about tech mandates (ie, OAuth)

MAIN POINT: Companies that hold transactional data about people should be mandated to offer one-click authentication so that individuals can get access to their records and link them together.

Today, getting a copy of your own records takes roughly 6-12 hard manual steps, repeated every month.

The Honorable Jonathan D. “Jon” Leibowitz, Chairman
Federal Trade Commission
600 Pennsylvania Avenue, NW
Washington, D.C. 20580

RE: Public Comments on December 2010 FTC White Paper: Protecting Consumer Privacy in an Era of Rapid Change, A Proposed Business and Policy Framework

Dear Mr. Leibowitz:

We represent a community of end-user advocates and technology innovators focused on individual rights and access to individuals' own personal data, and on the business and innovation opportunity that this new user management and control offers (the full list of signers is noted at the end of this letter).

First, we want to outline where we are coming from; then we will comment on how this future-oriented view informs our response to the White Paper and its questions for further comment.

Personal Data Storage and Services: A Middle Way between Do Not Track and Business-as-Usual Stalking

There is a way of handling users' personal data that most have not yet explored. This alternative approach sits between the two extremes of a familiar spectrum: Do Not Track on one end, and business-as-usual stalking on the other.

On one end of the spectrum is the “Do Not Track” view, which relies on technology and a legal mandate to prevent data collection (as per the FTC proposal). In this scenario, cross-site behavioral targeting is suppressed because users signal that they do not want any information collected about them as they move about the web. The economic value advertisers have been getting through higher click-through rates on more targeted ads is eliminated, and revenue for sites that serve targeted ads is reduced if not eliminated. The economic value of the data is captured neither by the end-user nor by the media/advertising/data-aggregating complex.

On the other end of the spectrum, “business as usual” is left in place as it has developed over the last few years. The door is wide open for ever more “innovative,” pervasive, and intrusive data collection and cross-referencing for behavioral targeting: digital dossiers developed on billions of people, without their knowledge or consent, based on IP address, device identification, e-mail address, and so on. The status quo is highly invasive of people’s privacy, linking their activities across contexts they would keep separate or private if they could choose to do so. In addition, decisions about people's lives are beginning to be made from such data without their awareness. Economic value is derived, but at the expense of the basic dignity and privacy rights (i.e., personal control) of the individual.

Personal data storage services are emerging as a middle way: an opt-in mode that gives the individual greater choice and control over their data AND offers the business community greater economic value and huge innovation opportunities. As envisioned, Personal Data Storage Services allow individuals to aggregate their personal data, manage it, and then give permissioned access to businesses and services they choose -- businesses they trust to provide better customization and more relevant search results, increasing the value users get from their data.

Over the last year, activity in this space has grown tremendously. In this emerging field of innovation, we have identified more than ten startups, at least three open source projects, and several technical standards efforts in recognized standards organizations, along with companies in the web, mobile, entertainment, and banking industries considering this model.

One of the most important things about this emerging space is that it has engendered active business development both in the United States and across Europe. In other words, this model is viable across North American and European privacy regimes. Furthermore, this model offers the possibility of achieving global interoperability, one of the key goals articulated by the Commerce Department for this forthcoming set of policies and regulations.

People are the Only Ethical Integration Point for Disparate Data Sets

Today a personal data ecosystem is emerging in which almost everyone unknowingly participates, but without the individual controls needed for user-centric privacy. People unwittingly emit information about themselves, their activities, and their intentions in various digital forms. It is collected by a wide range of institutions and businesses with which people interact directly; it is then assembled by data brokers and sold to data users (i.e., businesses that exploit our data without including us in the transaction). This chain of activity happens with almost no participation or awareness on the part of the data subject: the individual.

We believe that the individual is the only ethical integration point for this vast range of disparate personal data. For example, the list of data types below was put together by Marc Davis for the World Economic Forum's Re-Thinking Personal Data event in June 2010. It highlights how many kinds of datasets about an individual might exist in digital form in some database somewhere.

Identity and Relationships
 * Identity (IDs, User Names, Email Addresses, Phone Numbers, Nicknames, Passwords, Personas)
 * Demographic Data (Age, Sex, Addresses, Education, Work History, Resume)
 * Interests (Declared Interests, Likes, Favorites, Tags, Preferences, Settings)
 * Personal Devices (Device IDs, IP Addresses, Bluetooth IDs, SSIDs, SIMs, IMEIs, etc.)
 * Relationships (Address Book Contacts, Communications Contacts, Social Network Relationships, Family Relationships and Genealogy, Group Memberships, Call Logs, Messaging Logs)

Context
 * Location (Current Location, Past Locations, Planned Future Locations)
 * People (Copresent and Interacted-with People in the World and on the Web)
 * Objects (Copresent and Interacted-with Real World Objects)
 * Events (Calendar Data, Event Data from Web Services)

Activity
 * Browser Activity (Clicks, Keystrokes, Sites Visited, Queries, Bookmarks)
 * Client Applications and OS Activity (Clicks, Keystrokes, Applications, OS Functions)
 * Real World Activity (Eating, Drinking, Driving, Shopping, Sleeping, etc.)

Communications
 * Text (SMS, IM, Email, Attachments, Direct Messages, Status Text, Shared Bookmarks, Shared Links, Comments, Blog Posts, Documents)
 * Speech (Voice Calls, Voice Mail)
 * Social Media (Photos, Videos, Streamed Video, Podcasts, Produced Music, Software)
 * Presence (Communication Availability and Channels)

Content
 * Private Documents (Word Processing Documents, Spreadsheets, Project Plans, Presentations, etc.)
 * Consumed Media (Books, Photos, Videos, Music, Podcasts, Audiobooks, Games, Software)
 * Financial Data (Income, Expenses, Transactions, Accounts, Assets, Liabilities, Insurance, Corporations, Taxes, Credit Rating)
 * Digital Records of Physical Goods (Real Estate, Vehicles, Personal Effects)
 * Virtual Goods (Objects, Gifts, Currencies)

Health Data
 * Health Care Data (Prescriptions, Medical Records, Genetic Code, Medical Device Data Logs)
 * Health Insurance Data (Claims, Payments, Coverage)

Other Institutional Data
 * Governmental Data (Legal Names, Records of Birth, Marriage, Divorce, Death, Law Enforcement Records, Military Service)
 * Academic Data (Exams, Student Projects, Transcripts, Degrees)
 * Employer Data (Reviews, Actions, Promotions)

In addition to this list, there is the emerging wellness, or "quantified self," data that some users are beginning to collect about themselves through life-tracking services: daily or finer-grained statistics about their bodies and wellness activities.
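The taxonomy above can be pictured as a single machine-readable record held by the individual. Below is a minimal sketch of such a record; all field names and values are illustrative examples we chose for this letter, not a proposed standard:

```python
import json

# Illustrative personal data store record grouping the categories above.
# Every field name and value here is a hypothetical example.
pds_record = {
    "identity": {"user_names": ["jdoe"], "emails": ["jdoe@example.com"]},
    "context": {"current_location": "Oakland, CA"},
    "activity": {"sites_visited": ["example.com"]},
    "communications": {"status_text": "At the cafe"},
    "content": {"consumed_media": ["podcast: Personal Data Ecosystem"]},
    "health": {"prescriptions": []},
    "institutional": {"academic": {"degrees": ["B.A."]}},
}

# Machine-readable: the whole record serializes to portable JSON and back
# without losing anything, which is the property portability depends on.
exported = json.dumps(pds_record, indent=2)
restored = json.loads(exported)
assert restored == pds_record
```

The point of the sketch is only that one individual can be the integration point: every category, whatever its original source, can live under a single user-controlled record.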

Service Providers Must Work for the End-User

Most people do not host their own e-mail servers or websites on servers in their basements. Similarly, most individuals will not have the technical skill or desire to manage the collection, integration, analysis, permission management, and other services needed to derive value from their data. However, the fact that a few users can host their own e-mail means the open standards for e-mail and HTTP are available top to bottom. We want to see Personal Data Services available through open standards, open source code, and an ecosystem that will interact with people who host their own PDS.

But mostly, individuals need to be able to trust that the service providers in the Personal Data Ecosystem are working on users' behalf. Given the sensitivity of the data and the complexity of running one's own servers, most users will rely on Personal Data Service providers. In addition, market models need to emerge that let a Personal Data Service provider make money while working on the user's behalf. The Personal Data Ecosystem Collaborative Consortium has a Value Network Mapping and Analysis project to outline this model and is raising money to support and foster it.

Personal Data Should Be Treated Like Personal Money

Individuals must be able to move data between service providers, as they can move money between banks, retaining its value. With data, however, the user is the provider; there must still be many takers, enabled by open data formats, activity streams, and clear identity models that are themselves portable and separate from the data bank.

End-user choice and the right to transfer data from one service provider to another are key to this model. Just as our money does not become worthless when we move it from one bank to another, the same must hold true for individuals’ data.
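The bank analogy above can be sketched in code. This is a toy illustration, not a real service: the class name and methods are our invention, and the point is only that an open export format, rather than the provider, is what preserves the data's value across a move:

```python
import json

# Hypothetical provider class illustrating bank-like data portability.
class PersonalDataService:
    def __init__(self):
        self._store = {}

    def deposit(self, key, value):
        self._store[key] = value

    def export_all(self):
        # Open, machine-readable format (JSON here) so that ANY provider
        # can import the data, not just the one that produced it.
        return json.dumps(self._store)

    def import_all(self, blob):
        self._store.update(json.loads(blob))

# Move data from provider A to provider B, like moving money between banks.
a = PersonalDataService()
a.deposit("bookmarks", ["https://ftc.gov"])

b = PersonalDataService()
b.import_all(a.export_all())

# Nothing is lost in the move: both providers now hold identical data.
assert b.export_all() == a.export_all()
```

If providers instead exported proprietary or partial formats, the final assertion would fail, which is exactly the "lock-in" condition the letter argues against.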

Consumers Need to Be Able to Collect and Aggregate Their Data from Product and Service Providers

For this Personal Data Ecosystem and economy to emerge, it is essential that consumers have easy access to their data from the providers they do business with. Today the steps involved in getting data out of services are tedious, onerous, and often manual, because we have neither clear patterns and open standards for getting data nor any requirement that companies provide a copy of your complete data.

1. Data must be available in machine-readable form using open standards, such as microformats and activity streams, that are driven by many developers and users, not just a single company. Where data export is available today, it is often not machine readable. Manually exporting monthly statements one at a time as they are issued, as a few services offer, is not the answer.

2. Simple Internet open standards like OAuth allow account linking without the dangerous practice of handing a username and password to various service providers. Instead, an OAuth token is issued, and the username and password are given only to the issuing party. This keeps users from sharing login information with unscrupulous services and means the issuing provider does not have to “police” other services just to protect login credentials.

3. Portability of data is critical for many reasons, including managing a business failure: people need to be able to move their data to an alternate, and hopefully more viable, provider in such cases. Additionally, to create competition and innovation in Personal Data Services, data must be portable to prevent “lock-in” -- which many businesses currently use to keep users from going elsewhere.

Data persistence and portability are critical so that user data and digital assets persist as services disappear. (For example, the social bookmarking site Del.icio.us makes personal data available to users, and that export has seen heavy use since Yahoo! was reportedly shopping the site.) Users create content and generate data while using a site, and they should be able to easily export their work product. Business models should not rest on “locked-in” user data.
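The trust boundary behind the OAuth account linking described in point 2 can be shown with a toy simulation. This is not the real OAuth protocol (which involves signed redirects between sites); the class and method names are ours, and the sketch only illustrates the key property: the password goes to the issuing party alone, while linked services receive a revocable token:

```python
import secrets

# Toy simulation of token-based account linking (illustrative names only).
class Provider:
    def __init__(self):
        self._passwords = {"alice": "s3cret"}  # known only to the provider
        self._tokens = {}                      # token -> user

    def issue_token(self, user, password):
        # Only the issuing provider ever sees the password.
        if self._passwords.get(user) != password:
            raise PermissionError("bad credentials")
        token = secrets.token_hex(8)
        self._tokens[token] = user
        return token

    def fetch_records(self, token):
        # Linked third-party services present the token, never the password.
        user = self._tokens.get(token)
        if user is None:
            raise PermissionError("invalid or revoked token")
        return {"user": user, "records": ["statement-2011-01"]}

    def revoke(self, token):
        # The user can cut off a linked service without changing a password.
        self._tokens.pop(token, None)

provider = Provider()
token = provider.issue_token("alice", "s3cret")  # user authenticates once
records = provider.fetch_records(token)          # linked service uses token
provider.revoke(token)                           # access ends on demand
```

Because the linked service never held the password, revoking one token ends that one relationship while leaving the user's account, and all other links, intact.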

The Commerce Department should recommend that Congress legislate basic data portability, together with a framework prohibiting cross-site aggregation of data about a user unless the user agrees to it. The FTC would then enforce data portability and the prohibition of cross-site aggregation of personal data without the user's explicit permission.

Create a Level Playing Field around Data Aggregation and Services

Which companies can do what with what kinds of data?

Today the regulatory patchwork of data protection means that different types of data are subject to different protections, affecting how different industry sectors use and compete over personal data (e.g., HIPAA-covered health data, financial data, and educational data are regulated, versus other personal data that is barely regulated at all).

For example, Google and Facebook have vast collections of data about individuals resulting from activity on their sites and systems: what users click on, who they know, what they search for, where they go, and so on. The sites analyze these data sets and then serve “relevant” ads based on their best guess about the user's activities.

Today, with mobile devices connected to the web, mobile carriers collect a very similar set of data: where an individual goes, who they call and text, what they do on the web. Yet mobile carriers are subject to very different (and stricter) regulatory regimes, which prohibit them from using this data as freely as Google and Facebook do.

A model where (1) each individual chooses a data service provider and collects and aggregates their data in a “data bank,” and (2) can freely consent to giving third- and fourth-party service providers access to it, will result in greater individual control over data while giving businesses more accurate and comprehensive personal profiles (at whatever level people choose: anonymous, pseudonymous, or named). This creates enormous market and business opportunities, because the businesses that want these interactions can count on the data quality and on the individual's desire to interact. Right now, advertisers have imperfect data and are forced to "buy" far more reach than necessary to get to those who are interested.
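The permissioned-access half of that model can be sketched as a consent check: a third party sees a data category only if the individual granted access, and only at the disclosure level (anonymous, pseudonymous, or named) the individual chose. All names in this sketch are illustrative:

```python
# Sketch of permissioned access in a user-controlled "data bank".
DISCLOSURE_LEVELS = ("anonymous", "pseudonymous", "named")

class DataBank:
    def __init__(self, data):
        self._data = data    # category -> value, held for the individual
        self._grants = {}    # (party, category) -> disclosure level

    def grant(self, party, category, level):
        # The individual, not the business, records the consent.
        assert level in DISCLOSURE_LEVELS
        self._grants[(party, category)] = level

    def read(self, party, category):
        level = self._grants.get((party, category))
        if level is None:
            raise PermissionError(f"{party} has no consent for {category}")
        record = {"category": category, "value": self._data[category]}
        if level == "named":
            # Real-world identity is attached only when the user chose it.
            record["subject"] = "alice"
        return record

bank = DataBank({"interests": ["cycling", "privacy"]})
bank.grant("ad-service", "interests", "pseudonymous")

ad_view = bank.read("ad-service", "interests")  # permitted, no real name
# bank.read("data-broker", "interests") would raise PermissionError,
# because the individual never consented to that party.
```

Under this arrangement the ad service gets accurate, willingly shared interest data, which is the quality advantage over inferred profiles that the paragraph above describes.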

Keeping Our Data for a Lifetime, If We Want To

What if the individual could choose to retain all or a subset of the information about themselves for as long as they wanted? The chart below contrasts today's data environment with a future where people are in control of their own data, and shows the opportunities around opt-in data that is more reliable than surreptitiously stalking users currently permits.

CHART

The red line shows what is happening today: some data aggregators are self-regulating by limiting how long they keep data, and governments are limiting data retention and regulating anonymization practices. And much of the data that is collected is gathered without explicit permission, other than through an onerous privacy policy the user agrees to once (usually).

The green line shows what WOULD happen if people were given the capacity to store and manage their own data -- if they could keep as much data as they wanted, for as long as they wanted, in their own data banks. Digital footprints reflecting a lifetime could be shared with future generations, people could self-assess, and a marketplace of applications would emerge, creating new businesses and data uses we have not yet imagined. In this user-centric model, the individual aggregates information about themselves, and new classes of services -- more specific to the individual, based on data accessed with user permission -- can emerge.

The foundation of this ecosystem is personal data storage services that are fully under the control of the individual. A user-centric identity system (separate from a PDS) needs to function in partnership with it, and we will need a regulatory regime that supports both of these technologies in user-centric form, where users own and control their own data.

These new data and identity service providers will be more viable if individuals have simple ways to link their accounts and data together when they wish, even when multi-faceted identity systems reflect a complex personal presentation to the world. Notably, in systems that offer multiple identity facets under one login, men reportedly maintain about two facets while women average six (a statistic reported to us personally by individuals at Diaspora, the open source social network). Identity systems need to be flexible enough to accommodate users with a variety of requirements. And of course, we heartily support simplifying the login and password problem people face online.

The model presented above, a Personal Data Ecosystem in which individuals are in control of their own data, aligns with the interests of all the stakeholders the Commission is seeking to balance.

Companies that collect personal data win: by sharing and synchronizing with people’s personal data stores, companies get far more accurate information. New services can be offered on data sets, including data not previously permitted to be used or accessed for providing services (telephone call logs or mobile geolocation data, for example). And innovation in the PDS and apps marketplace would be a huge new area of development for startups and large companies alike.

People win: by collecting, managing, and authorizing access to their own personal data, users will increase their trust in and use of digital services. This empowers people to work together in communities and groups more efficiently and effectively. Users will be able to see themselves reflected in their data and to participate in transactions more directly with vendors.

Regulators, advocates, and legislators win: by protecting people with new frameworks that also encourage innovation and new business opportunities, government can give people useful tools to interact with agencies, because users' identities are trusted.

Thank you for the opportunity to share our world view on personal data. Attached below please find our specific answers to the White Paper questions.

Kaliya Hamlin, Personal Data Ecosystem Collaborative Consortium
director@personaldataecosystem.org
@identitywoman
Mobile: 510-472-9069

Mary Hodder, Citizen, User Advocate, Founder and Entrepreneur
mary@hodder.org
@maryhodder
Mobile: 510-701-1975

Co-Signers:

Sarah Allen, CEO Blazing Cloud, Inc.

Stacy Banks, Citizen

Joe Boyle, Developer

Judith Bush, Citizen

Aldo Castenada, Personal Data Ecosystem Podcast and Citizen

Jennelle Crothers, Citizen

Iain Henderson, Mydex

Emily Howe, Citizen

Dwight Irving, Ph.D.

Joe Johnston, Respect Network

Liana Leahy, Citizen

Kevin Marks, Microformats.org

Drummond Reed, Respect Network

Appendix: QUESTIONS FOR COMMENT ON PROPOSED FRAMEWORK

•	Are there practical considerations that support excluding certain types of companies or businesses from the framework – for example, businesses that collect, maintain, or use a limited amount of non-sensitive consumer data?

•	Is it feasible for the framework to apply to data that can be “reasonably linked to a specific consumer, computer, or other device”?

•	How should the framework apply to data that, while not currently considered “linkable,” may become so in the future?

•	If it is not feasible for the framework to apply to data that can be “reasonably linked to a specific consumer, computer, or other device,” what alternatives exist?

•	Are there reliable methods for determining whether a particular data set is “linkable” or may become “linkable”?

•	What technical measures exist to “anonymize” data and are any industry norms emerging in this area?

Companies should promote consumer privacy throughout their organizations and at every stage of the development of their products and services

Incorporate substantive privacy protections

•	Are there substantive protections, in addition to those set forth in Section V(B)(1) of the report, that companies should provide and how should the costs and benefits of such protections be balanced?

•	Should the concept of “specific business purpose” or “need” be defined further and, if so, how?

•	Is there a way to prescribe a reasonable retention period?

•	Should the retention period depend upon the type or the sensitivity of the data at issue? For example, does the value of information used for behavioral advertising decrease so quickly that retention periods for such data can be quite short?

•	How should the substantive principles set forth in Section V(B)(1) of the report apply to companies with legacy data systems?

•	When it is not feasible to update legacy data systems, what administrative or technical procedures should companies follow to mitigate the risks posed by such systems?

•	Can companies minimize or otherwise modify the data maintained in legacy data systems to protect consumer privacy interests?

Maintain comprehensive data management procedures

•	How can the full range of stakeholders be given an incentive to develop and deploy privacy-enhancing technologies?

•	What roles should different industry participants – e.g., browser vendors, website operators, advertising companies – play in addressing privacy concerns with more effective technologies for consumer control?

Companies should simplify consumer choice

Commonly accepted practices

•	Is the list of proposed “commonly accepted practices” set forth in Section V(C)(1) of the report too broad or too narrow?

•	Are there practices that should be considered “commonly accepted” in some business contexts but not in others?

•	What types of first-party marketing should be considered “commonly accepted practices”?

•	Even if first-party marketing in general may be a commonly accepted practice, should consumers be given a choice before sensitive data is used for such marketing?

•	Should first-party marketing be limited to the context in which the data is collected from the consumer?

•	For instance, in the online behavioral advertising context, Commission staff has stated that where a website provides recommendations or offers to a consumer based on his or her prior purchases at that website, such practice constitutes first-party marketing. An analogous offline example would include a retailer offering a coupon to a consumer at the cash register based upon the consumer’s prior purchases in the store. Is there a distinction, however, if the owner of the website or the offline retailer sends offers to the consumer in another context – for example, via postal mail, email, or text message? Should consumers have an opportunity to decline solicitations delivered through such means, as provided by existing sectoral laws?


•	Should marketing to consumers by commonly-branded affiliates be considered first-party marketing?

•	How should the proposed framework handle the practice of data “enhancement,” whereby a company obtains data about its customers from other sources, both online and offline, to enrich its databases? Should companies provide choice about this practice?

Practices that require meaningful choice

General

•	What is the most appropriate way to obtain consent for practices that do not fall within the “commonly accepted” category?

•	Should the method of consent be different for different contexts?

•	For example, what are effective ways to seek informed consent in the mobile context, given the multiple parties involved in data collection and the challenges presented by the small screen?

•	Would a uniform icon or graphic for presenting options be feasible and effective in this and other contexts?

•	Is there market research or are there academic studies focusing on the effectiveness of different choice mechanisms in different contexts that could assist FTC staff as it continues to explore this issue?

•	Under what circumstances (if any) is it appropriate to offer choice as a “take it or leave it” proposition, whereby a consumer’s use of a website, product, or service constitutes consent to the company’s information practices?

•	What types of disclosures and consent mechanisms would be most effective to inform consumers about the trade-offs they make when they share their data in exchange for services?

•	In particular, how should companies communicate the “take it or leave it” nature of a transaction to consumers?

•	Are there any circumstances in which a “take it or leave it” proposition would be inappropriate?

•	How should the scope of sensitive information and sensitive users be defined and what is the most effective means of achieving affirmative consent in these contexts?


•	What additional consumer protection measures, such as enhanced consent or heightened restrictions, are appropriate for the use of deep packet inspection?

•	What (if any) special issues does the collection or the use of information about teens raise?

•	Are teens sensitive users, warranting enhanced consent procedures?

•	Should additional protections be explored in the context of social media services? For example, one social media service has stated that it limits default settings such that teens are not allowed to share certain information with the category “Everyone.” What are the benefits and drawbacks of such an approach?

•	What choice mechanisms regarding the collection and use of consumer information should companies that do not directly interact with consumers provide?

•	Is it feasible for data brokers to provide a standardized consumer choice mechanism and what would be the benefits of such a mechanism?

Special choice for online behavioral advertising: Do Not Track

•	How should a universal choice mechanism be designed for consumers to control online behavioral advertising?

•	How can such a mechanism be offered to consumers and publicized?

•	How can such a mechanism be designed to be clear, easy-to-find, usable, and understandable to consumers?

•	How can such a mechanism be designed so that it is clear to consumers what they are choosing and what the limitations of the choice are?

•	What are the potential costs and benefits of offering a standardized uniform choice mechanism to control online behavioral advertising?

•	How many consumers would likely choose to avoid receiving targeted advertising?

•	How many consumers, on an absolute and percentage basis, have utilized the opt-out tools currently provided?

•	What is the likely impact if large numbers of consumers elect to opt out? How would it affect online publishers and advertisers, and how would it affect consumers?

•	In addition to providing the option to opt out of receiving ads completely, should a universal choice mechanism for online behavioral advertising include an option that allows consumers more granular control over the types of advertising they want to receive and the type of data they are willing to have collected about them?

•	Should the concept of a universal choice mechanism be extended beyond online behavioral advertising and include, for example, behavioral advertising for mobile applications?

•	If the private sector does not implement an effective uniform choice mechanism voluntarily, should the FTC recommend legislation requiring such a mechanism?

Companies should increase the transparency of their data practices

Improved privacy notices

•	What is the feasibility of standardizing the format and terminology for describing data practices across industries, particularly given ongoing changes in technology?

•	How can companies present these notices effectively in the offline world or on mobile and similar devices?

•	Should companies increase their use of machine-readable policies to allow consumers to more easily compare privacy practices across companies?

Reasonable access to consumer data

•	Should companies be able to charge a reasonable cost for certain types of access?

•	Should companies inform consumers of the identity of those with whom the company has shared data about the consumer, as well as the source of the data?

•	Where companies do provide access, how should access apply to information maintained about teens? Should parents be able to access such data?

•	Should access to data differ for consumer-facing and non-consumer-facing entities?

•	For non-consumer-facing companies, how can consumers best discover which entities possess information about them and how to seek access to their data?

•	Is it feasible for industry to develop a standardized means for providing consumer access to data maintained by non-consumer-facing entities?

•	Should consumers receive notice when data about them has been used to deny them benefits? How should such notice be provided? What are the costs and benefits of providing such notice?


Material changes

•	What types of changes do companies make to their policies and practices and what types of changes do they regard as material?

•	What is the appropriate level of transparency and consent for prospective changes to data-handling practices?

Consumer education

•	How can individual businesses, industry associations, consumer groups, and government do a better job of informing consumers about privacy?

•	What role should government and industry associations have in educating businesses?